Takeaways from Two Days Spent with AT&T

AT&T hosted two days of meetings with industry analysts last week, covering a broad array of topics across its consumer and enterprise businesses. Having attended many such tech industry analyst meetings over the years, I found the discussions last week to be unusually wide-ranging and forthright. AT&T has made some pretty significant bets over the past 2-3 years — acquiring DirecTV; investing heavily in the Mexico market; moving aggressively to re-architect its network; and increasing its commitment to building out fiber, among other initiatives. Although Wall Street might be unexcited about big telco stock prospects in the near term, I came away with a fairly positive impression of AT&T’s long-term direction.

What follows are some big picture takeaways across some of the major business segments.

DirecTV
When AT&T announced its plan to acquire DirecTV in 2014, I was skeptical. It was clear then, and it remains so now, that traditional pay TV, a la the big bundle, is a flat-to-declining business. Rising content fees have trimmed margins. Having spent time speaking with AT&T executives in recent months, I have started to warm to the deal. First, AT&T has moved quickly to leverage its two largest (and not very fast-growing) businesses as an important customer acquisition and retention tool. It’s a believer in the bundle, offering healthy incentives to attract the 15 million DTV customers who don’t have its wireless service and the 21 million wireless customers who don’t have DTV. This has started to show some positive results. Second, in the hyper-competitive wireless market, where the deltas on network quality and price have narrowed in recent years, content and TV Everywhere could be an important differentiator, especially given Verizon’s recent moves with AOL and Yahoo, and Comcast’s potential entry into the wireless business. AT&T has also renegotiated many of its content deals so they are technology agnostic.

The next year will be critical, however. Within the next several weeks, AT&T will launch DirecTV Now, its skinny, over-the-top bundle aimed at Millennials. It is premature to speculate on the prospects for Now, since AT&T did not reveal the content lineup. The skinny bundle market is becoming a crowded space and successes so far have been modest. AT&T is also spending big sums to revamp DTV’s back office operations, improve its somewhat dated-looking user interface, and steadily roll out more contemporary set-top boxes. Part of the grander plan is to use big data to deliver more relevant, targeted advertising to DTV subscribers. AT&T is firing on a lot of cylinders here – much hinges on execution.

Mexico
AT&T has moved quickly in Mexico since it acquired Iusacell and NII Holdings in 2015. Mexico under-indexes its peers in wireless penetration, network quality, and level of competition. Deploying LTE across the sprawling mass of Mexico City in barely a year is no small accomplishment. There is no question Mexico is an important growth opportunity in the longer term, and offering free roaming to U.S. and Mexico subscribers is a good differentiator. The risk is the significant level of investment required, given Mexico’s still-high economic and political beta.

Fiber and AirGig
AT&T has increased its commitment to fiber deployment, which is somewhat of a contrast to the plans of some of its competitors such as Google and Verizon. Part of this plan is a commitment to bring fiber to 12.5 million locations by 2019. But the bigger deal is the plan to deploy more fiber closer to a lot of homes, businesses, and MDUs. Why the evolution in strategy from only a couple of years ago? AT&T claims the costs of deploying fiber have come down significantly, so the economic equation is more favorable. The other catalyst is the need for big fiber backhaul pipes to support the growth of video, AR/VR, and even 5G wireless.

AT&T also provided a few details on its innovative Project AirGig, which uses antennas sitting above (but not on) utility poles to regenerate millimeter waves from station to station, potentially delivering fixed wireless broadband at 1 Gbps speeds. AirGig is to AT&T what the self-driving car is to Google: a ground-breaking, still experimental technology, ten years in the making, that could radically alter the economics of backhaul and small cells. This is a 2020 thing, not a 2017 thing.

Network
AT&T has been at the forefront of the move to a software-defined network, having already virtualized 30% of network functions, with plans to get to 70% by 2018. Why is this important? Network service providers have realized for some time that the monolithic, hardware-centric network approach is not sustainable from the standpoint of capacity, agility, and economics. Most major telcos are on roughly the same page with respect to where their network architecture needs to go. Nevertheless, it is a huge transformation in terms of approach, what it means for vendor relationships, the skill sets required of employees, and, ultimately, the move to new models of how the network will be monetized.

AT&T was a little more measured about 5G. On the one hand, network executives are excited about 5G’s potential to deliver the faster speeds and lower latency required for certain devices and applications. On the other hand, there was a dose of realism that 5G is a 2020-and-beyond thing, and deployment will be in pockets, with LTE still forming the primary coverage layer for the foreseeable future. I am of the view that LTE still has a lot of legs. The LTE Advanced Pro roadmap, plus opportunities in unlicensed bands, can deliver significant improvements in network performance and capacity over the next few years, with many of the attributes envisioned for 5G.

Mobile
Mobile comprises more than half of AT&T’s total revenues, but the traditional cellular market is showing signs of maturing. This is why both AT&T and Verizon have made some pretty aggressive bets over the past two years to provide the source of the next $10+ billion in growth. AT&T’s key strategic pillar is mobile entertainment, leveraging the DirecTV asset to deliver video to subscribers wherever they are and on any device. AT&T says it has the mobile network capacity to accommodate what could be a 7-10x increase in data traffic over the next 3-4 years. One key question is whether it can deliver all that mobile video profitably. Remember, its wireless services business has had pretty impressive margins over the years. Another question is whether AT&T needs unique content in order to deliver a differentiated value proposition. There was some tacit acknowledgment this might be the case, but the strategy around this still appears to be taking shape.

IoT is another area of major emphasis and potential. AT&T has built a large organization and devoted significant resources to IoT. The company has seen some solid success in the connected car segment. Although AT&T will capture its fair share of the ‘billions of connected devices’, it is still difficult to gauge the dimensions and timing of the opportunity and which sectors will drive it. IoT is a different type of business: base hits and maybe a couple of doubles, but few triples or home runs.

One disappointment was on the device side. AT&T did not lay out a compelling vision of its device and wearables roadmap, other than being the channel for lots of iPhones and Galaxies. I would like to have heard more forward thinking about how AT&T might deliver a differentiated experience on those devices, about efforts to partner with OEMs to create exciting devices that help AT&T execute on its video and IoT vision, or about plans to offer greater vendor diversity.

AT&T has held these industry analyst gatherings for several years running, so it’s an interesting benchmark on the state of the company and of the industry. This year, I find AT&T with a more finely honed strategic direction, focused on becoming a premium mobile entertainment company and evolving its network to deliver on that vision. The next stage will be very focused on execution, and how a transformed telco fares in a broader competitive landscape.

The First Time Google Alerted Me to Leave for an Appointment

The first time Google alerted me to leave for an appointment, it was one of those “Aha!” moments. I thought to myself, “That’s amazing!” as I realized it was smart enough to check my calendar, learn where and at what time my appointment was, calculate the time I needed to leave to arrive on time, and then send me an alert to depart.
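
The mechanics behind that alert boil down to simple arithmetic: take the appointment time and place from the calendar, estimate travel time from your current location, and work backwards. Here is a minimal sketch of that logic in Python, with hypothetical hard-coded inputs standing in for the calendar and traffic services an assistant would actually query; the function name and parameters are illustrative, not any real Google API.

```python
from datetime import datetime, timedelta

def compute_departure_alert(event_start: datetime,
                            travel_time: timedelta,
                            buffer: timedelta = timedelta(minutes=5)) -> datetime:
    """Return the time at which to alert the user to leave for an appointment."""
    # Work backwards from the appointment: subtract travel time plus a small cushion.
    return event_start - travel_time - buffer

# Hypothetical example: a 2:00 PM appointment with a 35-minute drive.
# In a real assistant, the event would come from the calendar and the travel
# time from a maps/traffic service; both are hard-coded here for illustration.
event_start = datetime(2016, 10, 7, 14, 0)
travel_time = timedelta(minutes=35)
alert_time = compute_departure_alert(event_start, travel_time)
print(f"Time to leave: {alert_time:%I:%M %p}")  # -> Time to leave: 01:20 PM
```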

But how quickly we adapt and take much of this for granted. Now there are numerous apps that make decisions and try to think for us. They monitor our email, calendar, and location and, using artificial or some other sort of intelligence, take actions we’d normally take on our own.

TripIt will detect when you’re emailed an itinerary from an airline and add the flights to your calendar. Waze will determine when a traffic jam occurs and route you around it. Google will determine when an appointment invite is emailed to you and put it on your calendar. So many apps now have this level of intelligence to create an action without our intervention.

I’m finding, however, that as this happens more frequently, errors are becoming common. I need to intervene more often and make corrections because the assumptions these apps make are usually based on fairly primitive logic. Not every mention in an email is meant to be actionable. Not every itinerary is yours and needs to be scheduled. While the apps try to be helpful, there are many times they get it wrong, perhaps 25% of the time.

As an example, a misaddressed email sent to me contained a receipt for train tickets bought in the UK, and TripIt added it to my calendar. I received a Google Now alert to leave for a meeting, even though it was a phone meeting whose emailed invitation happened to contain an address. Sometimes I find an action is created twice because more than one app is monitoring the same thing. A plane flight will often show up twice on my calendar, put there by both TripIt and Google. And now the Calendar on iOS 10 is starting to look for events to add automatically. Yes, I can go back to each app’s settings and turn some of this off or on. But as more apps do this without asking, it’s time-consuming. Clearly, the simple logic that awed me the first time won’t be sufficient unless it becomes much more intelligent.

It’s not that big a deal right now, since we can make these manual corrections and learn which apps work well and which do not. But it shows there are some serious challenges the software has to overcome to work reliably enough that we won’t turn it off and avoid using it altogether. Some apps will need to be careful and conservative; others will be very aggressive and make many more assumptions. The burden is on us to filter out what doesn’t work and use what does.

While these new capabilities are designed to simplify our lives, for now they seem to require more time to manage because of the unintended consequences.

What needs to be done? The software needs to learn our habits and preferences and tailor its actions to each individual. It needs to query us sometimes before taking action. When Waze sends us on a circuitous local route with numerous turns through a sketchy neighborhood to avoid freeway congestion, it needs to tell us what it’s doing and what to expect, and ask us whether we really want to do it.

This is an exciting area with lots of potential to think for us, but it’s still in its infancy and there’s a lot of room for improvement.

Unpacked for Friday October 7th, 2016

Samsung Buys Viv: Let the Battle Begin! – by Carolina Milanesi 

On Wednesday, news broke that Samsung had agreed to buy Viv, the AI and assistant platform founded by the creators of Siri. Viv will continue to operate as an independent company that provides services to Samsung. While the acquisition was led by the Mobile group, you can see how Viv could appeal to other Samsung lines of business such as TV and home appliances.

From a technology perspective Viv differentiates itself in two key ways:

  • It is interconnected, meaning information flows more freely so queries can be more complex and conversational, closer to how humans actually speak to one another
  • A software feature called “dynamic program generation”, which allows Viv to understand what the user wants and create programs on the fly to handle those queries

Given Viv is not in any commercial products as yet, I will focus my discussion on what this means for the companies and their competitors.

Samsung has been trying to limit its dependence on Google services for a while. Samsung has also been competing with Apple pretty much feature for feature. Given their SmartThings play as well, the Viv acquisition should really not come as a surprise to anyone. The timing of the announcement, a day after Google’s Sundar Pichai announced the company is moving from a mobile-first world to an AI-first one, could not have been better. Even more so after we heard Google Assistant will, at least for now, remain exclusive to Google’s new Pixel phone instead of being pushed to any Android phone running Android 7.1.

For Viv, going with the top smartphone vendor in the world also makes sense, although I would urge them not to see every phone Samsung sells as a potential host for a personal assistant.

While everything makes sense on paper, how this acquisition will work remains to be seen. I struggle to see Samsung aiming to compete with the digital assistants Google, Apple, Amazon, and Microsoft have been talking about, ones that require powerful back ends and tons of data. Samsung, for me, remains first and foremost a hardware company and I would expect them to use Viv to add value to their hardware. Lately, Samsung has been pushing the ecosystem of products it has, from phones to tablets to wearables to TVs. Having a strong voice UI that bridges those devices could be a strong push in the “better together” story.

Twitter looks doomed to go it alone or be snapped up by Salesforce – by Jan Dawson

Rumors started some time ago that Twitter was becoming an acquisition target, largely because its share price was so depressed that it was becoming cheap. Since that time, a number of possible suitors have emerged and the share price has risen sharply, only to fall precipitously again as those suitors have fallen away one by one. At the end of trading on Thursday, its share price was more or less identical to a month earlier, after something of a roller-coaster ride.

The big challenge with acquiring Twitter is that the synergies are pretty thin for almost any potential acquirer. Each of the companies floated has had some potential synergy, but the combination of the rising price and the downsides to an acquisition has made it tough to justify the purchase. For the ad-centric acquirers such as Google, the ad synergies seemed the most obvious drivers, but Twitter is still such a small business it wouldn’t make much difference to scale for any of the big players. Yes, a Google acquisition would help Twitter by giving it far more scale and a bigger audience, but it’s far less clear a Twitter acquisition would help Google.

Media buyers were among the other strong suitors, notably Disney. The rationale here seems to have been more about distribution and leveraging Twitter as a channel for content. The big problem here is any content owner who wants to use Twitter to distribute content today can already do so – as demonstrated by the NFL, Bloomberg, Cheddar TV and others. Perhaps the integration might deepen a little under an acquisition scenario, but not by much. On the other hand, an acquisition by one media company would make every other media company reconsider its use of Twitter as a distribution channel, for fear of playing second fiddle to, and sharing usage data with, a competitor. Disney’s pullout seemed inevitable, but it was also smart.

Salesforce appears to be the last big company still considered to be in the running but here too the rationale is weak. Salesforce ostensibly wants Twitter for its customer service and data possibilities, but it feels like Salesforce could accomplish much of what it might want by integrating with Twitter rather than absorbing it. One wonders whether this is something of a vanity acquisition for Marc Benioff and Salesforce, as much as a rational one, which is always a dangerous starting point for such a thing.

Lastly, and perhaps most importantly, it’s also not clear that any of these acquisitions would be good for Twitter as a product. None of the companies considered to be in the running had a pedigree that suggested it would somehow run Twitter better than it’s currently being run. And that’s what Twitter currently needs more than anything: better strategy and execution. Yes, ad synergies would help on the monetization side but it’s user growth that’s the real problem (and the real reason for the depressed share price). It’s looking more and more likely Twitter may have to simply go it alone at this point, unless Salesforce really does come through. Neither option seems all that appealing, however.

Facebook Pushes VR Forward – by Ben Bajarin
Facebook announced some interesting things around VR at the Oculus developer conference yesterday.

The first was a social VR experience and a set of developer tools around having people’s avatars join them in virtual reality. Mark Zuckerberg did the demo and had two colleagues on stage in VR with him during the demo.

[Screenshot: Mark Zuckerberg’s social VR demo on stage at the Oculus developer conference]

There are a number of things going on here that signal some interesting directions for VR. First, notice they aren’t totally in virtual reality. This is basically mixed reality, where you blend the physical and the virtual. Cameras capture the room and that video is shown in the headset to the user, while digital elements are placed over the physical world. The avatars are digital and the room is the physical element. The demo went on to put all three people together underwater and on the moon, all the while interacting with each other across any number of virtual spaces. Imagine you are at a sports venue, concert, park, another world, etc., with your friends in VR displayed as their avatars. While the avatars look like cartoons today, that will not always be the case. At some point, your friends or family will look virtually real in this space as you interact with them from afar.

Second, facial expressions are part of the experience too. This gets interesting, as the headset will look to capture the person’s facial expressions in real time and display them virtually so you can see their reactions. Body capture will come in the future, so you would see not only your friends’ or family members’ facial expressions but all of their body movements as well.

Every major player in VR hardware or components talks about the additional facial and body capture which will be enabled in the future, so we can have lifelike representations of ourselves inserted virtually into any environment we choose.

The second announcement was a standalone VR headset that will be better than a Gear VR device but not as capable as headsets tethered to a high-end gaming PC.

[Screenshot: Facebook’s standalone VR headset prototype]

This product, code-named Santa Cruz, has all the major components integrated into the headset. There aren’t many technical details available, like processor or GPU specs, but this is the direction the industry needs to go to take VR mainstream. Wireless, head-mounted VR units are where the market is headed, but we still need a lot of development to make it happen.

Overall, we are making great progress in VR and even in mixed reality technology. Mixed reality is what I think has the biggest potential, because spaces that blend the physical and the virtual make the most sense for us to spend significant amounts of time in.

Tech Companies Need to Show, Not Tell, When it Comes to AI

This week saw Google announce a variety of new hardware products. But the main theme of Google’s event wasn’t actually hardware but software: specifically, artificial intelligence. Microsoft’s Ignite event last week also made AI a major focus, with Satya Nadella’s keynote titled “Democratizing AI”. Even Apple, not normally prone to talking about the technology behind its products, has started sprinkling references to machine learning and related concepts into its keynotes. It’s clear AI and machine learning are hot topics but there’s a risk these concepts become the main point. What they do for customers should be the real focus.

Ahead of Google’s event this week, I was approached by a number of reporters wanting to discuss the possible announcements and their relevance. One particular reporter asked about the AI angle and this was my response:

The big thing about AI is no-one outside our circles actually cares. What they do care about is whether their devices and services do useful things, but whether it’s using AI, machine learning, or convolutional neural networks is totally irrelevant. So Google, Microsoft, and Apple can talk up AI at these events all they want, but they’re largely talking to the tech press and a few nerds when they do so, not to end users. The old moviemaking maxim “show, don’t tell” definitely applies here when it comes to real people.

The fact is, big tech companies always have several audiences when making big announcements or holding events and they speak to each of these audiences differently. Among those audiences are:

  • Consumers – the end users and often the purchasers of products and services
  • Enterprise decision makers – sometimes the purchasers of technology that ends up in the hands of end users
  • Media – although this is becoming a little less true over time in the era of live streaming events, still the main conduit through which news about announcements filters through to consumers and enterprise decision makers
  • Financial analysts – those who have to build models and ultimately advise clients on whether to buy or sell stock in the company making the announcements
  • Observers – all the others who are interested in what’s being announced but don’t belong in the categories above – including industry analysts like me, who have to form an opinion about the company professionally, but also hobbyists and tech enthusiasts who are interested in trends, strategies, and so on

The problem with the current AI obsession is all the big tech companies are focusing, to a great extent, on every audience but that first one – consumers. As I said to the reporter, consumers don’t care about AI – many of them probably think it’s a movie starring Jude Law and the kid from Sixth Sense. What consumers care about is the output of all that AI and machine learning – the features on their phones and in their online services which actually make their lives better.

Whether there’s AI behind those features is utterly irrelevant from a user perspective. But the fact your phone seems smarter than mine, or my email service seems better at filtering out spam or suggesting automatic responses than yours, or the digital assistant on my new phone seems to understand what I say better than my old one – that’s what really matters.

Yes, AI, machine learning, and plenty of other technologies are behind many of the advances in the technology we use every day. But that doesn’t mean we as consumers need to know that, any more than I need to know how the plane I’m writing this post on is staying in the air. It may be interesting to a subset of customers but that’s not nearly the same thing.

And that’s where my parting comment in the quote comes in. In the moviemaking industry, one of the key principles is “show – don’t tell”. What that means is there’s a tendency, when providing exposition in a movie, to have a character talk through everything the audience needs to know. But that’s far less effective than showing the audience what it needs to know through characters’ actions rather than their words. That’s harder to do – it requires more creativity and may force you to pare back some of the complexities in your story if you find it impossible to explain it to your audience using action rather than words.

That principle should be taken to heart by tech companies too. If you’re unable to demonstrate the practical benefits of your AI chops to users, your story needs work. And your AI story will be a lot more powerful if you demonstrate it with actions rather than having to describe it with words. Especially in a consumer-facing context, tech companies need to show first, tell second (if they tell at all). Demonstrate the great new features and then (if necessary) talk about how it’s done afterwards.

Now, that’s not to say talking about AI or machine learning is always wrong. This is where we come back to that list of audiences from earlier. For a number of those audiences, investment in new behind-the-scenes technologies is a really important factor in making judgments about your company. Financial analysts, the media who shape narratives about companies, and others may all want to know more about these things. But those selling technology should never make telling that story the primary focus, especially in events which are designed for an end user audience.

Service Providers Still Act Like Utilities

If you ever want to enliven a cocktail party filled with executives from the telecommunications or cable industry, just start talking about dumb pipes. As in, “your service doesn’t offer anything more than a simple connection from my devices to the internet content I want—it’s a dumb pipe.”

Of course, most of you will never have to worry about going through such an awkward social encounter, but if you do—that zinger is bound to get things going.

All kidding aside, the notion that telco carriers and other service providers have provided little more than basic connectivity has been an industry hot button for quite some time. Even now, despite a number of efforts to spice things up, most telcos and cable service providers are seen as companies that provide a very indistinct connectivity service that people only reluctantly pay for.

The primary differentiators for competitive players in this space are price, price and, oh yeah, price, with maybe a bit of coverage or service quality thrown in for good measure. It’s little wonder that many consumers hold these companies in such low esteem—they just don’t see the value in the services beyond basic connectivity. It’s also not surprising that so many people are looking at cord-cutting, cord replacement, or other options that attempt to cut these service providers out of the picture.

But it doesn’t have to be this way.

The amount of data that telco and cable service providers have access to should allow them to generate some very interesting, useful and valuable services that consumers should be happy to pay for. Now, admittedly, there are some serious privacy and regulatory concerns that have to be taken into consideration, but with appropriate anonymizing techniques, there are some very intriguing possibilities.

For example, by leveraging new machine learning or artificial intelligence algorithms, service providers should be able to aggregate data usage patterns to help power everything from traffic analysis, to breaking news detection, to program recommendation engines, and much more.

At a more basic level, who better to manage things like my contacts, or offer an intelligent, unified communications service that lets me see and manage all my various forms of communication, than the companies over whose network those messages travel?

Ironically, for those who are particularly privacy sensitive, the notion of paying for a highly secure, completely anonymized, truly “dumb pipe” could also be an attractive option. While certain levels of privacy and security should be expected (nay, demanded) from service providers, the notion of paying for extra security is something I believe most consumers will start to really appreciate.

More critically, there is a crying need to provide some kind of smart hub inside our homes so that we can easily see, connect, and manage all the potential connected devices and services in our homes: from smartphones, PCs and tablets, to TVs, lights, HVAC controls and even smart cars. But instead of offering an intuitive, friendly device similar to something I wrote about a few weeks back (“Rethinking Smart Home Gateways”), service providers continue to offer nondescript black boxes whose very designs belie their archaic, impenetrable means of operation.

The fundamental problem is that service providers act more like utilities than companies that offer services people are happy to pay for, such as Netflix. There’s little sense of personalization or differentiation from service providers and the aforementioned router/gateway boxes they currently force into consumers’ homes are a classic example of that utility-style of thinking. Honestly, if your power company were to put a box into your home, do you think it would look much different?

In order to break this cycle, and avoid the risk of being cut straight out of people’s lives through various types of cord-cutting/replacement mechanisms, service providers need to start thinking very differently about the types of services they offer. They need to create, discover and deliver services that people actually value, and do so in a more personal, non-utility like way.

To their credit, a number of the major US telco and cable providers are making efforts to reach these goals, but they still primarily reflect a utility mindset. To break that way of thinking, they would be wise to look at how providers of services on the Internet, whether that be someone like Spotify, Uber, or Amazon, build and sell the kinds of services consumers are more than happy to pay for. Only with that kind of out-of-the-box thinking can they truly move past their utility-driven focus and stop being little more than “dumb pipes.”

Questions for Google’s Big Event

Google is holding an event today where it’s expected to unveil several new products — a new phone, its Home speaker device, and possibly a new operating system named Andromeda. As usual, what’s leaked are specs, pictures, and some other details but none of the rationale or strategy behind these moves. As with Apple’s event last month, the most interesting questions are often the whys rather than the whats of the announcements. So here are some detailed questions I’m hoping to hear answers to at the event.

How is Pixel better than Nexus?

The single biggest question about the phone Google is expected to announce is how it’s going to be better than the Nexus phones that preceded it. We’ve seen pictures that make the phone look like a close relative of the iPhone, at least from the front, but little about the hardware and software features that will set it apart. As I wrote ahead of last year’s Google event, the Nexus phones have never enjoyed significant sales or market share, even within the Android community. They’ve served a specific need – cheap, pure Android phones for developers and a handful of enthusiasts with similar interests – but have never gone further. How is Pixel different and, more importantly, how is it better?

How will Home get to know you?

One of Google’s big messages about Home when it was first teased at I/O was that it would get to know you over time. My question, then as now, is what that really means when most Home devices will be shared by multiple people in a household. Whether roommates, cohabiting couples, or families with kids, the default setting for Home will be homes with more than one individual, not all of whom will have a Google account (have you ever tried setting up a Google account for someone under 13?). How will Home distinguish between the people in a home and learn about each of them individually? Will it use voice recognition and, if so, will that voice recognition be able to effectively distinguish between siblings with very similar voices? Will it provide generic responses to most users and customized responses only to those with Google accounts? There are lots of questions here, with associated privacy implications if information is shared with the wrong people. This is something Echo doesn’t really deal with but people will expect Home to be good at.

How will Pixel and Home be distributed?

One big question relevant to both of the hardware products we’re expecting to see announced relates to distribution. Nexus phones have rarely enjoyed broad carrier distribution and it appears Google has (predictably) struggled to get carrier distribution for the Pixel, too. Without distribution, the Pixel phone is likely dead in the water for that reason alone. Where will the Pixel phone be sold, and will Google offer carrier-like installment plans through its own site (as it does for Nexus phones on the Google Fi service)? Amazon benefits enormously from being able to market the Echo through the largest e-commerce site in the world. What will Google do to make people aware of Home and want to buy it? Where will it be sold? You can find the Echo at my local Home Depot, but only if you know where to look. That doesn’t matter much to Amazon, but similar placement would be enormously problematic for Home and Google.

Why Andromeda? And when?

The third big thing we’re expecting to see at Google’s event is a new operating system, Andromeda, which allegedly combines elements of Android and ChromeOS. The big question is why Google is introducing a third operating system rather than simply consolidating around one of its existing operating systems (preferably the massive Android rather than the marginal ChromeOS). What benefits does Andromeda offer over ChromeOS on laptops? Closely related is the question of whether Andromeda will eventually become the “one OS to rule them all” and displace not just ChromeOS but Android. If that is indeed the case, the next big question is when – when will Andromeda be available and how long will it be before it replaces Android on smartphones and tablets? If it is to coexist, that will be confusing to users and OEMs alike, so Google needs to have clear messaging here.

What’s the role of Daydream hardware?

The last element of the event is likely to be a play around Daydream, Google’s VR platform, also announced at I/O earlier this year. What we’re expecting here is hardware from Google which will presumably be closely tied to the Pixel phone and potentially available as a bundle with that device. As I see it, the role of Daydream as a platform is to provide a route to VR hardware for Android vendors not named Samsung. But it’s less clear why Google needs to produce its own hardware here, especially if it’s also trying to encourage Android OEMs to develop their own. With both the Pixel phone and the Daydream hardware, the big question is, why does Google want to compete with OEMs? We’ve seen Microsoft walk a fine line here with the Surface line, prodding its OEM partners to do better while not alienating them, but can Google do it as well? We all remember the awkward quotes from Android OEMs about the Motorola acquisition, reeking of Stockholm Syndrome, but those were reflective of the balance of power between Google and the OEMs then. How will they react now, given the current dynamics?

Behavioral Debt

I’m fortunate to be a part of several circles here in Silicon Valley that get together frequently to discuss big ideas and engage in all kinds of technology-related philosophical questions. I started sharing a concept with this group and was encouraged to flesh it out further, so I would like to introduce it to all of you for thoughts and feedback. Consider this one of those posts where you need to have your thinking cap on. I’m calling this concept “Behavioral Debt” and it explains why a company’s customers don’t do the things the company wants them to.

The simplest way to understand this is with the popular saying, “You can’t teach an old dog new tricks.” I am attempting to put more structure around this idea as it relates to the consumer tech landscape. I run into issues around behavioral debt regularly in my research on the consumer market. Companies want to know why their customers aren’t buying the new products or services they offer while their old ones seem to be all their customers are interested in. In most cases, what we observe is simply entrenched behavior that is very difficult to evolve. Once a behavior is established, debt is built up around it. The longer that behavior remains entrenched, the larger the pile of behavioral debt. And the larger the pile of behavioral debt, the more difficult it is for that customer to climb out from under it.

Let’s use a tangible example: Facebook. Facebook would like to move to a more transaction-based model for the buying and selling of goods on its platform. Here we will likely see the messy reality of behavioral debt rear its ugly head. Consumers have built up years of behavioral debt doing a few main things on Facebook. They are likely content with this reality and, when they want to buy something, they go to Amazon or some other established online merchant. Facebook wants to offer them the chance to do this on Facebook so they don’t have to leave and spend time and money somewhere else. But “you can’t teach an old dog new tricks” and I have a feeling convincing consumers to do anything more than they do today will prove quite tricky for Facebook, due to the many hours and years spent building up behavioral debt in how they use Facebook.

Similarly, Intel, Microsoft, and the PC makers would all like to sell more of the 2-in-1 PC concepts. These devices are not the cheapest machines on the market but they offer better margins. The problem is, 2-in-1 PCs sell at a fraction of the volume of notebooks. What Intel and Microsoft have not yet learned is there is a massive amount of behavioral debt built up around the PC form factor. People understand it, they are comfortable with it, and they have established workflows on it. Many of you have heard me say those who grew up with a PC have a bias for it. This bias is explained by behavioral debt.

This idea of behavioral debt also showed up recently in our mobile payments study. In markets like the UK, where consumers have been tapping to pay with their physical credit cards for years, many consumers (nearly 40%) said they have yet to embrace paying with their smartphones (which they acknowledge is safer and more convenient) because it is still easier to use their physical cards to tap and pay. Another 25% said they simply forget they can use their smartphone to pay instead of their credit card. This is a prime example of behavioral debt and showcases why changing an established and entrenched behavior is extremely difficult.

This is why we observe consumers in emerging markets or the Gen Z kids of today doing things with their smartphones and tablets many of us can’t believe. We see them do things and think there is no way they can do them without a PC. The reason the phrase “you can’t do real work on a tablet” keeps erroneously showing up is that those who use it have a ton of behavioral debt around PC-based workflows. Those who do not have PC behavioral debt are free from those biases and are able to break what seems like new ground but is entirely natural to them.

This should also be recognized by startups trying to do something similar to, but better than, what a popular service already does. We see startup after startup offering a feature, like a messaging app or a commerce store, that proposes to be better than what hundreds of millions of people already use. More often than not, these fail because, when behavioral debt is built up, the person rarely wants better; they simply want familiar.

Getting customers to break free from behavioral debt is very hard. It also seems to be very rare, given the case studies I’m finding and working through. In fact, it is so rare for consumers to fundamentally change a behavior that, when they do, it should be considered quite profound. Doing so means an acknowledgment that the new way is dramatically better, and thus the behavior change is swift. This happens less frequently than we often believe in consumer markets.

This point, circling back to Facebook, is what makes something like Snapchat so interesting. Snapchat is on pace with Facebook in the number of videos played. The main difference is that the vast majority of videos played on Facebook are not clicked on, whereas in Snapchat they are. Facebook wants and needs video to be successful, but its users just want to get on Facebook, post a picture or share something, see some posts from others, and move on. Getting video engagement has been a challenge for Facebook because of its users’ behavioral debt. With Snapchat, video was the assumed experience from the beginning. Starting fresh means starting without behavioral debt. This is why, in my opinion, Facebook must continue down the road of acquiring a family of assets which encompass the needed consumer behavior: buy Snapchat for video, Twitter for real-time news and global social communication, and whatever else springs up.

The Pull of Enterprise for Consumer Tech Companies

This week, Apple announced yet another enterprise partnership, this time with Deloitte. Next week, Facebook is expected to open its Facebook at Work product to more companies and with a tweaked business model. For Insiders, I wrote recently about Samsung’s push into the enterprise mobility space. It’s increasingly clear consumer technology companies don’t just want to stay in the consumer box. They’re quite keen to take a stake in enterprise, even as erstwhile leaders in the enterprise device space like BlackBerry decide to exit the market. It’s worth exploring why and what their prospects are.

The fastest-growing segment in mature smartphone markets

Less than a year ago, in reporting on Apple’s September 2015 quarter, Tim Cook said:

We estimate that enterprise markets accounted for about $25 billion in annual Apple revenue in the last 12 months, up 40% over the prior year and they represent a major growth vector for the future.

During that same timeframe, Apple’s total revenue was around $276 billion, so enterprise markets accounted for around 9% of all of Apple’s revenues. Bearing in mind several Apple categories aren’t even really applicable in the enterprise (including the iTunes store, iPods, Beats, and the like), the share within addressable categories is likely even higher. A year later, the 9% figure is probably in the teens, since Apple’s overall revenue growth was 28% that year and has been flat or negative since.

The reality is, in the context of mature and increasingly saturated consumer smartphone markets, the enterprise represents a large untapped market for smartphone sales. And, for a company like Apple, also for iPad, Mac, and even Apple TV sales, given its historically low penetration of the computer and conference room display markets. The challenge for Apple as first and foremost a consumer technology company is that it doesn’t have the skills or other assets to make complex sales in the enterprise, especially when those sales need to be tied to consulting, app development, and broader business transformation to be effective.

This, of course, is where Apple’s set of four major enterprise partnerships comes in. IBM has developed over 100 apps for iOS, Cisco networking optimizations are built into iOS 10, and SAP has created SDKs that allow software developers to tap into its software from iOS. The Deloitte relationship takes this a step further, with a major consulting organization selling business process transformation capability closely tied to iOS devices. While Apple itself has focused on making it quicker and easier to deploy fleets of iPhones and iPads in the enterprise for generic productivity use, these partnerships go beyond that and focus on the more customized, process- and industry-specific use cases many enterprises need to support.

As I mentioned, Apple is not the only company doing this – Samsung has made a big investment in enterprise mobility and security around its smartphones in order to facilitate a push into the enterprise and will be pushing even deeper as it seeks new growth opportunities while its consumer business grows more slowly.

Consumerization of IT the enabler

The enabler for all this is what’s often been called the consumerization of IT. There was an interesting transition that happened from the time I first started working in the 1990s through the 2000s. When I first started working in an office, everything there was technologically better than what I had at home – I experienced things like email, broadband internet, and more first in the workplace and only later at home or in college. But in the mid-to-late 2000s, things began to change. I experienced technology like the new breed of smartphones, mobile apps, web apps, and social networking first in a personal context. These things tended to arrive at work only as people like me brought them there and tried to make them fit in corporate settings. There had been earlier examples – executives bringing personal laptops to work being an obvious one – but those had remained the exceptions rather than the rule.

The iPhone was arguably at the center of all this, as one of the things that pioneered the bring your own device (BYOD) model in the workplace, but also as an enabler of many of the apps that would make the same journey later. All of this fomented a sea change in purchasing patterns around technology, in which the buyers went from being heads of IT departments to being end users. The same transition later enabled apps like Dropbox, Yammer, Google Docs, and Evernote to take hold in the enterprise in a very different way from the apps they often displaced – starting with individual end users and only later catching the attention of IT departments.

Consumerization of IT goes back the other way

The interesting thing about Facebook at Work is that it’s both offensive – in the way Apple’s and Samsung’s forays into the enterprise have been – and defensive at the same time. Whereas saturating smartphone markets are the driver for Apple and Samsung in the enterprise, it’s the saturation of leisure time that’s driving Facebook to look at people’s working hours as a possible new addressable market. But it’s also the threat from a new breed of enterprise applications coming back into the consumer market that has Facebook operating from a more defensive posture.

Slack is one of the latest and most successful of a breed of enterprise apps that has piggybacked on the consumerization of IT trend, borrowing heavily from consumer app design and user interfaces, and enabling individual employees to sign up without any sort of corporate blessing. End users love it but, as a result, many are using Slack in settings for which it was never intended – planning dinner with their spouses, discussing weekend plans with friends, or managing PTA meetings with fellow parents and teachers. After roughly a decade of tech going from the consumer world into the enterprise, we’re now seeing some of this technology going back the other way and that’s the threat to Facebook. Having failed to capture people’s attention in the enterprise proactively, Facebook now finds itself having to do so reactively as a bulwark against possible encroachments from enterprise apps.

We’re going to see more of all these trends – the increasing consumerization of IT within the enterprise, both from a devices and software perspective, the bleeding back into the consumer world of certain functionality, and the efforts from consumer technology companies to push into the business world as both an offensive and defensive strategy. That requires a major strategic and cultural shift on the part of many of these companies and they’ll inevitably need help from established players in those markets along the way. Facebook seems to think it can go it alone. It will be fascinating to see if it can succeed.

Re-Evaluating the Wearable Market

Here we are, a few years into the market for wrist-based wearable technology and I thought it would be helpful to check in on what we are seeing.

Unquestionably, the market for smart wearables (like a smartwatch) and basic wearables (like a Fitbit Charge HR) remains one solely focused on health and fitness enthusiasts. The problem with the market today is it simply is not very large compared to other categories. Based on a health study we did, where we were able to segment consumers around some health and fitness-related themes, we concluded the total addressable market for health and fitness tech is the roughly 18% of consumers who fit the target profile. In absolute terms, if 18% of consumers in more mature markets are the only targets for these products, we are talking about a market size of only around 200-250 million people. Not bad, but not huge. In fact, having reconciled the market for these types of wearables, including smartwatches, it may not be as large as I thought initially. It’s one of several scenarios on market size I have gamed out.

When we look at wearables at large and try to make educated guesses or forecasts of where the market may go, we include things like smart clothing, ear-based smart tech, enterprise-focused smart tech, and a range of other smart tech which may sit on our person. Overall, depending on how the category gets further defined, it could seem large if we just look at the top line forecast. However, it will be a highly segmented market by type of wearable.

Right now, the market is dominated by wrist-based fitness and smartwatches. I’ve been maintaining a model on this market for a while and recently updated it with how I think the Fall and Holiday quarters will go. Below is my model of wrist-based smart tech by vendor.

[Chart: wrist-based wearable shipments by vendor]

For those of you who keep a close eye on the market, you will note our estimates agree with the sentiment that Fitbit still leads the category. We do not believe this will always be the case but it is today.

Apple and Fitbit do most of the volume on a per-quarter basis, with Apple leading the category in profit and ASP. Xiaomi has been hanging in there, mostly in China, but I still maintain that, at sub-$20, it is surprising they are not selling more than they do. At that price, I’d actually consider ~3m a quarter not to be successful. This is either a criticism of the category in China or of their brand; I’m not sure which.

We see a strong holiday for Fitbit and Apple on the back of the new product lineups and aggressive promotional pricing by retailers. The health and fitness angle Apple and Fitbit are focusing on still leaves headroom to grow in this space. But I emphasize: if we can’t break out broader consumer use cases, this market will not be much larger than it is now.

With smartwatches, which we think of as general-purpose wearable computing devices, we see much more potential than with basic fitness devices. Our upside forecast of the market depends on consumers embracing the value of fitness and health as the entry point and discovering value beyond health and fitness thanks to an ecosystem which can develop once the installed base is larger. I’ll spare you the host of assumptions we are making and just show you what we believe a reasonable and educated forward-looking forecast looks like.

[Chart: forward-looking forecast for the broader wearables market]

Mind you, this includes a range of other wearable tech products, not just fitness bracelets and smart watches. But looking at the growth trend thus far and where we believe the market and vendors are heading, this is our best guess of the next few years for wearables at large.

A major key to this market’s growth is to simply get consumers to have a first experience with the product. As we examine behavior and satisfaction once a consumer tries a wearable product, we are encouraged by what we see. Enough to maintain our conviction there is something here.

It is crucial to get beyond the less-than-20% of the market interested in fitness and health products, and for smartwatches in particular to develop an ecosystem of apps which extends the use cases well beyond health and fitness. We are optimistic, with Apple Watch in particular, that the upgrade in hardware features and performance is the catalyst that gets more developers excited about watchOS and building more apps which expand the value. This will remain the largest point of focus for us over the next year as we wait and see if smartwatches, and the Apple Watch in particular, can go mainstream.

STEM: The Next Great Equalizer?

I grew up in the age of Sputnik and the race for the moon. Like most of the youth of my generation, we were challenged in school to “beat the Russians” to space and were driven by President Kennedy’s promise to have a man on the moon by the end of the decade. That speech was made on Sept 12, 1962.

As I reread that speech, I was struck by how much of it focused on the role technology played in history and President Kennedy’s vision of how it could impact our future. In that speech, given at Rice University, he said:

“To be sure, we are behind, and will be behind for some time in manned flight. But we do not intend to stay behind, and in this decade, we shall make up and move ahead.

The growth of our science and education will be enriched by new knowledge of our universe and environment, by new techniques of learning and mapping and observation, by new tools and computers for industry, medicine, the home as well as the school.”

The speech became the rallying cry for my generation and it produced tens of thousands of students who took this challenge seriously. This gave us the engineers, scientists, mathematicians and educators who not only delivered on the promise of putting a man on the moon, but also helped create the core technology to enable computing, the internet, advanced communications and modern healthcare that we have today.

In essence, this challenge to conquer space and make technology a key part of our world in the 60s and 70s became one of the great equalizers. It created the next generation of workers who studied math, science, technology and engineering, which helped drive economic growth and set the stage for the technological breakthroughs of the latter part of the last century.

But educators tell me that, by the mid-1980s, without a similar strong push by either the US government or schools to emphasize math, science, engineering and technology, we lost almost two decades of youth who chose to go into other fields of learning. During that period, kids studying these tech disciplines decreased.

It was not until the tech boom of the late 1990s we started seeing an increase of students getting degrees in math, science, and engineering. They are the ones who are now driving our current technology revolution in social media, AR/VR, advanced computing and communications and all of the technology that impacts the fabric of our lives today.

In a recent piece in Tech.pinions, I wrote about seven areas of explosive growth in tech that will drive our world and economy in the next 10-15 years.

In the article, I stated that, for us to achieve this type of growth, we are going to need millions of new workers skilled in science, technology, math, and engineering. At the moment, we just don’t have enough of these skilled tech workers to drive this explosive growth and make the vision of a connected world a reality. In fact, when I talk to big companies like Boeing, Intel, Qualcomm, etc., they all fear that, as they grow their businesses and tech demands increase, they will not have enough tech-educated staff to meet their engineering needs.

The good news is there is a very strong movement going on focused on STEM education that, like the space race of the past, has the potential to become the great equalizer for this next generation of workers. Demand for people who are skilled in science, technology, engineering, and math will only increase and these workers will be at the heart of the next major breakthroughs in technology for the first half of this century. According to Adecco, there will be 2.4 million STEM-based job vacancies in 2018 alone.

This push in STEM is led partially by the Maker Movement, which the White House and most state leaders are getting behind, and partially by an increased emphasis on STEM at all grade levels, helped by major contributions from Chevron, which has created STEM labs in over a dozen schools, and from other big companies such as Boeing, Intel, and AT&T. Even the San Francisco 49ers have pushed STEM through their STEM Leadership Institute Program.

A good primer on the Maker Movement and its importance comes from a new book by Dale Dougherty, known as the father of the Maker Faire, and Tim O’Reilly, titled “Free to Make: How the Maker Movement is Changing Our Schools, Our Jobs, and Our Minds”.

In it, Mr. Dougherty echoes some of the concerns I mention above and says, “‘Free to Make’ asks us to imagine a world where making is an everyday occurrence in our schools, workplaces, and local communities, grounding us in the physical world and empowering us to solve the challenges we face.”

Across the nation, special STEM programs are emerging inside our school systems as well as through privately funded programs springing up in communities around America. Business and educational leaders and some major politicians in the US know technology will fuel our future and understand preparing the youth of today for the jobs of tomorrow is now emerging as a major priority.

HERE: A Clear Case of Together We Are Stronger

Ahead of this week’s Paris Motor Show, HERE announced how its Open Location Platform (OLP) aims to gather real-time data from sensors on board connected vehicles to create a live picture of the road environment that will make driving safer for drivers as well as for driverless cars. To achieve this goal, HERE will start sourcing sensor data from Audi, BMW, and Mercedes-Benz, with more brands added over time.

The data provided anonymously by the car makers will be the basis of four distinct services:

  • HERE Real-Time Traffic
  • HERE Hazard Warnings
  • HERE Road Signs
  • HERE On Street Parking

These services will be made available for any auto maker, municipality, road authority, smartphone maker, and developer to license. While connected cars are still relatively few today, HERE expects other auto makers to join the current lineup and contribute their car data.

Such openness in data sharing, a first in the automotive market, clearly shows how much is at stake for car vendors.
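To make the architecture concrete, here is a minimal sketch — in Python, with invented names, since HERE has not published the OLP interfaces — of the basic idea behind a crowd-sourced hazard feed: individual, anonymized sensor events only become actionable once several distinct vehicles corroborate them on the same stretch of road.

```python
# Hypothetical sketch (not HERE's actual OLP API): pooling anonymized sensor
# reports from many vehicles into something like a hazard-warning feed.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class SensorReport:
    road_segment: str   # anonymized map-segment ID
    event: str          # "hard_braking", "traction_loss", ...
    vehicle_hash: str   # rotating anonymous ID; no account or VIN attached
    timestamp: float    # seconds since epoch

def hazard_warnings(reports, window_s=300, min_vehicles=3):
    """Flag a segment as hazardous when enough *distinct* vehicles report
    a safety-related event within the recent time window."""
    if not reports:
        return {}
    latest = max(r.timestamp for r in reports)
    per_segment = defaultdict(set)
    for r in reports:
        if r.event in {"hard_braking", "traction_loss"} and latest - r.timestamp <= window_s:
            per_segment[r.road_segment].add(r.vehicle_hash)
    return {seg: len(v) for seg, v in per_segment.items() if len(v) >= min_vehicles}

if __name__ == "__main__":
    demo = [SensorReport("A8-km12", "hard_braking", f"veh{i}", 1000.0 + i) for i in range(4)]
    demo.append(SensorReport("A8-km40", "traction_loss", "veh9", 1100.0))
    print(hazard_warnings(demo))   # {'A8-km12': 4} — one corroborated hazard, one ignored
```

The design point worth noting is that no single car’s data is valuable on its own; the service only emerges once enough vehicles contribute, which is exactly why HERE needs more brands to join.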

HERE Focuses on the Bigger Picture

HERE has come a long way since launching as phone maker Nokia’s standalone mapping brand. From a pure consumer mapping service, it built a strong enterprise business and white-labeled its maps for big names such as Amazon, Facebook, and Baidu. In 2015, HERE was bought by a consortium of German car manufacturers that included Volkswagen (Audi’s parent company), Daimler (Mercedes-Benz’s parent company), and BMW. Since then, HERE has been moving fast, closing more enterprise deals and expanding its consumer offering with HERE WeGo, which takes the brand from offering a mapping service to offering a transportation concierge service.

HERE’s mapping apps on iOS and Android have very positive reviews but uptake remains limited, especially in markets like the US. This is not necessarily because of the superiority of Google’s or Apple’s maps but because consumers tend to use what comes as a default or is linked to a popular brand. These entrenched behaviors keep Google Maps king of the castle. Trying to change that would require a lot of effort and marketing dollars, with a return on investment that would likely not be significant enough for HERE.

The recent announcement from HERE makes a lot of sense when it comes to how the OLP could become a more critical consumer enabler, albeit an invisible one. We spend so much time looking at Google and Apple, we sometimes forget consumer engagement is not only valuable when there is a direct return on your own brand. In other words, HERE does not need to become a strong consumer brand to be successful outside the enterprise.

What is at Stake?

The data cars will be able to collect will matter even more than the services themselves, both as a source of revenue and as an engagement point with users. Artificial intelligence in the car will benefit tremendously from all this information, and so will our personal assistants.

Google has been at this for years when it comes to collecting data for its maps service, both on the layout of the world and on users’ preferences and habits, including catching Pokémon. Smartphones have democratized navigation in the car, preventing the attach rate of built-in navigation from really taking off. While integrated navigation has been trickling down from luxury models to more basic ones, the premium consumers still have to pay for it and the cost of keeping maps updated continue to weigh on built-in navigation.

Semi-autonomous and autonomous cars might change the rules of engagement, with consumers starting to rely more on what comes built into the car and, over time, using their phones as a secondary screen. If you see maps and the underlying data as the intelligence that powers your smart assistant, you can see how the model HERE is setting up offers big upsides.

Of course, Google and Apple will try to own that whole experience from maps to virtual assistants, but car vendors have the opportunity to offer an alternative by integrating HERE services out of the box — or, in this case, out of the garage — and letting the virtual assistant the user prefers (Siri, Cortana, Alexa, Google Assistant) tap into the data and be the interface.

There will also be consumers who do not buy into the virtual assistant scenario but would love to have a chauffeur of sorts, in the sense of a curated, safe driving experience. This is another role car vendors might want to play. The key to success, however, is not to make these features premium and static. Innovation in the mobile world happens faster and will always offer consumers more for less.

Of course, since HERE is not making this data available only to car vendors, the opportunities to be the data engine for players wanting a piece of the connected car pie will be endless.

Social Apps Timeline and Snap’s Spectacles

I’ve often noted how many social apps seem to follow a similar thread or timeline as they build out their feature set and evolve their products. The same elements often show up in a similar order, making the evolution of such apps and services fairly predictable over time. But since many major apps have now been through most of that standard timeline, things are starting to get more interesting. Snapchat’s rebranding of itself and launch of its Spectacles product is a great example of that.

Social Apps Timeline

The diagram below shows what I consider to be the standard evolution most social apps go through over time, with increasingly rich forms of content being enabled on the app or service as users and their usage mature. Not all apps start at the top left but, wherever they start, they typically move rightward through these same steps in the same order. My original version of this timeline had just the first three steps plus third party content but, over the last couple of years, the fourth step along the main timeline — video — has become far more interesting:

[Diagram: social apps timeline]

Consider the major apps that have gone through this transition:

  • Facebook – Mark Zuckerberg has said on earnings calls and at Mobile World Congress this year, “Most of the content 10 years ago was text, and then photos, and now it’s quickly becoming videos…” Not only has Facebook evolved in terms of the kind of content people are able to share but the balance of content actually being shared has quickly moved in that direction too, which helps to explain what makes this transition compelling – the increasing richness of the content makes it more desirable and creates greater (and longer) engagement. More recently, Facebook has invested heavily in the creation and consumption of 360° and live videos too
  • Instagram – Instagram was defined in its early years by its laser focus on photo sharing (and the square format) but has since evolved several times as it added first very short videos and then lifted the length limit on videos and enabled formats other than square for both photos and videos
  • Snapchat – as with Instagram, Snapchat began with a focus on photos, not text, and evolved from there. It’s added richness to both photos and videos over time, with drawing and writing on pictures and Lenses for video most recently
  • Twitter – as a service that began with a heavy link to text messaging, Twitter was, by default, focused on verbal rather than visual content from the get-go but, over time, it too has evolved into both photography and video, with Moments combining the two media in interesting ways, and Periscope, Vine, and its various live video partnerships focused on video

As well as the direct social sharing aspects around these increasingly rich forms of content, these services have also increasingly enabled third party content from brands and content owners as well. Facebook Pages, corporate Instagram and Twitter accounts, and the Discover tab and branded Snapchat handles have all introduced content which comes from sources other than users’ friends. The nature and composition of that content increasingly mirrors the same evolution of formats as we see with user-generated content.

A break in the timeline

So far, so good. The timeline mostly works for these major social apps. But what we’ve seen over the past year or two – and again this past weekend – is a departure from the timeline: these companies are starting to break out of their apps and push into hardware, at least in the case of Facebook and Snapchat. Facebook made the move first, with its Oculus acquisition and its focus on a particularly immersive form of video (VR). That purchase was a double bet: on even more immersive video on the one hand and on a candidate for the next generation of computing platforms after smartphones on the other. Snapchat’s management, meanwhile, began positioning it as a camera company in April of this year. Snap Inc. has now officially embraced that concept as its main identity as it launches Spectacles, its first foray out of the app world and into hardware.

All of this raises questions about whether hardware becomes an inevitable next step in the timeline once companies get to the latter stages of the content evolution. Apple and other companies have famously embraced the maxim that, if you’re really serious about software, you need to build your own hardware. But is the same true for being serious about content sharing too? If so, which is more important to own – capture or consumption hardware? Facebook and Snapchat have so far made different bets on this question but I wouldn’t be surprised if we eventually see them end up in much the same place. What about Instagram, Twitter, and other social apps? Do they also need to make similar acquisitions? Instagram can obviously benefit from whatever investments Facebook makes in this area but is there something to be said for branding the hardware and making it exclusive? That’s another bet Snap is making which Facebook is not – though Facebook and Oculus will own the interface, they won’t own the content.

For all these reasons and more, the next phase of the development of social apps is going to be even more interesting than what’s come before. We’ve seen how easy it can be to replicate (at least on paper) competitors’ features – Facebook cloned Snapchat with its Poke app (which almost no one remembers) but has also more recently cloned Snapchat’s Stories feature in Instagram (apparently with more success, if my feed is anything to go by). Really good hardware, though, is tough to copy in the same way. So this could open up an interesting new competitive dynamic between these companies. That should make it really fun to watch how all this plays out over the next few years.

Mobile Payments: The Future is Here, just not Evenly Distributed

If you have made a payment at retail with your smartphone and are anything like me, you’ll feel this is the future of payments. But as the famous quote from William Gibson says, “the future is here. It is just not evenly distributed.” After conducting some research in the US, UK, and Australia, it would be hard to find a more appropriate phrase for mobile contactless payments.

Last fall, the United States went through a drastic disturbance at consumer retail thanks to the EMV shift, which moved us from swiping our credit cards to inserting them into a terminal and waiting for the transaction to complete. With the average transaction still taking 5-10 seconds, down from 15 seconds six to eight months ago, US consumers have had friction added to their checkout process. It was with this retail experience in mind that we were hopeful, last fall, that mobile contactless payments would take off. Toward the end of 2015, roughly 17% of iPhone owners had used Apple Pay, and 7% of Android owners had used Android Pay. Part of this had to do with less than 50% of the iPhone installed base in these markets having Apple Pay-capable devices. An even smaller share of Android devices in use are NFC capable. Here we are a year later, with far more NFC-capable smartphones in the market, and, interestingly, not a lot has changed.

When it comes to tap to pay terminals, the US is well behind markets like the UK and Australia. While we are still in early days with consumers paying with their smartphone in those markets as well, a majority of consumers there are already using tap to pay on a regular basis using their bank-issued card with an NFC chip in it. We decided it would be interesting to study consumers in the UK, Australia, and the US in order to see the contrast between mature contactless (tap to pay) payment markets and one like the US where it is all brand new.

We asked consumers in the US, UK, and Australia if they have ever used a form of contactless payment, defined as tapping to pay with your bank-issued card or mobile phone.

[Chart: share of consumers who have used a contactless payment method — US, UK, Australia]

As you can see, when it comes to contactless tap to pay behaviors, markets like the UK and Australia — where bank-issued cards have tap to pay functionality and the vast majority of merchants accept it — paint a very different picture than the US market. Where roughly 80% of consumers in the UK and Australia have used a tap to pay method, 80% of consumers in the US have not. Part of this has to do with minimal acceptance of contactless methods at US retail, compared to the many merchants accepting it in the UK and Australia.

To further highlight the stark differences between the US market and the UK and Australia, where a form of contactless payment is normal transaction behavior, 61% of US consumers said they are not that familiar, or not familiar at all, with any kind of contactless payment method. One solid conclusion from our research is that we still have a lot of educating to do in the US market.

Room to Grow for Mobile Payments

After studying all three markets, what I found most interesting was, first, the disparity between consumers using contactless in the UK and Australia and those not using it in the US, as outlined above. The second thing that stood out was how remarkably similar all three markets were when it came to usage of mobile contactless payments — meaning something like Apple Pay, Android Pay, or Samsung Pay.

[Chart: types of contactless transactions consumers have tried — US, UK, Australia]

The chart shows the types of contactless transactions consumers have tried in all three markets. Interestingly, while tapping to pay with your credit/debit card is an established behavior in the UK and Australia (over 50% of the market uses this method on a weekly basis), consumers in those markets have yet to fully transition their contactless payment behaviors from their credit/debit card to their smartphone, even though it is accepted almost universally in their country.

When it came to which mobile contactless payment was most popular among those who said they have used their mobile phone to tap and pay, Apple Pay was the most common, with a 62% usage share of mobile contactless methods compared to less than 30% each for Android Pay and Samsung Pay.

While we are still new to paying for goods and services with our smartphones, the future seems bright. Our research found consumers who have used Apple Pay, Android Pay, and Samsung Pay had high satisfaction levels with the experience, with speed and convenience the biggest factors in their satisfaction, and a high propensity to use it more often in the future.

Security Still the Largest Barrier for Non-Users

The sleeper story for consumers is security. While this happens to be one of the most important reasons to adopt contactless payments, it is also the one least understood by consumers. In all three markets, 40% of consumers listed security concerns about adding their credit/debit card to their smartphone as the main reason they have yet to try it, while another 29% cited not trusting that the transaction would be secure.

In an era of heightened awareness of identity fraud, merchant breaches of credit card data, and more, it is not surprising security concerns came up time and time again in our study. Yet, one data point stood out: 45% of consumers said they would be more willing to use mobile contactless payments if retailers and banks helped them understand the security benefits of using something like Apple Pay, Android Pay, or Samsung Pay. This was listed as the single biggest thing retailers and banks could do to get them to use mobile contactless payments.

As I analyzed the data from over 50 questions across all three markets and the responses of 1,761 consumers, I’m as convinced as ever that mobile payments are the future. As more banks support it, more merchants accept it, and consumers understand the security benefits, we will get to an era where paying with our smartphones is the normal and most common behavior. However, our research strongly suggests it is not consumers standing in the way of adoption. It is retailers and banks who need to make the appropriate moves to bring this safer, more secure way to pay to their customers.

I’ll be presenting the full findings of our research at a VIP event hosted by NXP in Las Vegas on October 24th. If you are coming to Money 20/20 or are a VIP in the banking and transaction industry, or media, let me know if you would like to attend.

Snap Inc: The Spectacle of Spectacles

Late Friday night, media drama ensued. Business Insider broke the story of Snapchat’s (now Snap Inc.) not-so-secret hardware product. It became clear moments later that the Wall St. Journal had been given the exclusive to announce the new product, and Snapchat’s new company name, to the world. Publishing late Friday night was most certainly not the plan and was prompted by the Business Insider leak. The Wall St. Journal published a detailed look at Snap’s new product, called Spectacles, as well as an interview with CEO Evan Spiegel on why he believes in this product and in Snap’s future as a camera company.

Many will compare this to Google Glass. I encourage you to resist the urge. While a healthy bit of skepticism is warranted, these are quite different from Google Glass and, if anything, more similar to GoPro. Spectacles, the name of Snap’s glasses, will also cost $130 instead of the more than $1000 Google Glass cost. Most importantly, in the eyes of their target demographic, Snapchat is much cooler than Google.

Reading the WSJ article, it was this bit about Spiegel’s experience using the prototype glasses that stood out to me:

He remembers testing a prototype in early 2015 while hiking with his fiancée, supermodel Miranda Kerr. “It was our first vacation, and we went to Big Sur for a day or two. We were walking through the woods, stepping over logs, looking up at the beautiful trees. And when I got the footage back and watched it, I could see my own memory, through my own eyes—it was unbelievable. It’s one thing to see images of an experience you had, but it’s another thing to have an experience of the experience. It was the closest I’d ever come to feeling like I was there again.”

Anyone who has used a GoPro understands the value of this statement. Remember, I’m the guy who wore a GoPro on his head when his kids were little to capture those memories from a first-person perspective.


Lots of kids have GoPros, especially active ones. They use them to record themselves skateboarding, bike riding, hiking, skiing/snowboarding, swimming, etc. We may as well rename this generation “The Capture Generation”. That is the demographic Snap Inc. understands. These glasses are born out of Millennial and Gen Z behavior with their devices and the urge to capture and share as many experiences as possible.

It is hard to make definitive claims about the future of any product from its first generation. You need to be very long on Snap Inc. to buy its story about these glasses. That being said, the concept is sound and having capture devices on our person — on our eyes — makes sense at least some day in the future. Even though the low barrier to entry created by the Chinese manufacturing ecosystem makes it easy for anyone to get into hardware, hardware will remain hard to do well. Just because anyone can, doesn’t mean they should. If Snap Inc. can attract the best hardware engineers out there (beyond the ones it has already plucked from GoPro), then we can take it more seriously.

On an end note, one thing I found interesting was where the cameras are located. If you notice, they sit about eye width apart.

[Photo: Spectacles, showing the two cameras placed roughly eye width apart]

Having two cameras is a key factor in capturing 3D and VR content. Perhaps this is a broader signal of where Snap intends to go.
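Snap has said nothing about depth capture, so treat this as speculation, but the geometry behind that camera spacing is basic stereo vision: with two cameras a known baseline apart, the apparent shift of an object between the two images (the disparity) tells you its distance. A rough worked example in Python, with assumed numbers:

```python
# Illustrative stereo-depth arithmetic (not anything Snap has announced):
# depth = focal_length * baseline / disparity.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        return float("inf")  # no measurable shift -> effectively at infinity
    return focal_px * baseline_m / disparity_px

# Assumed values: ~63 mm baseline (roughly eye width), 1000 px focal length.
for d in (100, 20, 5):
    print(f"disparity {d:>3} px -> depth ~{depth_from_disparity(1000, 0.063, d):.2f} m")
# Larger disparity means the object is closer; tiny disparities mean it is far away.
```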

Are Cable’s Wireless Ambitions Viable?

The news has been aflutter this week with announcements by both Comcast and Charter that they plan to enter the wireless business as Mobile Virtual Network Operators (MVNOs). For the wireless historians among us, this is cable’s third run at the wireless business over a 20-year span.

The question is, do Comcast and Charter have a chance in an already competitive and saturated wireless market? Well-respected equity research analysts at New Street Research have looked at this in depth and concluded cable could capture some 10% or more of the wireless market. I see some important hurdles and am a little less optimistic.

The idea of a Wi-Fi centric MVNO is to offer a less expensive plan by offloading a significant amount of traffic from the cellular network. Voice, texts, and data default to Wi-Fi, with cellular acting as a ‘backup’. Cable’s particular advantage is the millions of private and public Wi-Fi hotspots they have deployed over the past several years as part of the “Cable Wi-Fi” initiative. In Comcast’s case, there are some 15 million hotspots, including residential access points broadcasting a second, ‘public’ SSID. They have also deployed a core network to support residential and business fixed line telephony service. Their ‘Wi-First’ service, as I call it, would theoretically deliver attractive margins for their wireless business. Adding wireless as part of the bundle that includes broadband, pay TV, and even fixed line phones, makes for a compelling value proposition, in their view.

Now, this is not the first attempt at a Wi-Fi centric MVNO. Republic Wireless and Google’s Project Fi are the two leaders in the United States, with one million or so subscribers between them, according to our estimates. There have also been some failures, notably Scratch Wireless and Cablevision’s Wi-Fi only Freewheel service.

There has been substantial progress in the Wi-First experience over the past couple of years. Republic Wireless has developed a lot of intellectual property around the idea of “Adaptive Coverage”, which dynamically and seamlessly switches between cellular and Wi-Fi, even on the same call, always searching for the optimal signal. Google’s principal contribution with Project Fi is the ability to dynamically choose the best connection between Sprint and T-Mobile (its MVNO partners), in addition to Wi-Fi. Work done by the cellular operators on Wi-Fi Calling and VoLTE has also made Wi-Fi more viable.
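Republic and Google have not published their selection logic, so the following is only a toy sketch of the general Wi-First idea: score whichever links are available and hand over only when an alternative is meaningfully better, with some hysteresis so the phone does not bounce between networks mid-call.

```python
# Toy sketch of a Wi-Fi/cellular selection heuristic (not Republic's or Google's
# actual algorithm): score each available link and only switch when the best
# alternative beats the current link by a clear margin, to avoid flip-flopping.

def score(link: dict) -> float:
    # Higher is better: strong signal, low packet loss, low latency.
    return link["signal_dbm"] - 100 * link["loss"] - 0.2 * link["latency_ms"]

def choose_link(current: str, links: dict, hysteresis: float = 10.0) -> str:
    best = max(links, key=lambda name: score(links[name]))
    if best != current and score(links[best]) - score(links[current]) < hysteresis:
        return current  # the alternative isn't better enough to justify a handover
    return best

links = {
    "wifi":     {"signal_dbm": -55, "loss": 0.02, "latency_ms": 20},
    "cellular": {"signal_dbm": -75, "loss": 0.01, "latency_ms": 45},
}
print(choose_link("cellular", links))  # -> "wifi": much stronger, worth the switch
```

The hysteresis margin is the part cable would need to get right; attaching to any visible hotspot regardless of signal quality, as discussed below, is what happens when there is no such scoring at all.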

Cable’s prospects in wireless come down to five fundamental questions, in my view.

1. Progress on Usability

Cable has its work cut out for it if they plan on leveraging Wi-Fi hotspots into a quality wireless service. The “Cable Wi-Fi” experience, which leverages indoor and outdoor hotspots to provide subscribers with Wi-Fi coverage outside the home, has been fraught with usability challenges. My personal experience (and that of many others I’ve talked to) with Xfinity Wi-Fi is that the phone automatically attaches to any Xfinity hotspot it sees, even if the signal is weak or the AP is functioning poorly, often resulting in a service that simply doesn’t work on cellular or Wi-Fi, with the added insult of draining the battery. Heard of ‘airplane mode’? I call this ‘purgatory mode’. In fact, many customers say they’re forced to turn off Wi-Fi when outside the home so their phone doesn’t get stuck on a cable Wi-Fi access point.

I wrote about this issue two years ago and, unfortunately, I have seen little progress. Since this is a feature the cable companies provide for free to their broadband subscribers, it isn’t mission-critical. But if they’re going to start charging $30 or more a month for an add-on wireless service, significant improvements need to be made on usability.

2. iPhone Must Be Part of the Offer

Wi-First services have historically been available on a limited number of purpose-built Android devices. This has started to change as a result of the evolution of the Android OS. Most newer Android phones can now support Wi-First services without a lot of custom configuration, which has led to an expansion of device choices from companies such as Republic Wireless.

The elephant in the room, however, is the iPhone. Historically, Apple has chosen not to support Wi-First services and does not provide configuration tools equivalent to those available on Android. I believe any cable company foray into wireless must support the iPhone. It might not be necessary for a niche player such as Republic but, if you’re the cable guy, you’ve gotta support the device that commands nearly 50% of the U.S. smartphone market. The good news is Apple does not have to do anything particularly special to support, say, a Comcast wireless service. This is not a Brian Roberts-Tim Cook level discussion. However, there are lots of things Apple could do to make the service work better, from a usability and subscriber experience perspective, on the scale of what Apple and T-Mobile have done to make Wi-Fi Calling a good experience. Additionally, I’d imagine that, since cable has little retail presence, they might rely on Apple for distribution to a certain extent.

3. Is the Business Model Viable?

Do not expect a deeply discounted wireless service from your cable company. Much as subscribers might like to complain about the cost of their wireless service, per-gigabyte data prices have come down markedly over the past three years. $40-50 basically gets you unlimited voice & text, plus a substantial chunk of data. And $35-40 gets you a pretty competitive pre-paid plan. It’s hard to see cable undercutting that substantially. Yes, their goal is to have some 75% of traffic on Wi-Fi, which would provide for a less expensive offering. But even though they have an MVNO deal with Verizon, that doesn’t necessarily mean they’re getting fantastic wholesale pricing. So any substantial data usage on the cellular network will come at a premium.

Roaming is another issue. Remember that cable companies, despite their size, are still regional players. Does Comcast want to support heavy out of market use on cellular?

Another pro-cable argument is that, even if wireless is a break-even proposition or a loss leader, it’s part of a larger bundle, which contributes to long-term subscriber value, especially in an era of declining margins for pay TV. I get that. The question is whether wireless is ‘worth the trouble’ as a bundle add-on if it’s not successful as a standalone business.

4. Why Would A Subscriber Switch to Cable?

I think it is going to be tougher for the cable companies to get subscribers to switch than they think. Postpaid wireless industry churn is already in the 1% range, in a competitive four carrier market. We’ve already established that cable wireless services aren’t likely to be deeply discounted. Also under-recognized is the fact that some 50% of U.S. wireless subscribers are on some sort of family or shared usage plan, many of whom are also enmeshed in some form of equipment installment plan, with upgrade opportunities and aggressive promotional offers around iconic device launches. This all makes it tougher to get customers to switch to a company that, let’s face it, they don’t exactly love.

Content could be another form of differentiation. One could certainly see a situation where a cable subscriber can get TV content extended to mobile devices, as part of their service plan. Comcast especially, with its NBC Universal and other content assets, could put some pretty compelling packages together.

The fact is, however, that a lot of this already exists in the industry. Comcast subscribers can already get a fair chunk of content on mobile, through the excellent Xfinity app. They can even download DVR content for offline viewing. AT&T is already being aggressive in this domain, offering unlimited data plans to subscribers who have both AT&T and DirecTV, and zero-rating DirecTV content for AT&T subscribers. Cable companies will have a hard time competing with zero rating or unlimited data plans on cellular, since they don’t own their own network. So it’s hard to see a truly unique value proposition related to content as part of an MVNO.

One opportunity in cable’s court is value-added services. Their IMS core and role in fixed telephony provides the basis for some potentially compelling rich communications services involving wireless. I could see small business being a target here.

5. What About Successes in Europe?

There have been several successful Wi-First forays by cable companies in Europe. I’d argue it’s different here. First, most of Europe’s cities have a high population density, making them easier to cover with Wi-Fi. Second, because of the SIM-centric culture there, switching between providers with the same phone is more fluid. Third, wireless services are more expensive, which improves the Wi-First value proposition. And fourth, we consume a lot more data on cellular, which affects the economics.

A final wildcard is whether the cable entry into wireless is predicated upon eventually having their own network of some sort. Comcast is participating in the 600 MHz auction, so there’s that. There’s a lot of activity in the unlicensed band (LTE-U, MulteFire, etc.), which will result in more seamless cellular/Wi-Fi services over time. There’s also a chance cablecos could be involved in some sort of consolidation involving Sprint and T-Mobile, as has long been speculated.

I am more optimistic cable could play a role if it eventually operates at least a partial facilities-based network. They would have more control over pricing, handset relationships, and distribution, and their other assets would be a greater value-add. So, perhaps, a limited scale MVNO is an important first step.

The Power of “Good Enough”

Tech reviews and broad tech industry media coverage are often about the cutting edge of technology and, as a result, can be very critical of anything seen as less than stellar. But the reality is many ordinary people regularly use technology that could be much more accurately described as “good enough” rather than bleeding edge. The vast majority of us aren’t using the latest and greatest technology, not least because that often costs more than we’re willing (or able) to spend and yet we do just fine. This creates an odd disconnect between how real people use technology and how the experts talk about that same technology.

EarPods and Defaults

Every iPhone ever shipped has come with a pair of Apple-provided earbuds in the box, just as iPods did before them. These earbuds have never been at the forefront of headphone technology – they’re small, relatively cheap to manufacture, and make no claim to be anything more than they are. But Apple nevertheless made them part of its early ad campaigns for the iPod, and they became a fashion statement of sorts. In a recent survey our Tech.pinions editor Ben Bajarin conducted, over half of those surveyed said they used the headphones that came in the box.

The fact is, defaults are powerful. Many people use those defaults, especially when they’re good enough. That’s not to say there aren’t better options out there for audiophiles or those who want noise canceling or over-ear options, but it is to say that, for many people, the basic option is just fine and they’ll never look beyond it. This is obviously important in the context of the removal of the 3.5mm headphone jack on the new iPhone 7. Apple is banking on the fact the majority of people who buy one of these new phones will use the new Lightning-based EarPods just as they have always used their 3.5mm predecessors. Those who don’t will use the free adapter with their existing headphones or start or continue using wireless options.

Deciding Where Good Enough is Enough

It’s notable, however, that Apple chose not to ship Bluetooth earbuds in the box, even though its vision for the future is a wireless one. Why is this? I think there are two reasons. First, as a practical financial matter, “good enough” in a Bluetooth headset costs significantly more than in wired earbuds and Apple didn’t want to either raise the price or lower the margins on new iPhones to accommodate that increased cost.

But I think the other reason is there is a dividing line between products that can afford to be simply good enough and those that can’t. Apple wants to evangelize wireless technology and you don’t sell a vision based on “good enough” products. You make the very best to sell the story and then, over time, you supply options which are good enough to meet needs further down market. When the perception of a product affects the perception of your brand, you can’t just do “good enough” (unless that’s the brand identity you’re going for, as with Amazon’s Basics line of electronics). Hence, Apple’s very different focus with its AirPods, which are on par with Apple’s hero products in terms of the positioning, marketing and – yes – pricing. This marks a departure for the Apple brand in the headphone space, although, of course, the acquisition of Beats brought higher-end headphones into the company under a separate brand. That, in turn, signifies something about the broader significance Apple expects the AirPods to take on over time, something others have written about here and elsewhere, and which I’ll likely tackle separately soon.

The Challenge of Premium

One of the biggest challenges for consumer electronics brands is targeting the premium segment while also serving lower segments of the market. One of Apple’s strengths is it has never really strayed from its premium positioning even as it has brought several of its major product lines down in price over time. Conversely, other smartphone vendors looking to target the high end have also served the mid-market and have struggled to associate their brands with premium positioning. This becomes particularly challenging when the same brands put out “good enough” and premium products in the same product category, like smartphones.

Part of Apple’s genius has been carefully separating the categories where it provides premium products from those where it participates at a good enough level and not allowing the two to mix or converge. The fact Motorola and Samsung produce both high-end flagships and very cheap low-end smartphones doesn’t help their attempts to compete with Apple for the premium customer and Motorola has arguably largely abandoned the very high end in the last year or two. In the car market, this problem is solved with sub-brands (think Lexus versus Toyota, or Cadillac versus Chevy), but we haven’t yet seen that approach play out in the consumer technology market in the same way.

Disruption Theory and Jobs to be Done

Clayton Christensen’s Disruption Theory comes into play here too – when companies insist on providing only a premium version of certain products, they risk low-end disruption from competitors catering to the needs of those who feel over-served by the current options. However, despite repeated predictions that the premium smartphone market would eventually be disrupted in this way, it hasn’t happened. Yes, low-end Android smartphones have become increasingly capable and cheap, but that’s disrupted almost entirely other Android smartphone vendors rather than Apple. Why? I believe there’s something about products which have strong personal associations — such as smartphones, cars, clothing, and other luxury goods — which makes them stubbornly resistant to low-end disruption. Our use of these products says something about us and using cheaper imitators may not convey the message we want. The job to be done of smartphones and other similar products, then, goes beyond their obvious functions and is another reason why “good enough” isn’t good enough for at least some buyers who can afford to be more discriminating. This continues to be one of many fascinating aspects of the smartphone market which separate it from the rest of the consumer electronics industry and continue to make it such an interesting one to follow.

Microsoft and Google First Party Hardware

On October 4th, Google is holding an event, likely to launch both the Home, its Amazon Echo competitor, and its own branded smartphone. It is safe to assume at this point that Google is getting more and more serious about Google-branded hardware in a number of categories.

Google’s efforts remind me of Microsoft’s. Microsoft has been making its own hardware, like the Surface, for a number of years now and is a serious contender in the PC and tablet category.

I feel it is worth taking a step back and making the observation that we have two companies whose very essence, from a computing platform viewpoint, was providing software for anyone to run on their hardware. For Microsoft, this was Windows and for Google it was Android. So why do two mostly software companies feel it is important to become a contender in hardware and compete with their customers and partners? The answer, in my opinion, is brand.

I believe both Microsoft and Google think their brands are strong enough to bring hardware to market and take share at a point in time when many markets are consolidating around hardware companies with strong global brands.

We are at a tipping point where PCs, tablets, and smartphones are seeing the share of white box, or no-name branded, devices shrink as a portion of each segment’s sales. Consumers are less frequently buying the cheap, no-name brand and, instead, are buying the brand that stands for quality and is worth investing in. This is as sure a sign as any of a market that is maturing and, globally, we are seeing it take place, even in markets we used to consider “developing.”

I’ve written frequently that I’m convinced brands will win the day in the global consumer electronics market. We are in the midst of a brand battle to see who is left standing. Many names we recognize making consumer hardware today may not be around in five years. Names like Xiaomi, Oppo, Asus, Acer, Micromax, etc. Or, if they are still around, they may pivot to be out of the large volume consumer categories and just operate in the fringe categories where they may be better suited to compete.

To see where we may be headed, just look at any other non-tech consumer category. Look at cars, fashion, consumer packaged goods, even the restaurants the masses frequent. The mainstream consumer buys from brands they recognize. Becoming a recognizable consumer brand is very difficult and, of the many tech companies with a recognizable brand, Microsoft and Google have stronger brands than many of their partners and customers, both globally and regionally.

For that reason, I am not going to write off either Google’s or Microsoft’s first party hardware efforts. It may be a rocky road and they may not get it right at first but, as they learn by shipping, they will remain well positioned as long as they are strategic about the categories they pick. For example, it makes sense tactically that Microsoft makes PCs but not smartphones that run Windows. Similarly, it makes sense that Google does not make PCs that run Windows or Android but instead focuses on smartphones and perhaps the broader smart home. Being smart about which categories to enter, and which ones not to enter, is key for any technology brand.

We will see what Google has up their sleeve and if we need to take them seriously. But I believe they have the type of brand which can make a go at hardware in a number of categories. Microsoft, similarly, is gaining share in the PC/Tablet category and, with another fall hardware event rumored, I firmly believe they are keeping their foot on the Surface gas pedal for good reason.

I can make a strong case that both these companies have a bright hardware future in front of them with brand being a strong contributor to the upside. While it is still early in many markets like AI, AR/VR, etc., I still maintain the stronger brands will win the day.

Apple Watch Speaks The Only Language Wearable Consumers Understand: Fitness

The iPhone is such a big part of Apple’s revenue that we have seen a lot of coverage and attention paid post-launch event to iPhone 7 and 7 Plus. While Apple Watch is nowhere near iPhone revenue yet, it deserves our attention because of the role it will play in Apple’s future.

Early Adopters’ Learnings

When Apple originally introduced Apple Watch, it focused on design/style, communication, and fitness. While design made Apple Watch stand out from the competition, I think it is fair to say it captured more tech adopters than it did jewelry buyers and fashionistas.

Communication was about two main things: notifications and digital touch. Notifications ended up being a strong driver of satisfaction but not necessarily of purchase. This is because it is quite hard to articulate how notifications on the wrist can change your phone usage and the value they bring. It is a feature whose return differs from person to person and is only discovered through using Apple Watch. For some, it is about being in control. For others, it is about being in the moment. For others still, it is about never missing what is most important. Precisely because it is so personal, it is quite hard to pitch to potential buyers, especially as many see their phone playing the exact same role.

Digital touch was an attempt to broaden the way we communicate by adding a more personal touch from the only device consumers see as more personal than their phone. However, the limited number of users early adopters could interact with — coupled with the fact that, more likely than not, the people they most wanted to reach, a spouse or a child, might not have had a Watch — limited the appeal.

When all is said and done, fitness remains the strongest purchase driver for wearable buyers at the moment, especially as we expand beyond early adopters.

Doubling Down on Fitness is not a Change in Focus

Wearables are not a must have. I have been saying this since the very beginning of the market. This means consumers need to be convinced to invest in them. Fitness has been, from the beginning, what resonates with them because it is the obvious use case, compared to what could be done with smartphones.

77% of American consumers we interviewed in the spring said they bought a wearable device because of the step counting feature. Another 38% said they wanted a heart rate monitor and 36% wanted a sleep tracker.

Apple, by adding GPS and a swim-proof design to Watch Series 2, combined with an improved CPU and GPU and a brighter display, provides a solid upgrade for current Watch owners as well as a more attractive proposition for users who are either upgrading from a fitness band or wearing a smart device on their wrist for the first time, especially given the $369 starting price.

With fitness at the center of the Apple Watch lineup, having a Sport edition no longer made sense. But adding a trusted sports brand like Nike to the portfolio makes a lot of sense, particularly as the entry-level Watch now starts $20 higher than the Sport edition did. As it did with the Activity and Workout features, with Watch Nike+ Apple tries to appeal to both serious and occasional runners with dedicated workouts. Apple’s gamification effort, which started with the badges users could earn, increases with watchOS 3 as users can now create groups in which they share, compare, and challenge each other on their achievements. While personally I am not a fan (mainly because I hate public shaming), the social aspect is certainly more rewarding for some than any badge of honor Apple could ever give them. The activity rings can also now be more central to Apple Watch with some new faces that display the information more effectively for users who really want to stay focused on their daily goal.

There is Luxury and then There is Luxury

Apple Watch buyers certainly appreciated the design, the quality of materials, and the overall look and feel of the product. While they might have bought Apple Watch over another smartwatch based on looks, I am not sure many bought it thinking they were buying a piece of jewelry. As is the case in the traditional watch market Apple now measures itself against, there are different kinds of high-end watches. Apple repositioned its luxury threshold, going from the gold Edition, priced at $10,000, to the ceramic Edition, priced at $1,249. From an addressable market perspective, there is certainly a bigger segment for the ceramic edition than there was for the gold, especially as Apple is still working on establishing a more comprehensive brand status that includes more than just tech.

Hardware Only Tells Half the Story

Most of the learnings from the first Apple Watch release are best demonstrated by how the UI has morphed. As with the marketing messaging, Apple only tweaked what it had initially delivered with watchOS to improve the experience and widen the appeal.

Digital touch has now been integrated as an option to respond to messages, in the same way it has been added to Messages in iOS. It might just be me but the way Scribble and digital touch have been added to iOS links nicely to the Watch, helping to socialise this way of expressing ourselves and widening the circle of people who can now send and receive heartbeats or kisses or fireballs or even a heartbreak. It sure is something my eight year old has happily embraced on her iPad.

Swiping, now part of our muscle memory thanks to iPhone and iPad, also plays a more prominent role in watchOS 3, as is the case with the revamped launch screen. Force Touch is still there but is not highlighted – the same as in iOS.

After using Apple Watch Series 2 for over a week, it is the speed and the improved battery life I have come to appreciate. While I have been waiting to be able to swim with Apple Watch (I wish it had been available when I went on holiday), it is speed and battery life that positively impact my daily experience. The new GPU and CPU make a great deal of difference when launching apps and interacting with the Watch. Apple built it and now I hope the apps will come. This is still what I hope to see now that developers can no longer use the excuse of a sluggish OS that did not allow them to design Watch apps. Apple tried to kick things off with Breathe, an app that aims to show there is more to health than calories and steps. While I am still getting used to it, and have it set for every three hours rather than every hour, I find that between Stand and Breathe I am more conscious of how long I sit and how caught up in things I get, and both help me take a moment.

With developers more likely to be waiting for a broader addressable market, I think we will see sales pick up, thanks to the lower priced but upgraded experience of Watch Series 1 now at $269 and the broader appeal of GPS and swim mode in Series 2.

It’s hard to see any other brand top Santa’s smartwatch list for this coming Holiday Season.

Unpacked: The Need for Ruggedized Smartphones

I’ve been getting a question quite a bit lately from tech industry folks about the need for more rugged devices — smartphones that don’t break when you drop them or stop working when you dump them in the pool or the ocean. How big of a feature is a water resistant, or even waterproof, phone going forward? Motorola pitched the Droid Turbo 2 as having a shatterproof screen. How big of a selling point was that feature? How about something as simple as better battery life? How big of a deal would that be as a feature?

What is interesting to keep in mind is that features like the ones listed above become infinitely more valuable once you have dropped and cracked your screen, dropped your phone in water, or run out of battery well before the end of the day. Consumers need to have felt the pain, if you will, in order to recognize the feature as valuable. This is not to say they won’t see a waterproof or shatterproof phone as something to invest in out of caution, but that it becomes a far more compelling selling point once you have experienced the pain.

With Apple and Samsung touting water resistance (and possibly some day waterproofing) and Motorola’s shatterproof screen along with future innovations from Corning that may offer the same feature on many phones, my curiosity got the better of me. I decided to poll consumers in our iOS panel to see if any of these smartphone mishaps have happened to them.

As it turns out, consumers are more clumsy than I thought.

Cracked Screen
Overall, 61% of consumers have cracked their screen in some way: 33% didn’t crack it badly enough to necessitate a replacement and indicated they continued to use the phone until they upgraded, while 28% said they cracked their screen and paid for a replacement for that device.

Even though my poll covered just over 400 consumers, and iPhone owners only, with a margin of error of roughly +/- 4% for this group, I’d suggest the data tells us a good portion of the market has been impacted by a cracked screen in some way and would see the value/benefit of a shatterproof smartphone.
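As a sanity check on that margin of error (my own arithmetic, assuming a simple random sample and the worst-case 50/50 split — not details from the poll itself), a sample of roughly 400 works out to about ±4 points at a 90% confidence level and closer to ±5 points at the more common 95% level:

```python
# Standard survey margin-of-error arithmetic for a simple random sample,
# at the worst case p = 0.5. z = 1.645 for 90% confidence, 1.96 for 95%.
from math import sqrt

def margin_of_error(n: int, z: float, p: float = 0.5) -> float:
    return z * sqrt(p * (1 - p) / n)

n = 400
print(f"90% confidence: +/- {margin_of_error(n, 1.645) * 100:.1f} points")  # ~4.1
print(f"95% confidence: +/- {margin_of_error(n, 1.96) * 100:.1f} points")   # ~4.9
```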

Water Resistance
Not surprisingly, fewer consumers have been impacted by dropping their phone in water than by cracking their screen. Altogether, only 41% of iPhone owners indicated they had dropped their iPhone in water and fully submerged it: 28% could, and did, continue to use their phone even after the dunk, while 13% were not so lucky and needed to purchase or acquire another smartphone.

No question, this is a great feature and safety precaution but, in terms of pain points, screen damage has happened to more people than water damage. Now, waterproofing may be a different value proposition. While I’m sure Apple doesn’t recommend it, and while the iPhone isn’t touted as being “waterproof” — which I interpret to mean OK to use in water, not just to drop in water — I have been taking pictures and video of my kids underwater in our pool with the iPhone 7 Plus. I could see how a fully waterproof phone would be more attractive than a merely water resistant one and I hope Apple goes in this direction. Personally, I’d love to not need my GoPro. Perhaps that is just me.

Better Battery
In a surprise to no one, battery life is a broader consumer pain point. Most of the market has had battery life issues of some kind, kids in particular (no kid at my daughter’s Jr. High goes anywhere without a battery pack), so I snuck a very specific question into this poll on batteries. I asked how many consumers have fully run out of battery by 5pm, without a charger with them, and were thus without a phone for a period of time. I appreciate that, with this question, I’m being extremely specific with the scenario. However, 58% of iPhone owners in our poll said they have experienced this exact scenario.

Year-after-year battery life gains may be one of the most significant areas where manufacturers continue to make progress. How, you may ask? The answer is silicon. I remember, in the 2005/2006 time frame, Intel had an initiative called “eight hours in 2008”. The goal was to reach eight hours of battery life by 2008. If memory serves, this particular benchmark was not achieved until 2010, and many PCs in the market still don’t get 8 hours (but a great many do). Intel got there through Moore’s Law, as its processors became more efficient while remaining powerful.

Apple is on a similar course, designing its silicon for efficiency while keeping it extremely powerful. This, plus lower-power display innovations, is how we will achieve better than “all day” battery life in our smartphones and other gadgets. That, and when we can finally get rid of old network technologies like CDMA, GPRS, 1xRTT, EDGE, etc., and move to full LTE. LTE and 4G, and even more so 5G, are extremely power efficient when it comes to data.

We are a few years away from all the pieces being in place for battery experiences to feel like a breakthrough. But when it happens, it will be a bold new world.

Google Android – Closed Source

I think Google may make Android proprietary in 2017.

Google has launched Android N but, without the ability to distribute updates, the software is virtually useless. To make matters worse, I think Google is effectively doing research and development where its competitors benefit more than it does.

Android M (6.0) is currently on just 18.7% of Google’s Android devices, despite having been available for almost a year. That corresponds to the penetration one would expect with virtually no updates being made. In contrast, iOS 9 is available on well over 90% of all devices and is about to be quickly replaced with iOS 10. This is a massive problem because it means any innovations Google makes to Android to compete against iOS, Windows or China will take four years to fully penetrate its user base. In my opinion, this renders the innovation worse than useless as it will be fully visible to the competition who can copy it and get it to market long before Google can.

A great example of this is Now on Tap which allows context based search from anywhere on the device. I have long believed this is a stroke of genius as Google currently only has 41% coverage of the Digital Life pie but this feature allows Google to collect data as if it owned 100%. The net result will be greater understanding of its users and better targeting of its advertisements — meaning higher prices, driving revenues and better margins.

Unfortunately, this service requires low level changes to be made in the Android Open Source Project (AOSP), so the device has to have version 6.0 (Marshmallow) or later for the service to work. Currently, only 19% of Google’s ecosystem users on Android have access to this feature despite it being available for a year. This, combined with the endemic fragmentation that hobbles the user experience relative to iOS, is a major reason why I think Google services on Android devices generate 50% of the revenue they do on iOS.

The only way I think Google can fix this problem is to take complete control of Android, culminating in the migration of the Android Run Time (ART) from the Android Open Source Project (AOSP) into Google’s own proprietary Google Mobile Services (GMS). This would reduce the open source piece of Android to just a kernel, with the real functionality of the device controlled from within GMS, which is fully under Google’s control. It would fix both the software distribution problem and the endemic fragmentation but would probably result in an outcry from the open source community as well as attract more scrutiny from regulators.

Google has long been an advocate for open source software, and the backlash it would receive from developers if and when it moves in this direction would be severe. However, the recent loss in its war with Oracle has given Google the perfect excuse to close down its version of Android and blame Oracle when developers complain. In Marshmallow, Google has been forced to use Oracle’s standard libraries for the Android Run Time, meaning Google has effectively lost control of the software roadmap for the runtime. This is something Google simply cannot afford and, when it presents its proprietary version, it can point the finger at Oracle as the reason for the move. Furthermore, AOSP will remain open source, but its relevance will have been reduced to that of a kernel rather than a fully-fledged OS.

There are already signs of this beginning to happen, as Google is set to launch hardware it has “put more thought into” and become “more opinionated about”. The Nexus line is set to disappear, replaced by devices designed by and branded as Google, with the manufacturer (HTC) completely absent.

For the creators of Android forks such as Alibaba, Xiaomi, Tencent, Cyanogen, and so on, this means they will also be forced down the same road, resulting in a series of proprietary operating systems all based on a common kernel. For developers, life gets simpler when building apps for Google Android devices, but more work will be required to also support the others. I suspect this will force makers of Google Android devices to take Google’s software, as they have almost nowhere else to go until the EU decides to step in and force Google to change how it licenses the right to put Google Play on devices.

This has been a gradual process, with the scope of GMS increasing for the last three years, but at Google I/O 2017 I think the move will become much more visible. I think Google has very little choice because, at the end of the day, its fortunes will be driven by revenue growth from Android as its growth from iOS grinds to a halt. A bit of developer anger is better than a $100bn loss in value.

Samsung’s Enterprise Mobility Strategy comes into Focus

I spent the best part of Monday in New York with Samsung, learning the latest on its enterprise mobility story. It’s a story that’s come a long way over the last five years or so, but it’s also a great illustration of how Samsung has continued to set its Android smartphone strategy apart from the competing Android vendors. It’s worth looking at how this approach has evolved over time, both as an interesting facet of Samsung’s strategy in its own right and in terms of what it says about Samsung’s Android strategy overall.

Knox in 2013

Samsung began its foray into the enterprise with its Samsung Approved For Enterprise (SAFE) program in 2012, with the Samsung Galaxy S III. But it wasn’t until its Samsung Knox capability was announced in February 2013 that things really began to take shape. The impetus behind Knox was a sense at Samsung that, though it had done very well in the consumer market, it needed to break into the enterprise if it was to continue growing its smartphone shipments. At the time, iOS devices had become the de facto standard in the world of bring-your-own-device (BYOD) deployments, while Android was still viewed with suspicion by most corporate IT departments. Google itself wasn’t taking the problem very seriously at the time; in fact, in 2012, it disbanded 3LM, a Motorola subsidiary formed by former Googlers and designed to make Android more fit for enterprise deployment. That was the strongest possible signal Google could have sent that it wasn’t going to solve this problem on its OEMs’ behalf.

And so, Samsung decided it needed to take matters into its own hands, leading to the creation of Knox in 2012 and the launch of its first capabilities, focused on dual-persona containers and certain other functionality, alongside the Galaxy S4 in early 2013. This was basic functionality and certainly didn’t overcome all the concerns IT departments had about deploying Android devices, but it was a starting point for what’s come since.

Knox in 2016

Knox has been through quite a few iterations since then, six subsequent releases by my count, and is now a much more fully-fledged security solution than it was. Version 2.7 launched with the Note7 in the last month or so and brought a handful of new features with it. But the more significant evolution over the last three and a half years has been the transformation of Knox from a series of point solutions into a security platform. Knox is now baked into almost all mid-to-high-end Samsung smartphones at the hardware and OS layers, and the basic functions are available to all Samsung smartphone users as a result. Those basic functions include hardening intended to prevent hacking or rooting of the OS, along with a variety of other features.

Enterprises, however, can add additional functionality across four key domains as part of paid offerings from Samsung:

  • Knox Workspace – which is the evolution of the first Knox product, a dual-persona container, now “defense grade” and certified by a variety of government agencies in the US and around the world
  • Knox Premium – a cloud-based end-to-end solution which enhances the basic functions
  • Knox Enabled App – an app-level containerization solution, which cordons off individual enterprise apps from the rest of the data on the device, while maintaining the look, feel, and functionality of the app
  • Knox Customization – a service which allows businesses to deploy Samsung devices, often tablets, with their functions locked down and restricted for specific roles: for example, as kiosks, point-of-sale devices, or terminals for workers in various settings.

Another key element of Knox in recent versions has been an attempt to overcome some of the security risks associated with the slow rollout of new Android versions. Samsung has worked with Google and others to deliver security-specific updates on a regular basis, separate from the major Android releases, in order to patch vulnerabilities. In the most recent version, Knox also allows enterprises in some markets (though not yet the US) to determine exactly which version of the software should be deployed across their device fleets.

A broader transformation

Over the past year in particular, Samsung’s enterprise strategy in the US has undergone something of a transformation under new leadership, shifting from principally selling devices based on hardware features to selling solutions which incorporate devices, software, and services from Samsung. These solutions are intended to give businesses more of an end-to-end approach than Samsung has offered in the past. Google, which as I said earlier had largely ceded this space to others in 2012, has since stepped up and provided Android for Work as a base layer of security for Android devices in the enterprise, but Samsung has continued to innovate above and beyond what Android offers out of the box. Meanwhile, Samsung’s various Android competitors have tried their own approaches to enterprise solutions but these have either faded over time or remained far less functional than Samsung’s offerings.

Samsung as the Android default

All of this has left Samsung as the default option for Android in the enterprise. It’s the only Android vendor that appears to take deep security and other enterprise needs seriously and the only vendor which has dedicated significant resources to selling and supporting solutions in the enterprise. This mirrors its successful work over recent years to become the default Android vendor among consumers in key markets like the US, especially at the high end. This is Samsung’s strength – leveraging its scale and investment to dominate within the markets it seeks to play in – and it’s arguably paid off in a big way in the enterprise. The interesting thing about all of this – at least to date – is it’s mostly about positioning against other Android vendors rather than against Apple and iOS in the enterprise, though the latter are definitely potential future targets. Samsung’s efforts have been mostly about neutralizing the concerns and disadvantages associated with Android in the enterprise rather than necessarily about besting Apple. That may begin to change going forward, should Samsung decide to broaden its offerings further. Apple is obviously not standing still either, striking partnerships with IBM, Cisco, SAP, and others around the enterprise. But it’s increasingly clear that – from a smartphone perspective at least – these two companies will carve up the lion’s share of the enterprise market in the coming years.

The next challenge

The next big step for Samsung is to begin one of the hardest transitions of all for a tech company: going from selling technology solutions to selling business solutions. That’s a subtle shift, but it means really understanding and then transforming internal business processes, not simply offering technology products to meet specific technology needs. I’ve seen a variety of other tech companies, notably telecom operators, attempt this leap over time and it’s a tough one to make. It always requires partners who can bring capabilities and credibility beyond those the tech companies themselves bring to the table. It also requires a major shift in mindset for sales teams trained to sell products based on features rather than their business transformation potential. The big question is whether Samsung can build the partnerships needed to achieve this combined credibility and whether it can drive the internal cultural change necessary to sell and deliver these solutions. One of the hardest things of all is that Samsung sells entirely through indirect channels rather than directly to enterprises, which adds another layer of complexity. The other big challenge is that, to the extent Samsung wants to work with multinational companies, its highly regionalized structure may prove a handicap: serving global customers requires a global structure for sales and support, and that’s not how Samsung is currently organized.

An ongoing evolution

Samsung’s strategy and positioning here are by no means settled. Knox itself has been through quite a bit of transformation over recent years and its public identity is still a little muddled. Samsung has largely marketed the point solutions until now, which means different enterprises have different perceptions of what Knox really is and what it stands for. Samsung wants to begin communicating a clearer identity for Knox in particular and for its enterprise activities overall, but business marketing is notoriously difficult; there’s only so much ads in airport terminals can achieve. Still, Samsung does seem to be making some progress here and, in the process, is solidifying its lead as the Android vendor for the enterprise.

Reading Project Titan’s Tea Leaves

Immediately after reports that Apple had laid off a group of people involved with Project Titan, I received calls from media who took this to mean Project Titan is dead and Apple is no longer doing some type of car. The idea that Project Titan is dead is absurd. If they were going to kill it, why would they bring back Bob Mansfield, one of the superstars at Apple whose most recent job was managing all of their hardware, to run the project?

What I think happened is, when Mansfield got a look at Project Titan, he quickly determined what the real goal of the project was and let people go who were working in areas that were no longer relevant. I suspect that, early on, Apple hired a lot of people from many disciplines as they researched this subject and played around with what they should do if they had a product related to automotive. Knowing Apple as I do, I would not be surprised if they entertained everything from doing their own car to new ways of integrating their technology into new or current vehicles, with the goal of providing more Apple services to one of the most mobile devices we have.

My personal belief is Project Titan is not about creating an Apple Car. That just does not make sense to me, given that just about every major car manufacturer is already working on its own version of a smart or self-driving car. Instead, I believe the biggest opportunity in automobiles is to make existing cars smart or self-driving.

I believe this is the most plausible explanation of what Project Titan is all about. At its simplest level, it could involve beefing up Apple’s CarPlay in vehicles that support it now, as well as finding a way to add it to existing cars of all makes, so any user could have CarPlay in their car and be tied to Apple services. But there is another possible moonshot idea I find most interesting, and it is what I consider a holy grail of autonomous vehicles.

If you have seen one of Google’s autonomous vehicles, you will have noticed two things. The first is that Google has taken an existing car and given it intelligence, sensors, and many cameras so it can operate as an autonomous vehicle. The second, since it stands out in a big way, is the large, ugly 360-degree camera unit on the roof, clearly not created to make the vehicle design sleek or attractive.

What if Apple could create a “kit” or special self-driving package, relatively easy to install at a dealer or by a mechanic, that could integrate those sensors and cameras into an existing car? It would include a dash-attached iPad to deliver the kind of intelligence and navigation needed to operate a self-driving vehicle, as well as Apple CarPlay. Instead of an ugly camera on top, Jony Ive and team could design a very sleek roof attachment in neutral colors that houses the cameras handling the more sophisticated visual needs of a self-driving vehicle.

Think of what a gold mine that would be. Even if it is expensive to add this capability to an existing car, it would be much, much cheaper than buying a new autonomous vehicle from the major car companies. And it would allow people to take their own cars and retrofit them for what appears to be the future of personal vehicle transportation.

Of course, the other benefit is that Apple brings more people into their services ecosystem and can grow their services business well beyond what they can with just iPhones, iPads, and Macs.

I realize doing this type of “kit” is not easy but, given the fact Google has already done this to an existing car, it is in the realm of possibility Apple could do it as well and do it much better.

BTW, regardless of whether Apple is the one to deliver something like this, I believe retrofitting existing cars to make them self-driving will become one of the biggest automobile businesses of the next 5-10 years.

So if Apple did something of this nature, what type of timeline could we expect from them to deliver a product like this? My belief is they would probably do it in two stages. The first stage would be a retrofit kit to add Apple CarPlay to any vehicle. It would need to include an iPad that is dash-attachable and a better way to integrate into a vehicle’s existing entertainment system. I would not be surprised if they could get something like this to market in late 2017 or early 2018.

As for an autonomous driving retrofit kit, there is probably still a huge amount of work to do before they could deliver something like this. A good guess, and it is only a guess, is they could get something to market around 2020.

This is pure speculation on my part but I have never believed Apple was going to do a car of their own. It would be smarter to buy Tesla than to go down that path. On the other hand, it is highly plausible Apple could deliver a CarPlay retrofit kit for existing cars and a self-driving retrofit kit for existing vehicles sometime in the relatively near future.

What the iPhone 7 says about Apple’s Future Augmented Reality Plans

I believe Apple’s next big iPhone release is going to feature augmented reality technology. Obviously, nobody at Apple has said anything about such a product. But the now-shipping iPhone 7 Plus, complete with dual-camera technology, is the latest hint Apple is moving in that direction. This, along with several high-profile acquisitions (Metaio in 2015 and PrimeSense in 2013), points to the technology eventually appearing in products. One more thing: Apple CEO Tim Cook can’t stop talking about how big an opportunity augmented reality represents.

Dual-Camera Technology
Dual-Camera Technology
Apple executives spent a great deal of time during the recent iPhone 7 launch event talking about the current and future capabilities of the two 12-megapixel cameras integrated into the iPhone 7 Plus. The first camera is a 28mm-equivalent lens most would consider a wide angle (the iPhone 7 has the same camera). The second is a 56mm equivalent and, while Apple’s Phil Schiller kept calling it a telephoto lens, it’s actually more of a portrait lens. In addition to giving the phone an effective 2X optical zoom, the dual cameras enable a long list of software capabilities that should result in notably better photos for most users. That’s interesting (and very useful), but what interests me more about this hardware is the fact Apple could use the dual cameras to capture information about the objects and space in front of them. Two cameras allow the device to capture and create depth-mapping information.
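To make the depth point concrete, the underlying math is the classic stereo relationship: the farther away a point is, the less it shifts between the two lenses’ views. Below is a minimal sketch of that calculation. It is generic computer-vision math, not anything Apple has published, and the focal length, baseline, and disparity numbers are made up for illustration.

```swift
import Foundation

// Generic stereo depth estimation: a feature seen by both lenses appears shifted
// (the "disparity"); the shift shrinks as the object gets farther away.
// Pinhole stereo model: Z = f * B / d
func estimatedDepthMeters(focalLengthPixels: Double,
                          baselineMeters: Double,
                          disparityPixels: Double) -> Double {
    return focalLengthPixels * baselineMeters / disparityPixels
}

// Hypothetical numbers purely for illustration: ~1,000 px focal length,
// lenses ~1 cm apart, and a feature shifted 5 px between the two images.
let depth = estimatedDepthMeters(focalLengthPixels: 1_000,
                                 baselineMeters: 0.01,
                                 disparityPixels: 5)
// depth ≈ 2.0 meters; repeat this per pixel and you have a rough depth map
```

In practice, a phone would do this per pixel, with careful calibration and a lot of software cleverness on top, but the principle is why a second lens matters for augmented reality and not just for zoom.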

Earlier this year, I wrote about Lenovo’s Phab 2 Pro, the first Tango-enabled smartphone. Tango is Google’s handheld augmented reality platform. The Lenovo phone actually uses three cameras and a host of other sensors to capture motion, depth, and local area information about the phone’s surroundings. The result is a device that knows where it is in space, which lets you do many fascinating things in augmented reality. To vastly oversimplify, think about what Pokémon Go does on your current smartphone, but many times smarter and more powerful.

Two Key Apple Acquisitions
There are two companies Apple purchased recently that make me think they’re moving toward this handheld augmented reality future: Metaio and PrimeSense. Apple bought Metaio in May 2015. It was a German company, started in 2003, that sold software that pulled together camera images with computer-generated objects. Before the purchase, numerous companies used the technology to create applications for use in different vertical markets including retail, industrial, and automotive. After completing the purchase, Apple took the product off the market. Many assume Apple is working to create custom silicon in conjunction with this software for future products.

PrimeSense was an Israeli 3D sensor company Apple purchased for $345M in 2013. The company had a mobile-sized 3D sensor (code-named Capri) that worked with its software to scan and capture three-dimensional objects. The company was also a founding member of OpenNI (Natural Interaction), an open source framework designed to capture body motion and hand tracking. Stories at the time of the acquisition noted the Capri sensor was relatively expensive to produce, which kept it from gaining the attention of other device manufacturers. This is the type of tech Apple loves to integrate into its hardware to drive additional differentiation from the rest of the market.

Cook on Augmented Reality
Finally, there are Tim Cook’s comments. On the most recent Apple earnings call, he pointed out that, while he thinks both virtual and augmented reality are interesting, he sees a much bigger opportunity in augmented reality, especially in commercial use cases (I strongly agree). More recently, in an interview on Good Morning America, Cook once again spelled out his enthusiasm for augmented reality. He noted that, with AR, two people can share a common experience, which is hard to do in the heavily isolating world of VR.

Now, it may well be that Cook’s recent comments around AR versus VR are meant to throw people off Apple’s trail. In fact, I think it’s highly likely Apple is working on both technologies and, eventually, we may well see some eyewear that utilizes not only the technologies above but a future version of the company’s new W1 chip currently shipping in the AirPods. But my bet is, when Apple heads toward its next big hardware revision, we’ll see an iPhone (and maybe an iPad Pro) with AR capabilities.

Unpacked for Friday September 16, 2016

Samsung officially recalls the Galaxy Note7 in North America – by Jan Dawson

Samsung has finally issued an official recall of the Galaxy Note7 phone in North America, working with the US Consumer Product Safety Commission. This follows several weeks of informal recall activity, with Samsung encouraging owners to return their devices to retailers in exchange for refunds or alternative devices. The official recall formalizes the process but will hopefully also raise awareness, as only a small portion of owners have complied with the instructions to return their devices up to now.

The whole Note7 problem couldn’t have come at a worse time for Samsung. It has completely hamstrung its ability to compete with Apple’s new iPhone 7. Rather than being out for several weeks ahead of Apple’s devices and benefiting from the strong reviews, Samsung is now unable to sell any Note7 phones through the pre-order period for the iPhone 7 and much of the first week of retail sales. That’s going to put a big dent in Note7 sales and, given Samsung will have to focus on replacing devices already sold when its inventory starts to ramp up again next week, it’s quite possible the phones won’t be available to first-time buyers for some time to come.

To its credit, Samsung acted quickly once it was clear there were significant and widespread problems with the devices, but its recall hasn’t been successful in getting people to return them in large numbers. At the same time, Samsung’s messaging around the recall has been inconsistent and even misleading at times: it first promised new devices would be available within days but has more recently pushed the date back to September 21st.

The bigger issue, as Tim Bajarin wrote earlier this week, is the lasting damage to the brand. Samsung’s reputation for customer service has taken some knocks previously across its various product lines and, without a direct face to customers in the US equivalent to Apple’s retail stores, it struggles to communicate with customers and give them somewhere to go with their concerns. Instead, those concerns have to be managed by wireless carriers and consumer electronics retailers, most of whom have no incentive to sell customers another Note7. Many customers will end up buying other devices, including iPhones, which is obviously bad news for Samsung.

There’s no doubt Samsung will recover eventually, though there will be some after effects. But when it comes to Note 7 sales, it’s now a certainty Samsung will sell far fewer than it would have hoped to and the costs of the recall and the lost sales will be significant. That’s particularly sad given Samsung has largely righted its smartphone ship recently and begun making some progress in terms of sales and profits. These problems will now set those efforts back for at least a quarter and probably more.

A Tough Week for Wearables – by Carolina Milanesi

Unless you are Apple, Fitbit, or maybe Samsung, this seems to have been a tough week for wearables. Two unrelated news items point to an industry struggling to make this new set of devices really compelling to consumers.

Will We See a Microsoft Band 3?

On Wednesday, ZDNet reported the team working on getting Microsoft Band to run Windows 10 is no more and there are no plans to bring a new Band to market this year. Microsoft then issued a statement saying it will still support its Health platform and will continue to sell Microsoft Band 2. If that were not enough reassurance, on Thursday Microsoft changed the name of the health app on iOS, Android, and Windows Phone to Microsoft Band.

Not seeing a Band 3 by the end of the year would not necessarily be a bad thing. Microsoft is clearly not interested in the wearables market as a way to drive revenue from hardware. Two things matter to Microsoft: showing off its cloud platform and engaging consumers with a much more personal device than a Surface. While the first is a clear asset, and one appreciated by current Microsoft Band owners, taking advantage of it from a big data perspective requires a much larger user base than Microsoft currently has. However, the design of the Band did not appeal to many users and the sophistication of the data it provides is more than most consumers looking at wearables today are interested in.

In other words, Microsoft Band was too early for the market. Consumers are certainly interested in health and fitness, as we have discussed on a few occasions over the past weeks, but their requirements are actually pretty basic. A recent survey we ran in the US points to the vast majority of consumers wanting information on their caloric intake and activity level and not much else beyond that.

Microsoft could go back to the drawing board from a design perspective, but educating the market requires a lot of effort, effort it might prefer to leave to its competitors before coming in and saying, “We can do that and more.”

I very much doubt Microsoft will entirely abandon wearables, as they offer a good platform for Cortana, especially since Microsoft cannot rely on its own mobile phones and the data points to low uptake of Cortana on non-Microsoft smartphones.

No New Devices Coming from Google’s Key Android Wear Partners

LG, Huawei, and Lenovo all confirmed to CNET this week they are not planning to launch any new wearables before 2017. I guess they do not expect consumers to flock to the stores over the Holidays to buy smartwatches. As I discussed in my IFA article last week, it was quite telling that a show that has been the stage for important wearables launches in the past saw only Samsung unveil a new product this year, and that product does not even run Android Wear.

Sales outside of Apple and Samsung have been pretty limited, as have Android Wear developments. It seems to me Google has bigger fish to fry with Home and Daydream, things consumers are actually excited about, than trying to convince consumers they really need to buy a smartwatch. I would also argue that, considering how tepid the response from developers has been on watchOS, things do not bode well for the Android Wear camp. Historically, developers have chosen to go iOS first and, with the new enhancements coming with Apple Watch Series 2, I would expect them to speed up their development there. So, even if Android Wear improves, users might be faced with limited apps, which would curb the value of a smartwatch over a fitness band that is, on average, $100 or more cheaper.

Given how slow the uptake has been, unless you have resources like Samsung’s, you need to pick what you focus on, and vendors are wise not to rush out something for the Holidays that is not much different from what they had last year. Against Apple’s and Samsung’s marketing, their efforts would more than likely be wasted.

The bigger question is whether or not waiting for 2017 will make the market any more receptive. Most vendors believe having cellular connectivity will make a big difference in uptake but I remain highly skeptical this is what consumers are really waiting for.

Evolving Apps vs Replacing Apps

Monday presented us with an interesting juxtaposition: Facebook Messenger head David Marcus spoke at TechCrunch Disrupt and said their initial vision for bots was overhyped, and Apple opened up the iMessage App Store in preparation for the release of iOS 10 on Tuesday. The response to these two message-centric initiatives is illustrative of the challenges inherent in attempting to replace the existing app model versus merely evolving it.

Facebook’s bot vision gets a reboot

Facebook announced bots for Messenger at F8 earlier this year and tapped into what was then a lot of hype about bots as a possible replacement for apps, especially when taken together with Microsoft’s “Conversation as a Platform” announcements. But at TechCrunch Disrupt, David Marcus said the following (I’ve transcribed these remarks from the event video and edited them slightly in the process):

When you want to build an ecosystem, and bring a lot of developers to the platform and bring users to the platform, and at the same time reinvent the user experience, it takes a lot of time… We have over 34,000 developers on the platform, and they’re building either capabilities for third parties or actual experiences. The problem was that it got overhyped very quickly, and the experiences weren’t able to replace traditional apps. What we’ve done in the last couple of months is that we’ve invested in additional capabilities and provided a lot of guidance to developers on how to create successful experiences.

The fundamental problem Facebook faced in launching its Messenger bot strategy was that it pitched bots as a replacement for apps before the platform was ready (something I wrote about at the time). The interfaces didn’t allow for complex interactions and, therefore, couldn’t substitute for many of the things people use apps for. Part of the problem is the oversimplified way bots are often described in the West relative to how messaging has absorbed app functionality in Asia: the major Asian messaging apps incorporate web views and many other app interface elements in a way Messenger’s bot interfaces simply didn’t. Leaving aside the cultural and historical differences between these markets, Facebook wasn’t even using the same elements to achieve the same results. It’s clear from Marcus’s remarks this week that Facebook understands this, and the new version of the bot platform incorporates web views and other elements necessary to make these interfaces more functional. But what this ends up doing isn’t so much replacing the app model as recreating it within the context of messaging. That has worked for many interactions in Asia and maybe it still will here, but it’s a far cry from the original vision of conversational user interfaces alone replacing apps.

That didn’t prevent developers from building on the platform. In the quote above, Marcus cites 34,000 developers who have done so, although not all have actually created bots. But the success, as Marcus also noted later in his remarks, has been limited to certain verticals, notably news.

iMessage apps get going with a bang

By contrast, Apple announced its updated iMessage platform at WWDC in June and this week we saw the launch of the iMessage App Store and iOS 10, making these features available to users. I have my iPhone set to update my apps automatically and generally ensure that updates happen every day. On Tuesday, I had over 40 apps with updates, the vast majority of which were iOS 10-related, and many of those in turn added functionality to iMessage. Games now have sticker packs for iMessage, payment apps made themselves available within both iMessage and Siri, and apps from IMDb to Evernote provided extensions for iMessage to allow their content to be more easily shared. In the time since then, I’ve seen even more apps go down the same route and a quick glance at the App Store suggests many other big names are jumping on the iMessage bandwagon.
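For a sense of how little Apple is asking of developers here, this is roughly what an iMessage app extension looks like: a minimal sketch using the iOS 10 Messages framework, where the content, URL, and helper name are hypothetical rather than taken from any real app.

```swift
import UIKit
import Messages

// The principal view controller Messages hosts inside the iMessage app drawer.
class MessagesViewController: MSMessagesAppViewController {

    // A hypothetical helper an app might call to share a piece of its content
    // as an interactive message bubble instead of a bare link.
    func share(title: String, image: UIImage?) {
        guard let conversation = activeConversation else { return }

        // Standard bubble layout: image on top, caption underneath.
        let layout = MSMessageTemplateLayout()
        layout.caption = title
        layout.image = image

        // Reuse the session of a tapped message so replies update the same bubble.
        let message = MSMessage(session: conversation.selectedMessage?.session ?? MSSession())
        message.layout = layout
        message.url = URL(string: "https://example.com/item/42") // payload the full app can open

        conversation.insert(message) { error in
            if let error = error {
                print("Could not insert message: \(error)")
            }
        }
    }
}
```

The point is that this sits alongside the developer’s existing app and reuses its content, accounts, and user base; nothing about the app itself has to be rebuilt.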

What’s critical here is Apple isn’t trying to replace apps with its iMessage platform, but rather extending those apps into new parts of the operating system, beyond their app icons. And because it isn’t promising to substitute for apps, it has many more developers on board: it’s not asking them to build for an entirely new platform, merely to leverage the work they’ve already done and the user bases they’ve already built. I’ve written previously about how much Apple has asked of app developers over recent years, with a combination of new hardware and big new features for its major operating systems but, despite all that, these developers seem to see value in building new functionality for their apps to be available in Siri, iMessage, and Maps.

Apple’s vision is much more in keeping with how users want to engage with messaging in Western markets, while also embracing some elements of how Asian messaging apps are used with features like stickers. As a result, I’d argue it’s seen a lot more success than Facebook with its bot-driven vision. I’ve no doubt the enhancements to Facebook’s bot platform will drive more interest from developers but, at this point, developers can choose whether to extend existing apps into iMessage, and thus tap into the most lucrative base of smartphone customers in the world, or to start from scratch with a bot strategy within Facebook Messenger, with little evidence that bot strategy is working.