Apple’s Emerging Market Challenge

Last week, Apple reported that iPhone sales were down last quarter versus a year ago and, from the numbers, it was clear Apple’s iPhone growth has peaked. Apple’s guidance for the next quarter shows iPhone sales will also be down over the same quarter last year. As a result, their stock took a significant hit, although most financial analysts saw these numbers as the new normal and almost all put out a buy recommendation, suggesting the worst is behind us.

The one bright spot was the news that their services business was stellar and, in fact, is now bringing in more revenue than the Mac business does. Finally, the financial analysts see Apple’s services business as a big deal and are factoring it into Apple’s long-term growth.

However, the big question seems to be, could Apple ever really grow the iPhone market again? It is clear now that the introduction of the iPhone 6 was an anomaly: the larger screen met with huge demand from a large pent-up audience that wanted a bigger screen. In that sense, this giant bump in sales was a one-off event and, as we now see, it would have been almost impossible for Apple to repeat that type of sales bump with the iPhone 6s series. It appears the 4 inch iPhone SE will meet the needs of quite a few buyers who felt the 4.7 and 5.5 inch screens of the iPhone 6 series were just too big, but even this demand is not enough to get Apple back to the huge growth numbers of the past.

There are two technology boosts I see coming, dual cameras and an entry into VR with a VR-dedicated iPhone, that could revitalize iPhone demand. A VR iPhone could actually be the one that forces a huge upgrade at some point. I will address these opportunities in a separate piece in the near future.

However, there is one specific area Apple could target that could cause a bump larger than the one they got with the iPhone 6 series: going after the market in India. Today, Apple has about 1% of that market. At the same time, only about 25% of the Indian population (1,276,267,000 people) has smartphones. That leaves a market of just under 1 billion people who could buy a smartphone. Today, most smartphones in India sell for under $200, with some selling for around $100 or lower. Only the wealthier folks in India can even afford an iPhone.
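The back-of-the-envelope arithmetic behind that addressable-market estimate, using the population and penetration figures quoted above, can be sketched as:

```python
# Rough sizing of India's remaining smartphone opportunity,
# using the figures cited in the text (illustrative only).

population = 1_276_267_000   # India's population as cited
penetration = 0.25           # ~25% already own a smartphone

current_smartphone_owners = population * penetration
untapped_market = population - current_smartphone_owners

print(f"Current smartphone owners: {current_smartphone_owners / 1e6:.0f}M")
print(f"Potential first-time buyers: {untapped_market / 1e6:.0f}M")
```

Roughly 319 million current owners, leaving a potential market of just under a billion first-time buyers, which is the number the argument above turns on.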

We have been doing a lot of research in India these days and see that, while Apple is a known brand in India, there is not the same interest or demand outside of the upper crust audience. This is in huge contrast with China. Apple is considered a luxury brand in China and even those just entering what we call the middle-class lust after an iPhone. In fact, owning an iPhone changes a person’s status within their peer group.

But in India, it is price that matters. Indians are much less focused on status and brands than on getting the most bang for the buck. Tim Cook said in the earnings call that Apple has its eyes on India and “India is today what China was 10 years ago.” That is true in two key areas. First, India is rapidly changing and tech has had a big influence on these changes. Access to the internet is getting better and is being used for new levels of communication and commerce. This has influenced the second key area: it is helping people get better jobs and a better education and raise their earning potential. This is happening on a small scale today, which is why India is similar to China 10 years ago.

However, economists who follow India suggest the impact of a potential rising middle class in India could be huge for its people and its economy. But they also peg this rise to people who have access to the internet and can use it to get a better education, manage their farms and markets, get better jobs, and potentially make more money.

Although Apple could wait another four or five years to jump into India big time and hope a rising middle class could afford an iPhone, I would think they would want to be a part of helping this market grow and make a major contribution to the overall economic development. But to do so, Apple would have to create an iPhone just for the Indian market at a price that would be acceptable to this very cost-conscious audience.

Given Apple’s current iPhone premium pricing, this presents quite a conundrum for them. That pricing will continue to only attract the rich in India. On the other hand, if they were able to create an iPhone with more flexible pricing, they might be able to get the mid to upper end of this emerging middle class and, over time as more join this range of consumers, they could grow that market substantially.

From our research in India, pricing needs to be in the $239-$279 USD range. This is the price point the mid to upper middle class seem to be able to tolerate. Of course, the problem is that there are a lot of smartphones in the $149-$229 range that are actually very full-featured and even this group buys the “more bang for the buck” smartphones when it makes sense.

This is the challenge Apple faces if they really want to grow their iPhone market substantially in India. Whether they do it now or later, this is the pricing challenge they will face. They could hold off and let others take a stronger aim at this group of users and hope to jump in later. They could also start a major branding campaign in India and make this market more receptive to an Apple product within a year or two and hope to make it into some type of status device.

It is not clear how Apple will attack this market but, from the research we are getting from India, it seems getting a bigger start now would really help them gain momentum as this market develops through better wireless networks and an audience of users who are starting to move up in society and will want a top notch, high-powered smartphone.

Earnings Season: Headwinds and Drivers

We’re now most of the way through earnings season, at least as far as the biggest tech companies are concerned. It’s been an interesting one – some of the largest companies have posted disappointing results and seen their stock hammered for it while a smaller number have blown through expectations and seen their stock rise. As I’ve analyzed these companies, it’s become clear that a large part of what’s happened, not just this quarter but over the longer term, comes down to the headwinds each company faces, the drivers of growth it’s managed to tap into, and the balance between these two things. What’s interesting is, even though this is a fairly diverse set of companies, many of the headwinds and drivers are common across multiple companies. This analysis dives into these and suggests implications for the businesses and their future strategies.

Headwinds

The major headwinds in evidence this quarter include:

  • Currency fluctuations, especially the devaluation of other currencies relative to the US dollar
  • Maturity and/or decline in major established consumer hardware categories – PCs, tablets, and smartphones
  • The US shift from subsidized devices to smartphone installment plans, which is slowing sales in the US and bifurcating the market between cheap and premium
  • The growth of Chinese smartphone brands
  • The increasing strength of local competitors
  • The decline in consumer willingness to pay for software
  • The shift of ad spend from desktop to mobile and the associated lower prices, along with a more fragmented opportunity
  • The move away from one-off purchases for content

These headwinds have affected different companies in different ways. The table below presents a simplified version of how they map onto a set of major tech companies. The simple checkmarks mask different degrees to which these companies were affected by these factors, but we’re going to stay at this high level for the purpose of this post:

[Table: Headwinds by company]

As you can see, some of the companies that suffered the most this quarter are those which are suffering from the greatest headwinds, including Apple and Microsoft. On the other hand, there are companies, such as Twitter, which aren’t necessarily suffering from many of these headwinds, but are suffering more from internal problems, such as the lack of user growth and the inability to drive sufficient scale to attract advertisers.

Drivers

On the other hand, there are also several common drivers that affect these companies. These include:

  • The explosion of enterprise cloud services and, to a lesser extent, consumer cloud services
  • The growth of subscription content services
  • The ongoing expansion of the mobile market
  • The shift from search advertising on desktop to native/in-stream advertising on mobile
  • New consumer hardware categories including wearables, home automation, and VR/AR

And again, these helped various companies to differing degrees this quarter as well:

[Table: Drivers by company]

Here, too, some of the companies that did the best this quarter have some of the biggest drivers working in their favor, notably Facebook. However, you can also see the two companies that struggled a little this quarter – Alphabet and Apple – also have several drivers working in their favor.

Balance

Ultimately, long-term results aren’t about the number of either headwinds or drivers working for or against each company, but the balance between these forces acting on the different parts of their business.

Microsoft has struggled over the last several years because significant parts of their business are suffering from several major headwinds, notably the broad shift to mobile, the maturity and decline of PCs, and consumers’ increasing unwillingness to pay for software, while only benefiting from one major driver: cloud services growth. Facebook is doing well because it has latched onto several major drivers, including the explosion of the mobile addressable market and the growth of native mobile advertising, and is exposed to few of the headwinds that affect companies with large legacy businesses. Apple is particularly interesting here because it is exposed to several of the major headwinds in a big way, most importantly the increasing maturity of the three major device markets in which it competes – PCs, tablets, and smartphones – but has managed to grow anyway by taking share in at least two of these markets. This quarter was markedly different because it was the first time Apple failed to grow any of these three businesses. A big question for Apple is whether it can continue to grow any of these businesses over the long term as the markets they’re part of begin to stagnate and decline.

For those businesses that are struggling, the key is to find ways to latch onto the major drivers of growth where they’re not yet benefiting, in ways that complement and fit with their existing businesses. Companies can’t instantly create decent-sized mobile native advertising businesses, but there may be ways to launch or acquire capabilities in, say, wearables, or subscription content services that can help to drive growth. A few of the headwinds and drivers are directly tied to each other in that they’re two sides of the same coin, and some companies may consequently benefit from increasing their exposure to the upside of some of the drivers as a hedge against the downside risk associated with some of the headwinds. Individual earnings seasons come and go but, in the long term, companies’ ability to thrive and please their shareholders will depend on their success in achieving a proper balance between drivers of growth and the headwinds they’ll inevitably face.

Apple’s Uncharted Territory

The more I have reflected on what is happening right now in the consumer technology industry, in particular Apple’s position, the more a key observation hits me. Apple is in uncharted territory. I don’t mean because sales of iPhones are slowing or because their stock has a more skeptical sentiment around it than in years past. What I mean is Apple has never had a product like the iPhone that has reached such enormous scale, thus increasing their customer base to levels few thought Apple would ever reach.

Apple certainly had a mass market hit before the iPhone in the iPod. However, the iPod’s best year of sales was 2008, with 54.83 million units sold. The iPod helped Apple’s core customer base reach somewhere between 125 and 175 million users. The iPhone has taken the company to an entirely different level. Last year, Apple sold 231 million iPhones. The iPhone is the greatest hit Apple has ever had and their customer base has grown from under 200 million users to now over 600 million. Apple has never had this many customers before. More importantly, they have never had this diverse a customer set. That is why they are in uncharted waters.

I’ve pointed out how the iPhone has spanned the entire customer spectrum. It is owned all the way from early adopters to technology laggards. A rare few products can make such a claim. However, for Apple, owning such large chunks of consumers across the spectrum is bringing new challenges in the form of customer behavior patterns they have not seen before. Apple learns new things about their customers on a regular basis and has to plan and act accordingly. Nowhere is this better observed than with the refresh rates at which customers buy new iPhones.

During their main growth period, Apple had been operating under a cycle of rather consistent upgrade patterns for iPhones. Depending on the specific country, the upgrade rate varied between 19 and 24 months. However, as Apple began to attract new customers, particularly those on the later end of the technology adoption spectrum, something interesting happened. Note this from Kantar Worldpanel on the US smartphone market on Apple’s launch of the iPhone SE:

The move should, first and foremost, appease Apple’s user base, 58% of which still owns an iPhone 5s or older. The average lifecycle on these iPhones is 27.5 months, longer than the overall smartphone market at 20.9 months, suggesting that up until now these iPhone owners have been hesitant to upgrade. This is either because they prefer a more compact iPhone, or because they are not interested in investing in the new models.

The key stat here is that iPhone 5s owners tend to hold onto their devices longer than the average. This is a new customer insight and, in this case, it is unique to a specific model of iPhone. There are significant portions of Apple’s installed base who do not behave like most other iPhone owners and this could prove quite challenging, particularly as Apple looks to drive replacement rates within its installed base.

Prior to this last quarter, Tim Cook always gave us a statistic letting us know how much of the iPhone base had upgraded to an iPhone 6 or higher. For a few quarters this number grew steadily but now it is slowing. Apple has succeeded in getting most of their early adopters and early majority customers to upgrade to new devices but their laggard base is clinging to their older devices and seeing no need to upgrade.

I was at the Jazz Fest last Friday in New Orleans and did not miss an opportunity to make small talk with folks about their devices. At an event like Jazz Fest, we see a large representation of the mass market. I was surprised to see how many iPhone 5s handsets are still in active use and even more surprised to interview so many consumers who see zero reason to get a new iPhone. Looking at some fresh research from our most recent US smartphone study, 26% of current owners of a 5s or later have no plans to upgrade their smartphone in the next 12 months. While not entirely defined by the iPhone models they own, Apple has a sizeable user base that is likely to exhibit upgrade patterns the company has not encountered before.

The plus is these will remain loyal Apple customers and continue to spend money in Apple’s ecosystem. The negative is they will be unusually slow to buy new iPhones which, in turn, will have an impact on annual iPhone sales. In some ways, I’m not sure even Apple understands yet just how stubborn regular consumers are when it comes to replacing their stuff. And Apple has a lot of these regular consumers as users, who may well behave in ways Apple can’t anticipate.

In this post from 2014, I made the case that we should understand tech history is being made, not repeated. This is as true today as it was when I wrote it. For 25 years, we had a technology industry but only for the past 10 years have we had a truly global-scale consumer technology industry. This point continually goes underappreciated and misunderstood but it is central to acknowledge as we learn new things about consumers in this still-young consumer technology market. These dynamics are as new to Apple as they are to many companies competing for consumer dollars. At the end of the day, this diversity of global consumer behavior is what makes studying consumers fascinating and intellectually stimulating. But I sympathize with the companies that, I’m certain, these customers will drive crazy.

The Challenges of Retail for Startup Hardware Companies

In a recent column, I covered some of the challenges entrepreneurs have in getting their products into retail. This column explores the issue in more detail. It’s based on an interview I had with James Berberian, an experienced sales executive who has sold consumer electronics products for his entire career, including nineteen years with Targus, where he grew their worldwide sales to over half a billion dollars in 2011, the year that he left as VP sales.

P: What are some of the misunderstandings an entrepreneur in a new product company has about retail?

J: Companies introducing their first new product are usually unprepared and don’t realize how difficult selling a product can be. Even after they make their first sale to a retail account, they’re about to discover the job of selling has just begun. Retailers expect the company to generate demand for their products. And getting the product to sell at the retailer is the only measure of success.

Most companies underestimate the cost it takes to sell a product. They’re excited the store’s buyer likes their product and has put it on the shelf but that means little. They need to be prepared to spend anywhere from many tens of thousands to a few hundred thousand dollars.

P: What is the state of the retail environment with regard to selling consumer electronics and what has changed?

J: Consumer electronics at retail has consolidated tremendously. Circuit City, CompUSA, Good Guys, Lechmere Sales, The Wiz, and many more big box stores (large chains) have gone out of business. While some of this has been caused by online sales, particularly Amazon, it’s also the result of a new, younger generation of more tech-savvy consumers who don’t need a salesperson to explain the product.

Online resources such as YouTube, product reviews, and gadget blogs help with the purchasing decision and explain how to set up and use the products. Customers are much more comfortable buying without seeing the product first. Online has become more convenient and has eliminated most of the risks of buying sight unseen. Amazon refunds your money the day a return is shipped. And many buyers know more than the salesperson and prefer not to interact with others for assistance.

P: Who are the five most important retailers today?

J: Best Buy, Wal-Mart, Target, Costco, and Amazon

P: What are some of the costs a product company faces in going retail?

J: Packaging costs, point of purchase displays, and distribution costs. Retailers often want the package to be designed just for their store. A product selling for $100 or more often requires a display. They’re expensive to build, set up, and maintain. Distribution costs are higher.

If a company wants to get their product into retailers and has only one or two items, they’ll be directed to a distributor, who already has a relationship with the retailer. The distributor’s role is to inventory, take orders, ship, process returns, and invoice the retailer. For this, the distributor charges 5% to 12% of the selling price. Lastly, a company will need a rep firm or sales staff. Rep firms, consisting of an independent sales force, are usually the most effective at gaining sales without the overhead costs of a large sales staff. They will take a 3% to 6% commission.

P: Should a product company try to sell to as many retailers as possible?

J: No, to be successful at retail and protect your selling price, don’t have a goal to sell to everyone. Focus on selling to several strategic retailers. If you sell to everyone, your price point is degraded, your major retail partners will want to differentiate, and will often replace you with one of your competitors. Retailers don’t want to sell the same product as their competitors. While it’s counterintuitive, you can end up selling less when you have too many retailers.

P: What sort of profit margins are needed by the retailers? In other words, what does my product need to cost me to sell for $10 at retail?

J: Accessories require a 50 to 65-point margin (retailers pay $3.50 to $5 for a $10 retail product), hardware requires 35-45 points, and a powerhouse brand such as Sony or Apple requires only an 8-15 point margin. When you add distribution and rep commissions, a hardware product that sells for $10 needs to be made for about $2.50 to $3.
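Working backward from those figures, a rough sketch of what’s left of a $10 retail price after the retailer, distributor, and rep each take their cut (midpoint rates from the interview; the exact order in which the fees stack is an illustrative assumption):

```python
# Working backward from a $10 retail price to what's left for the
# product company, using midpoints of the ranges quoted above.
# The stacking order of the fees is an illustrative assumption.

retail_price = 10.00
retailer_margin = 0.40    # hardware: 35-45 points, midpoint 40
distributor_fee = 0.10    # distributor charges 5-12% of selling price
rep_commission = 0.05     # rep firms take 3-6%

wholesale = retail_price * (1 - retailer_margin)   # what the retailer pays
after_distribution = wholesale * (1 - distributor_fee)
after_rep = after_distribution * (1 - rep_commission)

print(f"Retailer pays:  ${wholesale:.2f}")
print(f"After all fees: ${after_rep:.2f} left to cover cost and profit")
```

That leaves roughly $5.13 of the $10 price, which is why the product has to be made for around $2.50 to $3 if the company wants any profit at all.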

P: What are some of the biggest surprises in working with retailers?

J: A retailer may ask product companies to take back a competitor’s inventory as a condition for selling their product. They may force a company to take back the first order if the product doesn’t sell.

If a product has no intellectual property or patents and it’s successful, the retailer may copy the product and sell it under their own store brand. Designing and building products used to be a specialized skill, requiring a relationship between a brand and factory in China. Now, virtually anyone can get products built. Retailers are very aware of what products cost and, of course, they know what sells, as they are closest to the consumer. (Start-ups worry about the Chinese, but they should worry more about the retailers.)

P: Is selling a product in the Apple Store a big benefit?

J: Yes. It adds credibility. Buyers from the large retailers notice and it can help you gain distribution. But there is a downside. If a product is sold in an Apple store and then is removed within 60 days, buyers assume it has failed.

P: With all these challenges should a company ignore retail and focus on just online sales?

J: In terms of margins alone, yes: margins are a lot better selling direct online than through retailers. Retail adds many extra costs and lots of frustrations. But most consumer product companies cannot afford to ignore retail. Retail today is still the 800-pound gorilla and moves a huge amount of product, much more than online does for most categories. There are many customers who still want to visit a store and touch the product, as well as a surprisingly large number who don’t have a credit card and cannot buy online.

P: Finally, what advice do you have to an entrepreneur who has had a successful Kickstarter campaign, has developed the product, and now is faced with selling the product to consumers? What are his priorities?

J: The very next thing I would do is find a way to reliably source the product and hire independent rep firms to cover the country. A company can have 20-25 sales reps on location selling its product at no out-of-pocket cost until sales are made. Instead of spending on costly ads that rarely impact sales significantly, spend on a retailer’s program to help them succeed in selling through the product.

Unpacked: Opinions on Facebook

Facebook is one of the few bright spots, earnings-wise, in the industry this year. It is becoming ever more clear Facebook will be the largest beneficiary of the global smartphone roll-out and that mobile advertising, like desktop advertising, is turning out to be a “winner take most” environment. That winner is Facebook.

As dominant as Facebook is, we still wanted to question our panel and see what kind of opinions and sentiments existed about the service. This is what we found:

– 38% get annoyed at the volume of ads and sponsored posts
– 38% said they use Facebook less today than when they first got on the service
– 31% said they were concerned about their privacy as it relates to Facebook
– 30% agreed the Facebook service is getting worse
– 22% consider Facebook to be invaluable to keep up with friends and family
– 21% said they feel Facebook is a waste of time
– 19% said they find themselves using Facebook more today than when they first joined
– 13% said they can’t imagine their lives without Facebook
– 6% have closed their account

While we remain optimistic about Facebook’s growth, the dynamic I find most interesting is the pattern with their most mature users, those who have been on the service the longest. I’ve seen private longitudinal studies of Facebook users that suggest the longer people are on the service, the less they use it. For years, Facebook’s steady increase in average time spent per user has been driven by the massive amount of time new users spend on the service. Facebook is adding new users at such a rate that the decline in usage time by their more mature users makes no material difference in this statistic. But the question in my mind is, does this behavior of mature users signal what could someday be an issue for Facebook once they have acquired as many users as they are going to get? While possible, this is where Facebook’s family of apps comes in. All Facebook cares about is that a Facebook property takes up big chunks of your day in time spent. Facebook doesn’t care if you spend 3 minutes on Facebook, 5 minutes on Instagram, or 30 minutes on Oculus, just so long as you’re spending time on something Facebook owns. They are putting themselves in place to be the go-to and default advertising platform for all mediums.

The implication here is Facebook must continue to either build new apps or new core experiences within their apps, and/or buy other apps where people are spending large amounts of time. This is why it makes sense for Twitter and Snapchat to join the Facebook family. Each service would add incremental time to Facebook’s metrics and be another outlet for Facebook to place advertising inventory. This market is winner take most and, the sooner Twitter and Snapchat realize that, the better. They belong in Facebook’s conglomerate.

Facebook’s Opportunity

Facebook reported earnings for the first quarter of 2016 on Wednesday and they bucked what’s been the trend for most of the other big tech companies so far this earnings season. The company reported phenomenal year on year growth in users, revenues, and profits, and seems to be facing few of the headwinds its competitors and counterparts are facing. Why is Facebook doing so well and where does it go from here?

Note: the charts in this post come from the quarterly deck I do on Facebook (as well as lots of other tech companies) as part of my Quarterly Decks service, which you can read more about here.

User growth multiplied by ARPU growth

Facebook reported yet another quarter of strong user growth, in marked contrast to Twitter, which has been eking out only very modest growth recently. In fact, over the past year, Facebook’s monthly active users grew by roughly two-thirds the size of Twitter’s entire base. This wasn’t a one-off – Facebook has grown by over 150 million users year on year for the past four years at least and growth has actually accelerated recently:

[Chart: Facebook year-on-year MAU growth]

Predictably, the strongest growth has been in the least mature markets – Asia and Facebook’s “Rest of World” geographic segments lead the charge, with over 75 million new users each over the past year, while North America and Europe added fewer users (but still grew decently). That reemphasizes the importance of Facebook’s efforts to grow usage in those emerging markets and, hence, projects like Free Basics (recently shut down in India) and its other connectivity projects. So far, though, it seems to be doing just fine in these countries.

Average revenue per user is also growing strongly across the board, led by the US and Canada, where annual ARPU is approaching $50. Other regions have far lower ARPU – the rest of the world combined has an annual ARPU of just $7:

[Chart: Facebook annual ARPU by region]

That overall ARPU growth, multiplied by the user growth, is driving phenomenal overall revenue growth. And, because that growth requires a much more modest increase in costs, it’s also driving margin expansion. Revenue grew by 52% year on year for the second quarter in a row and operating margin was up 11 points year on year. As a reminder, that revenue is almost all coming from ads at this point – the FarmVille era is well and truly over and payments are a tiny fraction of Facebook’s total revenues today.
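Because revenue is users times ARPU, revenue growth compounds the two underlying growth rates. A quick sketch with hypothetical rates (Facebook does not report this decomposition; the numbers below are chosen only to illustrate how two moderate growth rates compound to something like the 52% figure):

```python
# Revenue = users x ARPU, so year-on-year revenue growth compounds the
# two growth rates. The rates below are hypothetical, for illustration.

user_growth = 0.15   # hypothetical 15% year-on-year MAU growth
arpu_growth = 0.32   # hypothetical 32% year-on-year ARPU growth

revenue_growth = (1 + user_growth) * (1 + arpu_growth) - 1
print(f"Implied revenue growth: {revenue_growth:.0%}")
```

Two growth rates in the teens and thirties multiply out to roughly 52%, which is why a company growing both its base and its monetization at once can post such outsized top-line numbers.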

The mysterious role of Instagram

One of the hardest things to get at in Facebook’s results is the role of Instagram. The app has been serving up ads for some time now and management has been talking up the benefits in general terms for several quarters. But it doesn’t break out metrics other than monthly active users (400 million at last count). In addition, Instagram users are excluded from the MAU count Facebook reports and on which it bases its ARPU calculations, even though Instagram revenue is included in ARPU. As such, there’s a little misdirection going on in that Facebook is including Instagram in the numerator but not the denominator here. There is, to be sure, a good chance many Instagram users are also Facebook users and so there’s not too much double counting, but I do wonder how much of the growth in ARPU is from Facebook monetizing Instagram better and how much comes from the core product. It almost certainly has a bigger impact on those rapidly growing (and large) US and Canada numbers than in other regions, but it’s likely being felt in additional markets too.
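That numerator/denominator mismatch can be illustrated with entirely made-up figures (none of these are Facebook’s actual numbers):

```python
# Illustrating the ARPU "misdirection": Instagram revenue is counted in
# the numerator, but Instagram-only users are excluded from the reported
# MAU denominator. All figures below are invented for illustration.

facebook_mau = 1_650   # millions, reported MAU (core Facebook only)
fb_revenue = 5_000     # $ millions, core Facebook ad revenue
ig_revenue = 300       # $ millions, Instagram ad revenue (not broken out)
ig_only_users = 100    # millions of Instagram users not on Facebook

reported_arpu = (fb_revenue + ig_revenue) / facebook_mau
full_base_arpu = (fb_revenue + ig_revenue) / (facebook_mau + ig_only_users)

print(f"Reported ARPU:              ${reported_arpu:.2f}")
print(f"ARPU counting all users:    ${full_base_arpu:.2f}")
```

The reported figure always comes out higher than one computed over the full user base, and the gap widens as Instagram revenue grows, which is exactly why it is hard to tell from the outside how much of the ARPU growth comes from the core product.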

The Messenger and WhatsApp opportunity

That all brings us to the next point. Facebook hasn’t officially turned on the monetization spigot for Messenger or WhatsApp yet. At the F8 developer conference recently, Facebook outlined its vision for turning these additional products (including Instagram) from products into platforms by opening up functionality to developers. But there’s also an opportunity to turn them into real moneymakers, as it has already done with the core Facebook experience and Instagram. It’s being very careful about that, as it was with the initial mobile ad experience on Facebook itself. Facebook has the additional challenge of getting around WhatsApp founder Jan Koum’s antipathy to ads, but I’ve no doubt it will find a way in time. I’ve written previously about some of the ways it looks to do this and I think these opportunities have significant additional growth potential if they’re done right. Given how fast Facebook is already growing, once it decides to make the switch on monetizing these products, things might even accelerate, which would be an impressive feat given the scale of the business. Those opportunities are largely tied up in increasing the ways in which businesses communicate with their customers, and ultimately building a commerce platform. That could raise ARPU even further, both for the core Facebook experience and for these offshoots and acquired platforms.

Headwinds

As I mentioned, Facebook’s competitors and counterparts among big tech companies seem to be facing a variety of headwinds – Apple, the increasing maturity of its three major product lines; Google, the shift to mobile ads (where Facebook is a much bigger force); Microsoft, the decline of the PC and traditional productivity software; Amazon, the challenge of competing in new markets like China and India where local competitors are much stronger, and so on. All of the above are also dealing with currency movements which devalue the contributions of their non-US businesses. What headwinds does Facebook face? The recent stories about a decline in the more personal sort of content sharing and the more persistent stories about teenagers moving to other platforms are certainly candidates. Neither of these appears to be denting Facebook’s growth so far – engagement seems to be rising, not falling and, though users may be sharing less personal material, they’re sharing more of other types of content including video and news articles, which make for better ad material anyway. But these are perennial threats and can’t be entirely dismissed.

From a perspective of internal threats to success, Facebook is placing some biggish bets on future projects like virtual reality (through Oculus) and research and development into new forms of connectivity, both projects outside its core business and therefore both potential distractions and financial sinkholes. But the scope of these efforts seems to be modest in the context of Facebook’s overall business and its margins aren’t suffering yet. Government action on Free Basics, as we’ve already seen in India, is another possible threat, but a modest one at this point and one few other governments seem willing to take on for now.

Perhaps the biggest threat of all is that platform owners like Apple and Google end up owning the next round of devices and platforms in the same way they have smartphones despite Facebook’s VR investments, and steadily squeeze out third parties they perceive as a threat. Facebook seems aware of this possibility and has invested not just in VR as a potential future interface but also in an increasingly OS-like presence on smartphones. So far, though, this is a theoretical threat at most and Facebook continues to dominate the time people actually spend on their phones. For now at least, Facebook looks set to continue to buck the market trend.

The Apple Watch Keeps My iPhone Addiction Under Control

Over the past year, many people, on noticing the Apple Watch on my wrist, could not help themselves but ask, “So, how do you like your Apple Watch?” After a short pause, my answer has always been, “I like it!” So, one year into the launch of the device seems like a good time to explain my hesitation in answering the question and see if what I was happy with a week in still makes me happy today.

First off, let me explain my hesitation in answering this very straightforward question. It has nothing to do with not liking the Apple Watch. My hesitation comes from the fact that articulating why I like it to someone who has not tried it is not easy. What I usually end up saying is I like it because it helps me keep my phone addiction under control.

While the capabilities of Apple Watch are the same for everyone who has one, the hook it provides for users can be quite different. This is why it is hard to articulate why you like it in a way that speaks to other people.

My hook has certainly been notifications. The reliable nature of the notifications provided by Apple Watch allows me to have my phone on silent all the time because I know I will always see that tweet or email or text that matters to me the most. While allowing me to be in control, notifications also prevent me from getting sucked into email or Twitter. I can see what is happening and I have to make a conscious decision to reach for the phone to reply or interact which, in turn, forces me to judge whether something is urgent enough to interrupt what I am actually doing at the time.

Overall, I feel the Apple Watch lets me be more in the moment. Prior to the Watch, you would never have seen me without my phone on my desk or on the restaurant table, screen up of course! Now, I can happily leave my phone in my bag without fear of hyperventilating. The pre-populated answers you can use to reply to a message, as well as voice dictation, are useful ways to do quick triage when waiting is not an option. Even those help limit my engagement, as they encourage a quick and to-the-point interaction. The bottom line is, I don’t think the Watch is about active engagement in the way the phone is. This does not make it less valuable. On the contrary. Having something that delivers what you need, in an immediate and easy manner, without a prompt from you is valuable and convenient.

The Activity App is not the Drill Sergeant I need

The Watch complications also help with my obsession with being in control, especially as I have my Activity app circles on. While I quickly got bored with the gamification aspect of the Activity app and no longer check what badges I’ve earned, I still check my daily activity and progress. As I want to get in shape, I wish the Activity app offered more than the current level of suggestions for the following week. I would certainly like to be pushed harder rather than asked to settle for a lower goal after missing one. In other words, I wish the Apple Watch were more like a drill sergeant than a supportive mother, giving me steps to reach my target rather than letting me settle for less.

More context, please!

Over time, I have noticed I am not as strict as I was at the beginning with my standing, and I blame the fact that the Watch is not always precise at capturing my stands. The lack of context negatively impacts my dependence on the stand reminders. If the Apple Watch knows I am doing 65 miles per hour, chances are I am in a situation where standing up and moving around is not an option. If the Apple Watch knows I am in a meeting because my calendar says so, I might not want to be reminded to get up. While I can mute reminders for a period of time, I wish there were some degree of automation to start with, even if this requires an initial setup.

This increased context, combined with the more coach-like experience I am hoping for, would turn the Watch, and Siri with it, into more of an assistant, deepening my relationship with the device.

Lack of compelling apps shows the current lack of developer interest

Aside from notifications, there are no killer apps I have found. After one year, I am still waiting for someone to make a decent sleep-tracking app, but I realize I might have to wait until the next-generation Apple Watch, which may have more sensors and spare users from having to enter when they go to sleep and wake up. Overall, I feel developers are really not putting much effort into thinking about the Watch in a unique way. For me, the Watch is certainly not a duplicate of my iPhone. The Watch is all about convenience and ease of use. Apple Pay probably best reflects what I mean. While I could use Apple Pay on my iPhone, it was not until I got it on the Apple Watch that I became a regular user.

Developers seem to be waiting for more sensors and more processing power on the Watch. I am not sure they are necessarily waiting for cellular, though. I know I am not in a hurry for that particular feature if it means a compromise on battery life, which right now serves me perfectly. While the Apple Watch has helped curb my phone addiction, I am not quite ready to leave my phone behind, but that is just me!

From controlling one addiction to becoming one

I like my Apple Watch and I would not go without it but I know I want more so I can love it. I want more so I can be addicted to it in the same way I have been addicted to my phone for so long. Only such a shift will make sure wearables avoid the same issues tablets experienced as they struggled to become a must-have for the masses.

IFA’s China CE Show and the Chinese Push for VR

I was in Shenzhen, China last week at the first CE China trade show, produced by IFA, the German company that also produces the giant IFA trade show in Berlin each September. Shenzhen, which has a population of over 10 million people, is about an hour’s drive from Hong Kong and just over the border in mainland China. The city is best known as the place where Foxconn and other factories build consumer products, including Apple’s iPhone and iPad, and is often called the “Silicon Valley” of China.

I wanted to attend the IFA CE Show specifically to see how the Chinese were going to apply their manufacturing magic to VR headsets and whether they could both bring prices down and bring out new VR headsets with broader appeal to mass consumer audiences anytime soon. What I found is the Chinese have really gone to town on making better Google Cardboard-style headsets into which you can put your smartphone and use it to power VR games, videos and images.

Most are made of solid plastic, have simple optics, and cost anywhere from $23.95 to $129.95 depending on the quality of the optics in the goggles. The main name brands are the Samsung Gear VR and Zeiss VR One, along with dozens of lower-end models. You can find many of them on Amazon and have them shipped to you directly from Shenzhen.

While that’s nice and it does allow people to use a better Google Cardboard concept as training wheels for VR, I was most interested to see if the Chinese manufacturers could help get the prices down on the more expensive headsets such as the ones from Facebook’s Oculus and HTC’s Vive. The Chinese manufacturers are well known to copy what they think will be big selling products and create similar models at cheaper prices when possible.

The Oculus Rift today costs $599 and requires a PC plus a roughly $300 graphics card. The HTC Vive is $799 and also needs a PC with an expensive graphics card to handle the rendering of the VR content.

As I expected, the Chinese manufacturers are hard at work creating similar headsets at cheaper prices. Behind the scenes, I was made aware of at least three VR headsets very much like the ones from Oculus and HTC that could be brought to market at $200 to $300 less. However, at the moment, they too need a PC and graphics card to run them. I am told these lower-cost versions of the Oculus and HTC headsets could be out by the next holiday season. However, it is unclear whether, when shipped, they will be able to run Oculus’ VR content or the SteamVR content that works on the HTC Vive.

But the Chinese are not content with just creating cheaper versions of today’s high-end VR headsets. They want to innovate in this space and create VR glasses that look more like a set of actual glasses. One such product I saw at the IFA China CE show came from a company called Dlodlo (pronounced dodo).

I could not get any specific specs from Dlodlo executives about the VR glasses and, given what I know about how much technology goes into VR headsets, I am skeptical Dlodlo will get this to market this year or even next.

Indeed, Mark Zuckerberg recently told Facebook developers that a VR headset design that looked more like glasses would be at least 10 years away. There needs to be many breakthroughs in moldable batteries and in making the chips required more adaptable, flexible, and powerful before we see the glasses of Zuckerberg’s vision.

Yet, the fact the Chinese are already being very aggressive in creating VR headsets that look more like traditional glasses has to be taken seriously. This design is the Holy Grail of VR glasses, and revolutionary work by Dlodlo and other Chinese manufacturers could push all headset vendors in this direction, possibly bringing less obtrusive, more natural-looking, socially acceptable VR headsets to market within 3-5 years rather than the 10 years out Zuckerberg expects.

In fact, in discussions with many Chinese manufacturers at the IFA China CE Show, it became clear the Chinese want to deliver the technological breakthroughs needed to create VR headsets that are more like glasses, and relatively inexpensive, as soon as possible. Although this may take a few more years to achieve, the Chinese are in a position to be a major influence on the VR glasses that eventually gain broad market acceptance. Even though some of these VR glasses may actually be designed outside of China, the innovation around manufacturing them, and what Chinese manufacturers are learning now in trying to create VR glasses of their own design, will go a long way toward getting socially acceptable VR glasses into the market much sooner than Mark Zuckerberg envisions.

What We’re Learning from Smartwatch Adoption

A year ago today, Apple released its long-anticipated Apple Watch. Over the ensuing year, we’ve learned a lot about an entirely new tech category.

The Consumer Technology Association (CTA) estimates 17 million smartwatches were sold in 2015, up from 4 million in 2014. I can name more than a few categories that would love to experience 325 percent growth over a 12-month period. Few ever do.

Forthcoming CTA research suggests roughly eight percent of households own a smartwatch today – almost double the number last year. Of those planning to buy a smartwatch in 2016, 72 percent will be first-time buyers.

Smartwatches remain a nascent category. The majority of those who will eventually own a smartwatch do not yet own one. The use case scenarios for this device and similar ones have yet to be defined. Smartwatches are trying to do something few tech categories aspire to. Through smartwatches, we are embedding the internet into new pockets of our everyday lives. This bigger and broader transformation will redefine the boundaries of connectivity.

One of the greatest struggles for a new experiential category is the demand for instant, widespread adoption. Take a step back. The most successful categories, in the long term, are ones that redefine how we do things. They redefine leisure and productivity, and ultimately redefine who we are.

In 1984, the VCR avoided being outlawed by the Supreme Court by a single vote. At the time, critics were overly concerned about the record button, but it was the play button that redefined us. By the 1990s, we were spending more on video rentals than at the box office. The device gave way to an entirely new sector of the economy and an entirely new way of life. Today, streaming services are once again redefining leisure.

Categories are quickly panned when mature use case scenarios aren’t easily and instantly identified. The smartphone was introduced in 2003 and the first iPhone came to market in 2007. The smartphone has changed how we do numerous activities – from navigating traffic to shopping to listening to music. All of these activities are far afield from the original premise of a portable telephony contraption.

No one saw smartphones for the mini-computers we shove in our pockets today and no one foresaw how apps would change the way we approached the internet on these devices. In the early days of the smartphone, the internet was a browser technology, akin to the way we experience it on the computer. But the introduction of apps would redefine how we leveraged the internet to disseminate information, data and services and, as a result, myriad completely new services were born.

The smartwatch isn’t simply the next new shiny gadget – it is something radically more. At least, the potential it represents is something more. Nothing in the past year suggests that potential has diminished.

Many of us are thinking too narrowly about smartwatches. We focus on aesthetics and design. We focus on all of the electronics that power these small wonders of innovation strapped to our wrists. But we don’t stop to consider what we are really asking of the device. Or perhaps more importantly, we aren’t talking about what the smartwatch is asking of us.

What we should fundamentally be asking about the smartwatch is this: if the internet makes sense on the wrist, what does that mean for society?

At its most fundamental level, the smartwatch represents a sea change in how we connect. We are driving computing, sensors, and the internet into new areas of our lives. Never before have all of these building blocks been available to us as they are today. It wasn’t feasible to deliver the internet to the wrist until now.

Academic research suggests it takes five to seven years to unleash the productivity-enhancing characteristics of new innovation. Let us look beyond the obvious. We aren’t one or two years into a brand new category; we are one or two years into a brand new way of thinking about the internet. What we learn from this early experimentation will help color and characterize where the internet goes from here.

A few years from now, the smartwatch as we know it today may take an entirely new form. But what we learn will define where the internet goes next and how we get it there.

Apple Watch at One: What Next?

This week, the Apple Watch will celebrate its first anniversary. If the rumors are correct, we should expect a new Apple Watch this fall, most likely in September. Given this, it’s worth thinking about the Apple Watch as it is today and what should happen when Apple updates the device in a few months.

Vision versus Reality

Apple’s vision for the Apple Watch was a little like the original iPhone strategy, in that it had three parts. The Apple Watch was to be:

  • the most advanced timepiece ever created
  • a revolutionary way to connect to others
  • a comprehensive health and fitness companion

In addition, Apple made much of the apps that would be available for the device and showed several on stage.

In reality, while the Apple Watch turns out to be very good as a fitness device, the rest of the vision articulated a year ago has turned out to be a little off the mark. From my personal use and from survey data from Wristly and others, it would appear people do indeed use the Apple Watch as a fitness companion but the main value comes from notifications and the watch face complications. Apps have largely been a disappointment, even since watchOS 2 launched last fall, because they simply load too slowly and inconsistently to be useful. In my experience, the communication aspects – notably Digital Touch – have been a novelty that quickly wore off. I do use the Apple Watch for communication but I don’t use any of the Watch-specific features – Messages is one of several apps where I use actionable notifications (also available on iOS) rather than the app itself to communicate.

I use the fitness features regularly – I wasn’t in the market for a fitness tracker as such, but having a regular reminder of my progress against goals right on my wrist, which I now consult dozens of times a day, has been very motivating. I don’t use any third party apps for fitness tracking most of the time, but I find the three rings plenty to keep track of how I’m doing, try a little harder if I need to, and so on. I find the Modular watch face with its many complications invaluable at this point for getting a quick sense of what’s going on in my life. I’ve worn watches all day every day since I was about six, so it is natural to look at my wrist regularly throughout the day to check the time. Now I get more value out of that glance because I also see what’s next on my calendar, what the weather is, how I’m doing with my fitness goals, and so on. The watch face provides the true “Glance” functionality for me, rather than the feature actually called that, which I never use. Notifications – which I carefully tuned when I first got the Watch – are the other essential part of its functionality for me. I get a lot of notifications, and it’s great to be able to see these on my wrist, dismiss some, respond to others, make a mental note to deal with others later, and switch to another device to deal with the rest. I love that my notifications no longer have to beep or vibrate in a way that’s noticeable to others, and are much more personal than in the past.

All of this means the Apple Watch for me largely fills the same functions as most other early smartwatches, though it does them considerably better. This takes me back to a report I wrote in the months before the Apple Watch launched which said that as long as smartwatches mostly focused on notifications and fitness they’d find a limited market, since most people don’t care about those things. It’s better than other smartwatches, but it’s not as different in the jobs it does as Apple’s vision for it would have suggested.

Apple Watch 2

All this raises the question of what Apple might do with the second version of the Watch to address these issues and expand the addressable market for the Watch. The most obvious thing would be boosting the performance of the Watch significantly, allowing apps to perform more effectively. That means boosting the processor on the Watch such that it can handle these tasks natively with a much quicker response time. It likely also means giving the Watch more network autonomy. Apple likely isn’t ready to add an LTE chip to the Watch, but having its own connectivity would make a big difference in the responsiveness. At least being able to connect to WiFi when available independent of the iPhone would help here, though it wouldn’t help when out and about. I suspect if Apple gave the Watch a big enough spec bump, apps would become truly useful and it, in turn, would spark innovation in that area. The initial batch of Watch apps were largely ineffective because of a combination of lack of vision and poor performance. Apple needs to fix both problems and that likely starts with the hardware.

Another area where Apple could extend the functionality of the Watch is by adding more sensors. Independent GPS is an obvious one, and it would fix one of the few big holes from a fitness tracking perspective. But it could also be extended with a variety of body sensors, which could provide additional information for health and fitness apps. This could be done in the Watch itself (that’s certainly where the GPS needs to reside), but it could also be done through either Apple-made or third-party Watch bands which could be specialized for particular needs. This could be one way to address the regulatory challenges associated with moving deeper into the healthcare sphere, which I alluded to last week. Opening up a Band connector to third parties could enable this innovation without Apple having to deal with the FDA and equivalent bodies itself.

From the reporting I’ve seen, it seems more likely Apple will address the latter of these two needs than the former, and I think that’s a shame. The market for the Watch can only really expand beyond its current scope if the Watch does meaningfully more. That means unleashing the thousands of apps that could come to the device if the hardware really supported them. I love my Apple Watch but it still feels like I’m using a subset of what could be a much fuller set of features if Apple were to really move its performance forward in the fall.

Apple Shouldn’t Cross That Road Till They Come To It

Part 1: Argument

On April 19, 2016, Ben Thompson of Stratechery wrote Apple’s Organizational Crossroads. If you haven’t read it already, I highly recommend you read it now. In a nutshell, Ben Thompson’s contention is:

1) Apple employs a (rarely used) functional organizational structure; ((“(T)he very structure of Apple the organization — the way all those workers align to create those products that drive those exceptional results — is distinct from nearly all its large company peers.” ~ Ben Thompson))

2) This organizational structure has served Apple well;

3) However, Apple is moving toward Services;

4) To do both iPhones and Services well, Apple needs to move from a functional organization to a divisional organization; and

5) Ben Thompson is not sure if Apple can successfully make the transition.

“It’s very difficult for a publicly traded company to switch,” Bezos said. “So, if you’ve been holding a rock concert, and you want to hold a ballet, that transition is going to be difficult.” ~ Jeff Bezos

This is a very rudimentary outline of Ben Thompson’s position. Again, READ THE ORIGINAL ARTICLE.

Divisional Organization

Most companies, especially large companies, work off a divisional structure. If Apple were divisional, they would have divisions for the iPhone, iPad, Apple Watch, iPod, etc.

Divisional organization has many advantages — which is why almost every company, save Apple, uses it. However, one of its disadvantages is difficulty in letting go of the old and transitioning to the new. Why? Because each division is a self-contained fiefdom and division managers — and those in their charge — are highly incentivized to protect their fiefdom’s interests, even if it means putting their company’s overall interests at risk.

[Of course, no one consciously tries to harm their own company but, as Upton Sinclair said, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”]

Most discussions of decision making assume that only senior executives make decisions or that only senior executives’ decisions matter. This is a dangerous mistake. ~ Peter Drucker

Microsoft, for example, had Windows which was a cash cow that rightfully dominated the company for ~30 years. However, the Windows division was so large and so powerful that it became the end rather than the means. In other words, the company existed to support Windows rather than the other way around.

The worst enemy of major consumer electronics companies is not suddenly weakening sales, which sometimes shake firms out of their stupor. It’s that last, big, almost obsolete blockbuster that gives executives a reason to avoid change. ~ Tero Kuittinen

Any internal efforts to replace or displace Windows were quickly squashed. Microsoft worked very, very hard to diversify in the 2000s, but everything had to be made compatible with Windows. And so, a series of ambitious experiments with televisions, gaming, computing in living rooms, tablets, phones, etc. all had their reach constricted because they were tethered to Windows. They had no chance of becoming the next big thing because they were tasked with supporting the current big thing.

The worst place to develop a new business model is from within your existing business model. ~ Clayton Christensen on Twitter

Idle observation: Apple is basically working though failed Microsoft products from 15 years ago and doing them right. ~ @BenedictEvans

Functional Organization

Apple, on the other hand, is famous for taking an approach that is the polar opposite to Microsoft’s. Instead of protecting their money makers from internal competition, Apple is a serial cannibalizer of their own products.

The Macintosh devoured the Apple II, the iPhone consumed the iPod. The iPad even competed with the MacBook and the recent MacBooks have returned the favor.

If you don’t cannibalize yourself, someone else will. ~ Steve Jobs

If anybody is going to cannibalize us, I want it to be us. I don’t want it to be a competitor. ~ Steve Jobs

Apple’s more expensive, higher-margin device continues to cannibalize the cheaper, lower-margin one. ~ Benedict Evans on Twitter

Can’t accuse Apple of not cannibalizing itself. Strongest competitors to the iPad are the iPhone 6+ and the new Macbook. ~ Benedict Evans on Twitter

How does Apple manage to transition from money-maker to money-maker so (seemingly) seamlessly? They are set up with a functional, rather than a divisional, structure. Those functions consist of marketing, engineering, finance, etc.

Whereas divisions are incentivized to compete with one another and disincentivized to cooperate, a functional organization has no incentive to compete with (or actively sabotage) the efforts of others and every incentive to coordinate efforts within the company. Engineers within Apple can work on the iPhone or the iPad or the Apple Watch and know they are helping the company as a whole.

However, being a functional organization is no panacea either (else everyone would do it). The very strength of a functional organization is also its weakness. While it’s true that the employees of a functional organization aren’t incentivized to compete with other parts of the organization, they are also not incentivized to compete at all, i.e., there are few external incentives to motivate employees to benefit the corporation as a whole. How do you motivate employees who do not directly benefit from a product’s success?

Call it what you will, incentives are what get people to work harder. ~ Nikita Khrushchev ((Excerpt From: Steven D. Price. “1001 Smartest Things Ever Said.” iBooks.))

Further, functional organizations lack accountability. When a division fouls up, you know who to blame. When the iPhone fouls up, who exactly do you blame? Who do you hold accountable? Marketing? Engineering? Finance? They may all bear part of the responsibility, but none of them bear all of the responsibility.

Dysfunctional Services

Ben Thompson persuasively argues that:

1) Apple is bad at Services. For example, all of the following services could be much better than they are.

— App Store search
— Apple Music
— Cloud Services
— Apple Pay
— iMessage
— Siri

2) Apple is moving towards being a Services company;

3) The functional organization that makes it possible for Apple to create superior hardware is the same organizational structure that makes it impossible for Apple to create superior services.

I will not repeat Ben Thompson’s arguments in support of this thesis because 1) He makes them so much better than I can; and 2) for the third time, you should READ THE ORIGINAL ARTICLE.

Never, ever, think about something else when you should be thinking about the power of incentives. ~ Charlie Munger

Ben Thompson’s entire article is great but, in my opinion, his spot-on explanation of why Apple’s services can never hope to match their hardware prowess under Apple’s current organizational structure is the crème de la crème. Truly outstanding analysis. (Have I mentioned you should really read his original article?)

Your current business model limits your strategic options because that’s what business models do. ~ Saul Kaplan (@skap5)

Part 2: Rebuttal

DuPont Analogy

Ben Thompson points to DuPont as having been in a situation analogous to the one Apple is in now. DuPont made gunpowder. They decided to diversify into paint because the processes for creating gunpowder and paint were surprisingly similar. However, upon entering the paint business, DuPont discovered that the very business model that made the sale of gunpowder so successful also made the sale of paint unsuccessful. In other words, the processes for making gunpowder and paint were similar, but the business models for successfully promoting them were dissimilar. So what to do?

DuPont solved the problem by breaking the company into two different divisions, with two different business models, tailored to create and promote two different products. Ben Thompson argues that the iPhone is DuPont’s gunpowder, Services are DuPont’s paint and Apple — like DuPont before it — needs to break these two very distinct business products into two very distinct business divisions.

The problem for Apple is that while iPhones may be gunpowder — the growth was certainly explosive! — services are paint. And, just as Dupont learned that having a similar manufacturing process did not lead to similar business model, the evidence is quite clear in my mind that having iPhone customers does not mean Apple is necessarily well-equipped to offer those customers compelling services. At least not yet. ~ Ben Thompson

I’m not convinced that DuPont is analogous to Apple. DuPont added paint to its gunpowder lineup because it wanted to diversify. But Apple doesn’t diversify. They cannibalize.

— When Apple created the Macintosh, they weren’t diversifying from the Apple II.

— When Apple created the iPhone, they weren’t diversifying from the iPod.

— When Apple creates whatever-the-heck-they-create-after-the-iPhone, they won’t be diversifying from the iPhone.

The Macintosh and the iPhone weren’t diversifying products, they were SUCCESSOR products.

Solution Or Strategy Tax?

Ben Thompson proposes the following solution to Apple’s services problem:

The solution to all these problems — and the key to Apple actually delivering on its services vision — is to start with the question of accountability and work backwards: Apple’s services need to be separated from the devices that are core to the company, and the managers of those services need to be held accountable via dollars and cents.

It’s true that if you broke Apple into an iPhone division and a Services division, each division would be incentivized to follow the path that best served their respective purposes. But it’s also true that neither division would be incentivized to work with each other or for the company as a whole.

Currently, Apple’s Services exist to serve the iPhone. Services may be huge, they may make gobs of money, but they exist not to be huge and not to make gobs of money but to support the iPhone. This would not be the case if Services were broken into a separate division.

A separate Services division would inevitably compete AGAINST THE INTERESTS of the iPhone instead of supporting it. Why? Because that’s what divisions do. Unlike DuPont — whose two different product lines could both simultaneously strive for success without competing against one another — the iPhone and Apple’s Services are inextricably intertwined.

Further, if Apple used divisions — just like everyone else — Apple would become just like everyone else. Apple would lose one of the attributes that makes it unique and uniquely successful.

My Frustrating Conclusion

Ben Thompson is right to say that Apple’s hardware-focused business model makes it impossible for Apple to excel at Services. In other words, Apple’s services will always be “meh” because Apple’s business model is tailored to create hardware on a periodic timetable, while services require one to focus on, and build up expertise in, an entirely different set of iterative processes. However, I think the proposed solution — breaking the iPhone and Services into separate divisions — is a cure that would be worse than the disease. Breaking Apple into two divisions would not create one excellent hardware division and one excellent Services division — it would, instead, create one conflicted and dysfunctional company.

As frustrating as this may be, I think Apple should continue to be so-so at Services so that it may continue to be so, so great at hardware.

Part 3: Counterargument

Self-Disruption vs. Self-Cannibalization

Industry observers ((Not Ben Thompson. I’m speaking in generalities.)) often say Apple disrupts itself.

It doesn’t.

Disruption is not about products, it’s about business models.

Products don’t get disrupted, businesses (and people) do. ~ Horace Dediu (@asymco) 12/6/14

Disruption is about business models, not technology. ~ Ben Thompson (@monkbent)

Incumbents are rarely disrupted by new technologies they can’t catch up to, but instead by new business models they can’t match. ~ Aaron Levie (@levie)

Apple doesn’t disrupt itself — that would require a new business model and Apple’s business model has remained constant throughout its 40 years of existence. Apple cannibalizes itself and obsoletes others.

— The Macintosh did not compete with the Tandy Radio Shack TRS-80, the Commodore PET, the Atari, the Texas Instruments TI 99/4, the IBM PC, the Osborne, the Franklin or even the Apple II.

— The iPod/iTunes combination did not compete with existing MP3 players.

— The iPhone did not compete with feature phones.

You don’t obsolete a product by making a slightly better, or even a much better, product. You obsolete a product by making it easier to do what that product does without having to use that product. You obsolete a product by creating a wholly new category in which the old product can’t compete.

[pullquote]Apple creates new categories[/pullquote]

The Macintosh, the iPod and the iPhone weren’t better versions of existing products. They weren’t incremental improvements. They were paradigm shifts.

When a paradigm shifts, everyone goes back to zero. ~ Joel Barker

Apple doesn’t outrace their competition. They start an entirely new race and then get such a big initial lead that no one else can catch up.

Apple doesn’t create new products. They create new product categories.

Milking The iPhone

Apple does not seek to move from gunpowder to gunpowder and paint — from iPhone to iPhone and Services. They do not seek diversification. They seek succession. They seek to move from gunpowder (iPhone) to the next explosive product category. And Services are most definitely NOT the next big thing.

If I were running Apple, I would milk the Macintosh for all it’s worth and get busy on the next great thing. ~ Steve Jobs [Observation made when running NeXT]

If I were running Apple, I would milk the iPhone for all it’s worth and get busy on the next great thing. And that’s exactly what I think Apple is doing.

Apple shouldn’t divert resources toward services. It’s the other way around. Services should be directed toward supporting the iPhone — even if that means that Services will never reach their full potential.

But perhaps you’re asking, “Why should the iPhone be prioritized over Services?” Why? Because, as I wrote two weeks ago, the iPhone dominates smartphones and smartphones are the most dominant tech product of our time.

Ever think about an old friend and wonder what they’re doing right now? They’re playing on their phone. Everyone is playing on their phone. ~ Jazmasta on Twitter

Conclusion

Remember, I’m not saying Apple won’t create a Services division. I have zero knowledge of what Apple is going to do. And Ben Thompson makes a pretty compelling case for why Apple should move in that direction. But I don’t think Ben Thompson is actually arguing that it would be great if Apple created a new services division. I think he’s arguing that Apple should create a new services division if they want Services to be great. Ben Thompson is doing what good analysts do. He’s not giving us the right answer. He’s asking us the right question.

The value of analysis lies less in answering questions than it does in questioning answers.

Frankly, if Apple created a Services division, I would fear for Apple. I don’t think creating a Services division would be a sign that Apple was acknowledging the path they were already on. I think it would be a sign that they had strayed from the path and lost their way.

But I don’t think that’s going to happen. We’ve already been down this road with Apple Retail. John Browett — Apple’s fired head of Retail — wanted to make Apple Retail a bigger money maker.

John Browett seemingly made the mistake of seeing Apple Retail as something to optimise rather than cherish. Wrong. ~ Benedict Evans (@BenedictEvans) 4/21/16

John Browett didn’t understand that Apple’s goal is not to help make Apple Retail great. Rather, the goal is for Apple Retail to help make Apple great. The same holds true for Apple Services.

So many companies don’t know why they succeed. ~ Ben Thompson (@monkbent) 12/4/14

So very many companies don’t know why they succeeded. Apple always has. Ben Thompson says Apple is at a crossroads. And maybe they are. Me? I hope they stick to the straight and narrow.

The PC Industry’s Consumer Conundrum

The computer industry continues to experience tough times, as evidenced by IDC’s preliminary 1Q16 shipment numbers and Intel’s recent quarterly earnings data. Charles Arthur wrote a good column on the topic last week. I’d like to dive deeper into one of the fundamental issues facing the industry — declining consumer demand for home PCs. The fact is, fewer consumers are replacing their old desktops and notebooks in mature markets and, in emerging markets, they’re simply not buying that first PC at all, choosing tablets and smartphones instead. The result is a shrinking consumer PC installed base which will, in time, lead to a diminished demand for consumer-focused PC apps and services, creating a vicious circle. It seems the power and complexity that keeps the PC relevant for commercial use cases is now working against it with consumers. The industry seems unable to articulate a compelling reason why that should change.

IDC’s top line PC numbers tell the broad story and survey results help shed light on the details. In 2005, worldwide shipments of PCs totaled 208.8M units and consumer purchases made up about 40% of that total. Over time, an increasing number of people bought PCs to use at home for a range of tasks but more specifically to access the internet. By 2011, PC shipments worldwide peaked at 363.8M units and the consumer share grew to 54%. That year was the height of consumer PC shipments.

In 2012, the entire PC industry began to contract and the impact of smartphones and tablets on device spending started to become evident. By 2015, consumers constituted about 49% of a diminished worldwide PC shipment total of 275.8M units. IDC’s worldwide forecast numbers show consumer shipments continuing to decline in terms of shipments as well as the percentage of the total PC market throughout the five-year forecast period. And, while commercial shipment volumes will eventually stabilize, there’s currently no expectation consumer volumes will do the same.
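To make the scale of that decline concrete, the implied consumer unit volumes can be back-calculated from the shipment totals and share percentages quoted above. This is a rough sketch in Python; the share figures are rounded, so the results are approximate:

```python
# Back-of-the-envelope consumer PC volumes implied by the IDC totals
# and consumer-share percentages quoted above (shares are rounded,
# so these figures are approximate).
shipments = {  # year: (worldwide PC shipments in millions, consumer share)
    2005: (208.8, 0.40),
    2011: (363.8, 0.54),
    2015: (275.8, 0.49),
}
for year, (total, share) in sorted(shipments.items()):
    print(f"{year}: ~{total * share:.0f}M consumer units of {total}M total")

peak, latest = 363.8 * 0.54, 275.8 * 0.49
print(f"Decline from the 2011 consumer peak: ~{1 - latest / peak:.0%}")
```

By this reckoning, consumer shipments fell from roughly 196M units at the 2011 peak to roughly 135M units in 2015, a drop of about 31% in four years.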

Survey Says

A review of IDC’s massive, multi-country ConsumerScape 360 survey of digital consumers shows several clear and troubling trends when it comes to current consumer PC owners. Chief among them: fewer are using their PC for daily tasks. Over the years, we’ve polled respondents about their use of the PC for a wide range of tasks and tracked what percentage said they did particular tasks on a daily basis. For example, in 2012, more than 90% of computer-owning respondents said they checked email daily on their PC. By 2015, that number had decreased to about 65%. During that same period, the percentage that used a PC daily for online search declined from 78% to 61%, reading online news slipped from 66% to 53%, and social networking dropped from 66% to 55%. The survey doesn’t illuminate the why behind these declines, but it seems pretty clear that, for many, the PC is simply overkill for the task at hand, tasks better served by simpler devices such as smartphones and tablets. When we asked the same task use questions of tablet and smartphone users, as you might expect, the percentages using them daily went up over the same period.

Equally troubling in that same large survey, when we asked current computer users in 2015 about their future PC-purchase plans, a sobering 65% said they had no plans to buy a new PC in the future. When we asked respondents who currently owned a tablet, smartphone, or television (but not a PC) about their PC purchase plans, 69% said they had no intention of buying one. (Note: IDC is currently fielding the 2016 version of the survey.)

Industry Ramifications

Many of the top PC and component vendors in the industry like to point to the large installed base of aging PCs and suggest it is just a matter of time before their owners replace them with new ones. While this perspective likely still applies to much of the commercial installed base, it seems increasingly clear it’s not true when it comes to the consumer installed base. In highly penetrated PC markets such as the United States, there was a time when it wasn’t unusual for a home to have multiple PCs. It seems unlikely this will be the norm going forward. The best case scenario is likely to be that over time, as these multiple PCs age out, many households will buy a single new PC to continue to do the decreasing number of tasks to which tablets and smartphones remain ill-suited. An increasing number of these replacement devices will be detachable products such as Microsoft’s Surface Pro, a category IDC currently counts within its tablet numbers.

While the overall PC market is contracting, the weakness in consumer demand is having a dramatic impact on those vendors that traditionally drive significant consumer shipment volumes. As widely noted, Apple has been the exception here. It is fair to argue that few within the PC industry, save Microsoft itself, are in a position to replicate the tightly coupled hardware, OS, and services offering Apple has put together on the Mac. An equally important aspect of the Mac’s success is the growth of the iPhone and the company’s ability to drive a better experience for those consumers who buy both. This is an area other hardware vendors who play in both the PC and smartphone markets clearly need to explore, but their dependence on Microsoft and Google for the underlying operating system will make this exceedingly difficult to replicate.

In the end, it is hard to envision a broad consumer PC rebound. Microsoft has served up a good operating system in Windows 10 and hardware vendors are creating compelling products. But a PC is still a PC, with all the good and bad that entails. It seems many consumers have simply moved on.

Google’s EU Antitrust Battle

In a less than surprising move, the EU has declared that Google is in breach of antitrust regulations and has abused its dominant position. There are three major arcs to its claims:

In today’s Statement of Objections, the Commission alleges that Google has breached EU antitrust rules by:

  1. requiring manufacturers to pre-install Google Search and Google’s Chrome browser and requiring them to set Google Search as default search service on their devices, as a condition to license certain Google proprietary apps;
  2. preventing manufacturers from selling smart mobile devices running on competing operating systems based on the Android open source code;
  3. giving financial incentives to manufacturers and mobile network operators on condition that they exclusively pre-install Google Search on their devices.

The Commission believes that these business practices may lead to a further consolidation of the dominant position of Google Search in general internet search services. It is also concerned that these practices affect the ability of competing mobile browsers to compete with Google Chrome, and that they hinder the development of operating systems based on the Android open source code and the opportunities they would offer for the development of new apps and services.

In the Commission’s preliminary view, this conduct ultimately harms consumers because they are not given as wide a choice as possible and because it stifles innovation.

The emphasis of this case seems to be more about search dominance than anything else. As I digested this yesterday, I thought about all the angles I could take with it. I can certainly sympathize with the EU’s position. It has a point that Google Search being the default search solution stifles any chance for a competing search product. Power users certainly have the choice to change their default search engine to something else and I’m sure some do. However, the issue here is the choice made by the OEM. One could reasonably claim that, if an OEM loaded DuckDuckGo or Yahoo as its default search engine, the consumer experience would suffer, because in reality Google Search is dominant because it is the best. The same goes for the claim that Chrome must be pre-installed on Google-approved Android devices. In many cases, OEMs still ship a browser other than Chrome and put it on the home screen, while they hide Chrome in a folder or bury it in a sub-menu. Again, Chrome is the better browser and, even without home screen placement on many modern Android devices, Google reported Chrome has passed 1 billion active mobile users. It seems consumers are playing a role in choosing as well. I’d have an easier time accepting this claim if Google forced OEMs to load the Chrome browser into the default dock, but they don’t.

It also seems the EU has turned a blind eye to the existence of Cyanogen, which comes quite close to providing a counter case to each of the EU’s claims. Google allows Cyanogen to provide a flavor of AOSP to customers with a host of bundled services. You can ship a version of Cyanogen that comes with only the bare basics of Google services, like the Play Store and Search, and provide all the other solutions yourself. Cyanogen allows a vendor to pass only device-level certification, meaning that brand can also sell generic non-Google AOSP code, something the EU says is not possible when, in fact, it is. The EU claims that, once a vendor ships a Google-certified smartphone, it can no longer ship an AOSP version, yet Google-certified brands do this very thing in China every day.

The main point I landed on as I reflected on this is that, even if the EU wins and Google provides even more flexibility than it already does, I honestly don’t think anything would change. Even if an OEM could bundle an alternate search engine, I doubt most would, except for a financial benefit, which again could hurt the user experience. I don’t think anyone is going to create a new search startup just because there is now an opportunity to be bundled on Android (and this is the innovation the EU claims is being stifled). I firmly believe that, even if the EU wins and Google changes its tactics, most OEMs would keep doing exactly what they do today.

While there isn’t much strength in Google’s rebuttal blog post either, as the examples it uses don’t really help its position, it at least offers some updated terminology on the open yet controlled/managed dynamic of Android.

I understand why Google has its certification process in place: to make sure approved partner devices meet certain hardware requirements and are capable of running certain software experiences, in order to provide as good an Android experience as possible. I also sympathize with the OEMs, who need to make money while still passing Google’s certification requirements. While I’m more on the side of Google than the EU here, I still believe Google needs to make more structural changes to Android to provide partners with more revenue opportunities.

Technology’s Relationship with Government is Complicated

This week, the European Commission finally issued its preliminary findings in its investigation of Google’s Android operating system. Also this week, Apple’s general counsel Bruce Sewell testified in front of a United States congressional committee looking into issues around encryption and security. And last week, Microsoft filed a lawsuit against the US Department of Justice concerning warrants for email data. These three news events are part of what’s sure to be a growing wave of interactions between major technology companies and governments around the world and a relationship between tech companies, governments, and regulators that’s going to get a lot more complicated in the years to come.

Encryption and Security the Tip of the Iceberg

The current battles between Apple and the FBI and Microsoft and the DoJ are just the tip of the iceberg here, but they are reflective of the ways in which technology has become not just part of our lives but central to them. Our most personal communications are now digital; they are stored on and pass through our personal and business devices, leaving digital traces to an extent unprecedented in history. Warrants, which in the past would have been issued to individual or corporate suspects for reams of paper, are now issued to big tech companies for digital records. Because of the nature of digital record keeping, these tech companies are now involved in these legal proceedings in a way that’s fundamentally new and requires new ways of working together with law enforcement. Requests by law enforcement are going to become more and more invasive (and potentially secretive, since obtaining a digital record doesn’t deprive the owner of their copy and can therefore be done without their knowledge) and, at some point, society needs to establish new rules for these searches.

Expansion of the Role of Technology Will Broaden Brushes with Government

However, as technology companies seek to broaden the scope of their products and services into new areas, many of the new activities they’ll engage in will bring them into more frequent contact with governments and regulators around the world, in ways that have nothing to do with law enforcement. Consider two members of the vanguard of “sharing economy” companies – Uber and Airbnb. Both have had frequent brushes with regulators, sometimes being prevented from operating entirely in new markets, sometimes being forced to make concessions or charge additional fees to their customers in order to continue operating. Both companies have also often taken an “easier to ask forgiveness than permission” approach to launching in new markets, relying on pressure from avid customers to push through permission to operate over the objections of powerful lobbies and regulators.

That approach may work for some companies in some industries, but it’s unlikely to be the model to follow in many other markets tech companies want to expand in to. For example, pushing into financial markets like payments and banking in the same way would certainly not work – those markets are highly regulated and the punishments for flouting regulations can be severe. Further, it’s one thing for a startup with no reputation to lose to engage in this cavalier behavior, but quite another for an established company with a brand to maintain to do the same.

Significant Changes are Coming to Product Development

As technology companies move deeper into healthcare, education, payments, and other areas, they will find themselves increasingly having to get permission from the relevant government agencies and regulators in many countries around the world before they can begin operating. This will have several significant impacts on the way these companies normally like to do business:

  • It will lengthen the time to market for new products and services because regulatory approval may well have to be obtained first
  • It will make it harder to maintain secrecy for new products and services when many bodies around the world have to be consulted before launch
  • It will make it harder to launch new products and services (or upgrades to existing ones) at specific predictable times, because regulatory approvals are unpredictable
  • It will make it harder to do big-bang global launches for new products and services because of all the regulatory bodies involved

That’s a shame, because these are some of the areas where technology has the greatest potential to do good in the world and make real life-changing progress. That’s not to say it can’t still be done, but it will require substantial change in the way these companies do business.

Partnering May be the Best Approach

One possible approach is for tech companies to remain focused purely on the technological side of these innovations and allow others with specializations and experience in the respective industries they wish to enter take on the regulatory hazards. Apple CEO Tim Cook’s approach to this problem was articulated in an interview with the UK’s Telegraph newspaper last year:

“We don’t want to put the watch through the Food and Drug Administration (FDA) process. I wouldn’t mind putting something adjacent to the watch through it, but not the watch, because it would hold us back from innovating too much, the cycles are too long. But you can begin to envision other things that might be adjacent to it — maybe an app, maybe something else.”

I see one possible outcome of the approach articulated here as a partnership approach under which Apple would create hooks in hardware and software for third party devices to interact with the Watch and provide additional functionality subject to regulation. That wouldn’t hold Apple back from innovating on a predictable and rapid schedule, but would open the door to others more comfortable with the slow-moving regulatory approval process. It’s easy to imagine such an approach being used in a variety of other markets too, with appropriate partners chosen for specific projects or opportunities. Whether this approach or another is taken, however, it’s clear the future is going to be very different from the past. Big tech companies are going to have to get comfortable spending a lot more time engaging with governments in one way or another.

Huawei’s Push into the High End Depends on Continued Growth of its Honor Brand

Last week, I had the pleasure of attending Huawei’s Analyst Summit in Shenzhen, China. I had the chance to hear senior management talk about the company’s strategy in the different segments in which it operates. While I am well aware of how wide Huawei’s reach into the mobile market is, both consumer and enterprise, I am always reminded at this event – this is my 4th – of how little the consumer business contributes to the overall company. Yet, this is by far the most visible part of what Huawei does.

In 2015, according to its internal numbers, Huawei’s high-end smartphone sales grew 72%, going from 18% of its smartphone sales in 2014 to 31% in 2015. Its stated target for 2016 is aggressive: 140+ million smartphones shipped, with mid- and high-end devices contributing over 55% of revenue.

Honor: the wind beneath the wings

Doing some quick math on the numbers leads me to the conclusion that, while more than half of revenue will come from mid- and high-tier devices, Huawei will be relying on Honor to continue to grow in volume.

Huawei created Honor in 2014 to try and diversify its go-to-market strategy, following the success Xiaomi was having in China. Honor is mostly a direct-to-consumer business sold through Huawei’s online stores and select online partners, like Amazon, that supplement the main channel. When Honor first launched, there was also an attempt to distance the brand from Huawei in order to penetrate markets such as the US, where “Made in China” carried privacy and security concerns.

Over time though, Honor has become more synonymous with affordable than alternative. The Honor 5X 16GB unlocked version currently sells on Amazon for $199.99. This is the entry price for Honor that increases to $399 for its high-end models. In these segments, Honor competes with the Huawei G and Huawei Y products.

At the event last week, there seemed to be a push to further distance Honor from Huawei, to the point that very little was said about the line other than that it is a totally separate brand. Yet, as Huawei wants its name to be associated with the high-end market, we could see a clearer split in the portfolio, with Honor becoming the volume lifter, especially in markets such as the US where online sales are growing due to the decoupling of phone and tariff costs. What will be interesting to see is how Huawei develops a dual vs. single brand strategy. Whether under the Huawei or the Honor brand, it is clear to me the mid-tier devices will be key in driving growth and, more importantly, the economies of scale the high-end can benefit from. After all, we cannot ignore the fact that, even for Samsung, non-Galaxy and Note products still represent 30% of the installed base across markets.

Huawei’s opportunity in 2016 will be coming from more price-sensitive markets

Looking at public data on purchase intention, Huawei scores highest in China, where 10% of the online consumers interviewed said they would definitely consider purchasing what is now the top brand in China’s smartphone market, according to Kantar Worldpanel ComTech. This compares to 5% for Xiaomi, 14% for Samsung, and 18% for Apple. In Italy, where Huawei is currently the second most sold brand, 7% of connected consumers stated they would definitely consider purchasing Huawei. Most of the other markets do not go above 5%. In the US, where Huawei has yet to make many inroads, intention reaches only 3%. Considering both the skew to pre-pay and the more price-sensitive nature of these opportunity markets, it is sensible to expect the lower end of the Huawei portfolio to play a more important role in converting sales. Overall, Honor products represent roughly 20% of the current Huawei installed base, while the Huawei G and Y lines represent 30%. Too much overlap between the two brands might cause Huawei to compete with itself more than capture share from competitors, so careful consideration will be needed, as there is certainly price overlap.

Can Huawei really be successful in the high-end?

Succeeding in the high-end of the smartphone market is not for the faint-hearted. Across markets, that saturated segment is controlled by Apple and Samsung with very little erosion inflicted by other brands. Samsung has been losing market share to other Android vendors, Huawei in particular, but it has done so in the mid-tier segment with users that either did not own a Galaxy model or have older generations.

While Huawei has been delivering higher-end devices in terms of specs, its pricing has been more that of a high-mid-tier portfolio. The launch of the P9 changed that, as the Chinese vendor priced the new flagship right where Samsung and Apple’s high-end products are positioned. Yet, while brand awareness is growing, consumers do not consider Huawei to be at the same level. The UK market is a very good example of that struggle. With almost half of UK smartphone sales falling in the high-end, Huawei has been unable to grow share as significantly as in markets such as Italy, Spain and, more recently, Germany.

In the past, other brands that started as a value play have struggled to change consumer perception. LG is a very good example of that inability: even after the success of its Nexus 5, LG G4, and LG G5, it is not considered quite at the same level as homegrown competitor Samsung.

Huawei is certainly spending a lot of money and effort in strengthening its brand through brand ambassadors such as actress Scarlett Johansson, actor Henry Cavill, and soccer player Lionel Messi. CMO Glory Cheung also walked us through changes to the logo design that will be used more fluidly, depending on the channel and campaign. All of this is certainly helpful and shows how serious Huawei is about being a global player but sadly, it is not always a guaranteed recipe for success.

Huawei is certainly aware of the challenges it faces to reposition its brand and, with the new MateBook, it made the conscious decision to enter at a higher price point so the brand in the new segment can be associated more with design and higher spec than good value for money. The difficulty here is the lack of experience in the PC market as well as lack of enterprise credibility when it comes to channel strategy, device management, and customer support.

The key to success is to turn “Made in China” into a must-have

Appealing to high-end buyers takes a mix of aspirational brand, good design, and reliable, quality hardware and software. I have already discussed the effort on brand. There is no doubt Huawei’s design is strong and its smartphones have improved in quality and reliability. Software, however, is not a strong point for the Chinese brand and, within the Android ecosystem where consumers have a wide selection of brands to choose from, software can make a difference. This is particularly true if software gets in the way of user experience.

Huawei has been successful at making “Made in China” appealing to its home market, which historically preferred foreign brands. Over the years, the tech hub has moved from Japan to Korea and on to China. Yet, while for Japan and Korea there was a recognition that local vendors added real value, China does not seem to have achieved the same status. This has much to do with the perception consumers formed from other industries, and from the early days of the smartphone market, that China is about cheap labor rather than talent. Xiaomi and Huawei have shown this is no longer the case but, while Xiaomi’s business model is hard to replicate abroad, Huawei has the skills to take its success in China to other markets around the world. This effort, however, will take time and a considerable amount of resources.

Is Firefox Search Worth $375M/Year to a Yahoo Buyer?

Marissa Mayer and Firefox: can the marriage last? Photo of Marissa Mayer by Fortune Global Forum on Flickr.

Who stands to lose if Yahoo is sold — besides of course Marissa Mayer, who will probably lose her job along with a fair number of Yahoo staff? The surprising, and unobvious, answer is Mozilla and the Firefox browser.

That’s because Mozilla is highly dependent on a five-year contract with Yahoo, signed in December 2014, where it receives about $375m per year to make Yahoo the default search provider in the Firefox browser on the desktop. From 2004 to 2014, that contract was exclusively with Google; now it’s Yahoo in the US, Google in Europe, Yandex in Russia and Baidu in China.

How much is $375m per year compared to Mozilla’s spending? Most of it. Mozilla’s audited financials offer some useful details. They’re not as timely as a public company’s numbers; the most recent date to the end of 2014.

Mozilla’s numbers

In 2013, the Mozilla Foundation recorded “royalties” (mainly search income) of $306.1m out of total revenues of $314.1m; in 2014, $323.3m of $329.6m. Search income is thus roughly 97-98% of Mozilla’s total income.
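A quick sanity check of that dependence, using only the figures quoted from the audited financials:

```python
# Mozilla's royalty dependence, from the figures quoted above (all in $m).
figures = {
    2013: {"royalties": 306.1, "total_revenue": 314.1},
    2014: {"royalties": 323.3, "total_revenue": 329.6},
}

for year, f in figures.items():
    share = f["royalties"] / f["total_revenue"] * 100
    print(f"{year}: royalties were {share:.1f}% of total revenue")
```

Either way you round it, essentially all of Mozilla’s income comes from search.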

It’s also clear Yahoo is paying Mozilla more than it got from Google. Marissa Mayer was reportedly so keen to secure the business, she made a preemptive bid that turned out to be far too high for the reality of a world where Firefox’s share on the desktop was falling and its position on mobile was minimal.

The question is, with Yahoo on the block, would a buyer of Yahoo want to continue with the Mozilla contract? It is a big drag on Yahoo’s spending. According to Yahoo’s financials, its Traffic Acquisition Costs (TAC) – the money it pays other companies to bring traffic to it – have rocketed.

Yahoo spending on traffic acquisition

Clearly, it’s spending a lot more both for display ads and for search. TAC can be a good thing: you pay a third-party site to bring people to you and then you make a profit by selling those people products or showing them ads.

Yahoo’s search TAC, in particular, has rocketed from a low of $0.7m in the first quarter of 2014 to $141m in the fourth quarter of 2015, just over a year after it signed the deal with Mozilla.

That’s not all going to Mozilla. But digging into Yahoo’s financial statements, we can find out precisely how much it is paying.

In its annual report for 2015, Yahoo says: “Of the $350m increase in revenue and $660m increase in TAC for the year ended December 31, 2015, $394m and $375m were attributable to the agreement we entered into in November 2014 to compensate Mozilla for making us the default search provider on certain of Mozilla’s products in the United States (the “Mozilla Agreement”).”

(You might wonder: why is the increase in overall revenue smaller than the increase from Mozilla? It’s because Yahoo’s overall revenues fell.)

So Yahoo is paying $375m annually to Mozilla just to be the default search engine in Firefox on the desktop in the US. And it’s going to keep on paying. In the 3Q 15 report, it said: “The Company is obligated to make payments, which represent TAC, to its Affiliates. As of September 30, 2015, these commitments totaled $1,682 million, of which $100m will be payable in the [fourth quarter] of 2015, $401m will be payable in 2016, $400m will be payable in 2017, $375m will be payable in 2018, $375m will be payable in 2019, and $31 million will be payable thereafter.”

Given that $375m went to Mozilla in 2015, it seems likely the larger part of those future sums is also bound for Mozilla.
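Those quoted per-year commitments do check out against the stated total:

```python
# Yahoo's disclosed TAC commitments, per the Q3 2015 report (all in $m).
commitments = {
    "Q4 2015": 100,
    "2016": 401,
    "2017": 400,
    "2018": 375,
    "2019": 375,
    "thereafter": 31,
}

total = sum(commitments.values())
print(total)  # 1682 — matches the $1,682m Yahoo reported
```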

But a future buyer might not want to stick with Mozilla because Yahoo’s TAC is beginning to get out of whack.

For comparison, Google’s TAC used to be between 23% and 25% of its ad and total revenues; more recently – since the end of the Mozilla contract – that has fallen below 20%.

As a proportion of search revenue, Yahoo’s search TAC has gone from a low of around 1% to 27% in the fourth quarter of 2015. That’s bigger than Google’s TAC proportion. Yahoo’s problem is it doesn’t have the monopoly Google does and doesn’t monetise its advertising as well. Google’s AdWords is a high-margin ad business; Yahoo offers display ads, which are a commodity.
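As a rough illustration (my own back-of-the-envelope estimate, not a figure Yahoo reports), that 27% share implies quarterly search revenue in the low hundreds of millions:

```python
# Back-of-the-envelope: if search TAC of $141m was 27% of search revenue,
# what does that imply for Q4 2015 search revenue? Derived estimate only.
search_tac = 141   # $m, Q4 2015
tac_share = 0.27

implied_search_revenue = search_tac / tac_share
print(f"${implied_search_revenue:.0f}m")  # → $522m
```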

The end of the search affair

So a Yahoo buyer would be very likely to look for a way to get out of the five-year Mozilla contract. How hard would that hit Mozilla?

Quite hard.

Mozilla’s expenses in 2013 were, mainly, $197.5m on “software development” (out of total costs of $295.4m); in 2014, that was $212.8m (of a total of $317.8m). “Software development” swallowed up about 65% of the royalty income in 2013; the same in 2014.
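Running the same arithmetic on those figures:

```python
# Share of Mozilla's search ("royalty") income consumed by software
# development spending, per the quoted financials (all in $m).
years = {
    2013: {"software_dev": 197.5, "royalties": 306.1},
    2014: {"software_dev": 212.8, "royalties": 323.3},
}

for year, f in years.items():
    share = f["software_dev"] / f["royalties"] * 100
    print(f"{year}: software development took {share:.1f}% of royalty income")
```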

As Mozilla acknowledges, those “royalties” are payments from “various search engine and information providers”. What happens if one of those sources dries up?

Mozilla knows it’s at risk here. Under the “Concentrations of Risk” subheading, there’s this:

Mozilla entered into a contract with a search engine provider for royalties which expired in November 2014. In December 2014, Mozilla entered into a contract with another search engine provider for royalties which expires December 2019.

Approximately 90% of Mozilla’s royalty revenues were derived from these contracts for 2014 and 2013 with receivables from these contracts representing approximately 77% and 66% of the December 31, 2014 and 2013 outstanding receivables, respectively.

Yahoo, as we can see, is paying about $400m per year just for US search. How much did Google pay? In 2011, when it re-signed for three years, the estimate was that Google was paying just over $100m per year – for a worldwide deal. It seems likely the real figure was higher. But Mozilla relied on it. And the Yahoo money is even more needed as Mozilla tries to recover from the dead-end of Firefox OS on mobile.

Pulling the plug

Basically, if a new Yahoo owner pulls the plug on the search deal, Mozilla will have to seek a new contract in the US. But who’s going to be willing to step up? Microsoft, probably; but the price that Mozilla will be able to demand will be much lower than it got from Yahoo. Unless, of course, Google decides to step back in and push the bidding up. But its actions around the last auction suggest it wouldn’t be interested; Chrome is too dominant, and Firefox is dwindling.

So the next few weeks aren’t going to be tense just for Yahoo. There’s a whole team of software engineers working on Firefox and other products who will have to wonder about their future if Yahoo has a new owner.

Q1 2016 Earnings Season Preview

We’re about to embark on the earnings reporting season for Q1 2016 and I thought I’d do a quick preview of some of the big things to look out for as the major technology companies report earnings. I’m going to focus on just a handful of the biggest companies rather than try to be exhaustive.

Alphabet – April 21st

Alphabet has been all over the news for the past few weeks but, arguably, for many of the wrong reasons. There have been reports that Nest is underperforming relative to its targets, that Verily is seeing an exodus of talent, and the company itself has announced it’s looking to offload its robotics division. In some ways though, all of this is a sign of increased financial discipline at Alphabet under CFO Ruth Porat. As I predicted earlier, breaking out financial reporting for the Other Bets has brought greater scrutiny. It appears the company is starting to respond to that scrutiny by engaging in some belt-tightening. Some of the Other Bets are no doubt chafing at this clampdown, but I’d expect management to argue both that it’s a good thing and that the issues recently reported are less serious than suggested.

Amazon – April 28th

Amazon has been on something of a tear financially in recent quarters, with its AWS division growing like a weed and increasingly profitable to boot, while the core e-commerce business in North America also performs very strongly. As a result, it’s been able to elevate its operating profits above its usual anemic levels and demonstrate increasing dominance in the markets where it does well. However, I continue to question whether Amazon can achieve the same results in more than a handful of markets, so it’s worth looking for signs of success or failure in secondary markets during earnings. China and India are particularly interesting to watch, since Amazon’s usual playbook doesn’t really apply in those markets and it needs to think differently about how to be successful there.

Apple – April 25th

With Apple, the single biggest question is just how it will perform relative to the unusually pessimistic guidance it issued last quarter. This was forecast to be the first down quarter for iPhone sales, but the company didn’t issue specific guidance for exactly what those sales would be. Were they as bad as the company feared, or was the guidance overly negative? The second most interesting question is what guidance for the June quarter looks like. Recent reports suggest iPhone sales continue to be somewhat sluggish this quarter. The other thing worth watching for is guidance relating to the iPhone SE launch. There are two specific things I’m very curious about: one is any guidance around what the launch will do to goose iPhone sales, and the other is the impact on average selling prices for iPhones and on margins, both of which could take a hit following the SE launch if it sells in any decent numbers.

Microsoft – April 21st

Microsoft outlined much of its vision for the year at its recent Build developer conference but there’s nothing there that will really move the needle financially. Meanwhile, Windows 10 continues to grow at a steady but not stellar pace and the latest estimates from many analyst firms suggest the PC market isn’t recovering despite repeated promises. This remains one of the single biggest questions about Microsoft’s future and I’d expect management to be asked about it on the earnings call. The other big question is Microsoft’s cloud business, which is not as clearly and explicitly broken out as Amazon’s AWS. Will Microsoft finally start providing more visibility over not just the revenue run rate but also the profitability of its cloud efforts? That’s something a number of shareholders – including Steve Ballmer – have been calling for and I think it’s something Microsoft will have to provide sooner rather than later.

Samsung – date not yet set

We’ve already had Samsung’s preliminary financial estimate for Q1 2016 but that was a very high-level report as always. The most interesting thing is the detailed breakdown of how the different units fared in Q1 and, especially, the two most important: mobile and semiconductors. The former has, of course, struggled over the last several years as the pincer movement between Apple at the high end and a variety of low-cost Android competitors at the low end has squeezed its business. However, last quarter there were signs the semiconductor business was struggling too – was that a blip, or are there more reasons for concern in this segment this quarter? Early reports have suggested it did better, but the detailed numbers will be important here too. Also important will be the company’s commentary with regards to average selling prices and profit margins because it’s had to trade off those two variables against bolstering shipments recently. It’s likely it will continue to have to do so.

As usual, this should be a fascinating earnings season and I’ll be sharing my views on the actual results here and elsewhere as the next few weeks go by.

Apple’s Penchant for Consumer Security

At a security “deep dive” at Apple on Friday, executives went into depth on Apple’s security philosophy and its technological approach to the matter. I’ve sat through many technology companies’ technical briefings, but never one from Apple, and this one went deeper on custom silicon solutions than any I had seen before. I’ll weave some technical tidbits I learned into this article, but there was a theme that struck me. More than a handful of times, presenters used the phrase “balancing security with ease of use”.

This seemed to be a key phrase and philosophy driving Apple’s thinking. The more I thought about it, the more it made sense in light of so many other security issues that exist in corporate, government, and other high-security environments where computers are used. You can build Fort Knox-level security into a personal computer, but it would come at the expense of user experience — and oftentimes does. Apple is attempting something that seems unprecedented at an industry level: bringing industry-leading security while actually enhancing the user experience. Prior to Touch ID, for example, many organizations required eight-digit, and sometimes longer, PINs. Imagine entering that many numbers every time you pick up your smartphone.

To emphasize this point, Apple shared a great statistic: its average user unlocks their phone 80 times a day. Other reports state people look at their phones upwards of 130 times a day, but those figures describe heavier users rather than the average. Regardless, the simple act of logging into our phones via a secure form of login like a passcode or fingerprint is now taken for granted in much of Apple’s ecosystem when, just a few years ago, anyone who stole my phone had access to my personal information. Here again, Apple shared that 89% of its users with a Touch ID-capable device have set it up and use it. In our own consumer study of iPhone owners, we learned 85% of respondents said they use either Touch ID or a PIN to log in to their iOS device. Again, this seems unprecedented given where we were in consumer security just a few years ago. Touch ID is a clear example of enhanced security and enhanced user experience: logging in with Touch ID is faster, more natural, and more efficient than the old swipe to unlock, and it is also inherently more secure.

After sitting through the technical explanations of how Apple has specifically designed the interplay of custom silicon like the A-series processors, iOS, and the Secure Enclave coprocessor, I came to the realization that, while I knew the iPhone was a secure device, I really had no idea just how secure it actually is. It can’t be overstated how essential Apple’s custom-designed silicon is to the security of iOS products. On a Mac, which runs Apple-designed software on a main CPU and GPU made by Intel, AMD, or Nvidia, Apple has put security measures in place, including encrypting the entire storage disk. But with the custom A-series processors, the custom-designed Secure Enclave coprocessor, and custom-designed iOS, Apple is able to encrypt every single file on your iOS device, not just the disk as a whole.

Secure Enclave: A Security-Designed Coprocessor

I came away from this discussion with a much greater appreciation of the Secure Enclave. Some details are outlined in Apple’s Security White Paper, and we were given a bit more depth at this briefing, though I would still like a great deal more technical detail, should Apple be willing to share it at some point. From the white paper, here is some detail on the Secure Enclave:

The Secure Enclave is a coprocessor fabricated in the Apple A7 or later A-series processor. It utilizes its own secure boot and personalized software update separate from the application processor. It provides all cryptographic operations for Data Protection key management and maintains the integrity of Data Protection even if the kernel has been compromised.

The Secure Enclave uses encrypted memory and includes a hardware random number generator. Its microkernel is based on the L4 family, with modifications by Apple. Communication between the Secure Enclave and the application processor is isolated to an interrupt-driven mailbox and shared memory data buffers.

Each Secure Enclave is provisioned during fabrication with its own UID (Unique ID) that is not accessible to other parts of the system and is not known to Apple. When the device starts up, an ephemeral key is created, entangled with its UID, and used to encrypt the Secure Enclave’s portion of the device’s memory space.

There is a great deal of security “magic” that happens in the Secure Enclave, and this coprocessor sits at the heart of Apple’s encryption techniques. Everything from booting up securely to individual file-level encryption runs through the Secure Enclave. This means someone can’t hack into just part of my phone and get some of the data. It is all protected and encrypted. A hacker needs my passcode or she gets nothing. There is no middle ground. Apple’s security-designed ecosystem runs through a series of trusted chains with the Secure Enclave at the center. It is a deep system of trust built from the silicon up.
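As a toy illustration of the key-entanglement idea from the white paper — and emphatically not Apple’s actual implementation; the names and derivation scheme here are my own, purely hypothetical — the principle is that a secret fused into the hardware is mixed with fresh randomness at each boot, so derived keys are useless off-device and don’t survive a reboot:

```python
import hashlib
import hmac
import os

# Hypothetical sketch: a per-device UID that never leaves the hardware
# is mixed with a fresh random value at boot, so the resulting
# memory-encryption key is unique to both the device and the boot session.
DEVICE_UID = os.urandom(32)  # stands in for the fused, inaccessible UID

def derive_ephemeral_key(uid: bytes) -> bytes:
    """Entangle a boot-time random seed with the device UID (illustrative only)."""
    boot_seed = os.urandom(32)  # new on every boot
    return hmac.new(uid, boot_seed, hashlib.sha256).digest()

key_a = derive_ephemeral_key(DEVICE_UID)
key_b = derive_ephemeral_key(DEVICE_UID)
assert key_a != key_b  # a fresh key each "boot", even on the same device
print(len(key_a))  # 32-byte key
```

The point of the design is that even a perfect copy of the encrypted memory is useless without both the hardware UID and the current boot’s ephemeral seed.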

The Security Battle

What I find most interesting about Apple’s story around security is how it goes much deeper than a feature. While security, in this case, could be perceived as a feature, my read is that Apple is going a step beyond simply making security a feature and is making it a priority. It is a deep guiding philosophy to which Apple appears to be unwaveringly dedicated. In an age where billions of consumers are using computers more often than at any point in history, it is clear we are in a new era of consumer computing, led by smartphones. Looking back at the efforts of hackers in the PC era, one can only imagine it would be orders of magnitude worse in this era, with more people online than ever.

Some may argue Apple is picking this battle when consumers really don’t care much about security and privacy. The debate about how much consumers care about security is certainly a valid one. What I appreciate about Apple’s efforts is they are making it so consumers don’t have to care. Apple is simply doing it anyway, going out of its way to ensure consumers have the best security possible at the moment and making secure environments the default while also enhancing the user experience. That is not only the way it should be; it is the right thing to do.

How Many Wireless Competitors Should There Be?

Earlier this week, the Wall Street Journal reported that the U.K. regulator, Ofcom, is opposing the proposed merger between Three and O2. In France, regulators have put obstacles in front of a proposed merger between state-controlled Orange and Bouygues. In both of these countries, these deals would have reduced the number of facilities-based wireless competitors from four to three. This recalls successful efforts to block similar consolidation in the U.S. (AT&T/T-Mobile and then Sprint/T-Mobile).

Regulators want a healthy level of competition in the wireless business. But at a certain point, this is not working out economically. There is no country where there are four healthy national facilities-based wireless operators. Let’s look at the U.S. market. We’ve seen vigorous competition over the past three years, with data prices that have fallen by 30-40%. Yet Sprint continues to struggle and most financial analysts are very cautious on the prospects for the company. Their network has improved to the extent it is “in the ballpark” competitively. But Sprint is behind where we all thought they would be in leveraging their most significant competitive asset: their 120 MHz of 2.5 GHz spectrum. As they sit on this treasure trove of ‘real estate’ (DISH continues to hoard its spectrum as well), AT&T, Verizon, and T-Mobile are preparing to spend upwards of $30 billion in the 600 MHz auctions in order to contend with continually growing demand for data capacity and the desire to offer greater speed. Facebook’s introduction of its Live video hub at F8 this week has probably sent an additional chill down the spines of operator CTOs.

Meanwhile, on the broadband side, it was front page news in the Boston Globe and the tech press that Verizon will be bringing FiOS to the city of Boston. So, consumers and businesses in the #2 tech epicenter in the world, after Silicon Valley, will have a choice of two – two!! – broadband competitors once FiOS is launched. And still, according to the FCC, some 45% of Americans do not have a choice of more than one broadband provider (defined as offering a minimum 25 Mbps download service). Broadband prices in the U.S. remain high, especially compared to much of western Europe and Asia, where there is more competition and rules in some countries requiring facilities-based providers to open up their networks for resale.

There is something seriously wrong, and lopsided, about this picture. And even with all the growth projected in wireless, with the ‘billions of connected things’ and so on, I think it will be very difficult for any country to support more than three viable facilities-based competitors. Just look at the spending picture ahead. First, it seems de rigueur now in most developed economies that operators have to pay a lot of money for spectrum. And much of the next wave of network investing will be on deploying outdoor and indoor small cells, to both provide more surgical levels of coverage and to increase capacity. It is expensive to deploy small cells. It is also a challenge to deploy them in large numbers because of siting issues and backhaul. Having 4-5 operators each trying to deploy a network increasingly dependent on smaller cells just does not compute. And how will this work indoors? On top of this is the billions of dollars required to deploy some flavor of 5G over the next ten years.

The other factor in all this is that macro cellular, small cells, and Wi-Fi are coming together, over time. With LTE-U (and the standards-based LAA), services will work more harmoniously between licensed and unlicensed spectrum. There are some examples of operators being successful with hybrid fixed and mobile networks, leveraging Wi-Fi hotspots, in some of the more densely populated cities in Europe.

Add to this mix some leading edge work being undertaken by Starry, Facebook’s Terragraph, Google, the incumbent wireless operators as part of the 5G roadmap, and others, to leverage higher frequency spectrum. I believe we could see some new broadband providers, especially in cities, over the next five years, with less of a boundary/distinction between fixed and mobile.

Rather than relying on an older-guard framework, it would be a more effective exercise to think about a 2020 construct, where the breadth of licensed and unlicensed spectrum is more effectively and efficiently utilized and shared, and where the resources of fixed and mobile networks are leveraged. With all of these possibilities, we should be less fearful of consolidation than perhaps we were a couple of years ago.

The Limits of Bots

We’ve now had two major developer events in a row where chat bots were a significant theme, with both Microsoft’s Build and now Facebook’s F8 focusing on this rapidly emerging new form of interaction with companies and brands. With two such big names behind the trend, it’s easy to get caught up in the hype and enthusiasm these companies obviously share for the technology. But it’s important to stay grounded as we evaluate chat bots as a potential successor to today’s app model.

Incentives matter

The first thing to note is Facebook and Microsoft have strong incentives to pursue the bot vision. Both companies failed to make a meaningful dent in the mobile operating system battle and, as such, find themselves in secondary roles as makers of apps that run on other people’s platforms. This shuts them out of many of the opportunities associated with owning a mobile operating system and puts them perennially in a secondary position, having to work around the limitations placed on third party apps and the inherent disadvantages they face relative to pre-installed applications. So it’s not surprising both companies are now embracing what – in at least some visions of the future – promises to be the replacement for mobile apps. But it’s important to keep these incentives in mind in evaluating their claims about the potential of bots – Facebook and Microsoft have a massive vested interest in seeing this trend succeed.

Culture matters too

The other thing that matters enormously is culture. From conversations I’ve had recently with proponents of the chat bot model as a successor to mobile apps, it’s clear their arguments are strongest in cultures where messaging has become the dominant model of interacting with the world, for everything from intimate conversations with significant others to confirmation messages for food orders or plane tickets. However, this culture isn’t universal by any means – it’s far stronger in some parts of the world, such as Asia, and probably weakest in markets like the US. The version of the future in which bots dominate is highly dependent on a present where messaging dominates, and that simply isn’t the case in all markets today. Where the app model dominates, adoption of bots requires a significant mind shift by users and a break with what are now fairly well-formed habits built around apps. There are, of course, also major generational differences around messaging behaviors.

Where bots work

The fact is, chat bots work well for certain interactions which have specific characteristics:

  • The interactions are quick
  • The interactions are simple
  • Context is available to the bot
  • Users maintain control
  • Users haven’t made existing investments in apps

Let’s discuss each of those briefly.

Interactions must be quick. If a conversation with a bot takes more than a handful of messages, it almost certainly would have been quicker and more efficient using a touch-based web or app interface. The fewer the number of back-and-forth exchanges, the better suited a task is to a bot interface. Typing out endless responses to questions rather than simply pressing buttons makes the model unworkable.

Interactions must be simple. If the user has too many options to choose from, the bot model becomes unworkable. On stage at F8 on Tuesday, Messenger head David Marcus demonstrated the process of buying a pair of sneakers from an online store through a bot. But the bot only presented him with five possible options to choose from. For the sake of simplicity, that’s where the demo ended. But the reality is, the odds one of these five pairs of shoes would be the right one in real life are low. I’ve tested the same bot and there are ways to get more options beyond the five, but they come in additional batches of five only, based on a request to see more. For anyone who’s browsed an online store on a website or in an app with a massive grid of options, this five-at-a-time experience is likely to be frustrating. It only works well when there are limited choices available and the user can quickly burrow down to the right one.

Bots must use context in the same way an app does – there’s a history between the customer and the company, cues such as location are available, and so on. Facebook’s “weather cat” bot Poncho seemed to fall flat in the hours following its launch on Tuesday by prompting users to provide a location in text, many of which it didn’t recognize. It did also provide the option to send the current location through the standard Messenger interface, but many users seem not to have understood this was possible. Reducing the load on the user by pulling in all relevant cues and context will be critical to making bot interactions efficient. Tying users to existing customer profiles is clearly part of this as well, and Facebook has a limited number-matching system for doing this.

Users have to maintain control in order to give bots permission to enter what can otherwise be a very personal space: their messaging app. There were several failures on this front in the first day of Facebook’s new bots. News apps like CNN took any message as permission to spam the user indefinitely with breaking news without asking for an opt-in. Many of the bots appear to send news messages (and thus trigger OS-level notifications) by default until the bot is blocked. “Blocking” is awkward terminology, to say the least, and probably overly heavy-handed, but there seems to be no other obvious way to stop these bots from sending messages.

Perhaps the biggest challenge, though, is many users will have made existing investments of time (and in some cases, money) in mobile apps for some of the same interactions bots will now offer. Why would someone who’s made that investment in a dedicated app switch to using a bot? The biggest opportunity here may well lie in the same space Google’s app streaming model is intended to target – new or temporary interactions where the user hasn’t made an investment and/or doesn’t want to. Examples would be one-off purchases from a new store, booking a restaurant in a new city while on vacation or a business trip, or checking transit times while your car is in the shop for a day.

A “sometimes” solution

Sesame Street’s Cookie Monster, in his more responsible modern incarnation, now refers to his beloved cookies as “a sometimes food” in contrast to fruits and vegetables and other foods that can be eaten more regularly. What’s clear from everything I’ve outlined above is bots are – for today at least – a “sometimes solution” for interactions with companies and brands. There will be some interactions for which they work and work well, but many others for which they’re too cumbersome, too inefficient, too limited, and require too much of a learning curve to be useful or pleasant. The challenge for Facebook, Microsoft, and every other company pitching the bot vision is to refine that vision to meet the specific challenges for which it works, and execute well on those, rather than to continue to spread the idea bots will replace apps for essentially all interactions. That won’t happen, at least anytime soon, and these companies are doing themselves, their users, and their paying customers a disservice when they pretend it will.

The PC Decline: Four Key Points to Note

The PC market is in decline

The PC industry isn’t getting any healthier. Both IDC’s and Gartner’s figures for PC shipments in the first quarter are out and both show the same direction of travel: down 11.5% to 60.6m units for IDC, down 9.6% to 64.8m units for Gartner.

There are some variations between what the two count as a “PC”: IDC doesn’t include Windows tablets or 2-in-1s such as the Lenovo Yoga, but does include Chromebooks; Gartner doesn’t include Chromebooks, but does include 2-in-1s and Windows tablets. (You might think comparing the two datasets would make it easy to spot who is doing well in Chromebooks, and who isn’t doing well in 2-in-1s or Windows tablets. Sadly, that turns out not to be true; but we can say Chromebooks account for only a few million sales per year and there aren’t any clear signs of that changing.)

At the moment, PC shipments have receded below the point they were at in 3Q 2006 (by IDC’s data); that’s nearly a decade of progress wiped out. It’s a category in retreat. The peak was in 3Q 2011, at 96.1m (IDC) or 95.4m (Gartner).
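To put the retreat in numbers (this compares the seasonal 3Q 2011 peak with the latest quarter, so it overstates the trend somewhat, but the direction is unambiguous):

```python
# Decline from the 3Q 2011 peak to Q1 2016, on each firm's own numbers
# (shipments in millions of units).
data = {
    "IDC": {"peak": 96.1, "q1_2016": 60.6},
    "Gartner": {"peak": 95.4, "q1_2016": 64.8},
}

for firm, d in data.items():
    drop = (d["peak"] - d["q1_2016"]) / d["peak"] * 100
    print(f"{firm}: down {drop:.0f}% from peak")
```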

There are a few points to note in what’s going on.

The Long Slow Goodbye

First, Windows PCs are in serious long-term decline, more so than Apple. Once you subtract Apple’s contribution, you find Windows shipments have been in a year-on-year decline for 15 straight quarters. That’s nearly four years.

Windows PC shipments are falling

Something Happened

Second, the trigger for that decline is entirely unlike the trigger for previous declines in PC shipments. If you look at the long-term view, there are three points since 1999 when Windows PC shipments have shrunk: near the end of 2001 (during a worldwide recession), the end of 2008 (during the global financial crash), and towards the end of 2012.

Three points where Windows shipments fell

What happened in 2012? Nothing external. Instead, the decline seems to have been prompted by the twin thrust of tablets and smartphones. Tablets became broadly available (the iPad 2 dropped in price) and smartphone screens grew larger (the first Samsung Galaxy Note, with a 5.3in screen, had been launched at the end of 2011, and the Note 2 was coming along).

Ben Thompson, writing at Stratechery ($subscription), suggests PCs are being disrupted. What we’re seeing is “less capable” and cheaper devices (tablets and smartphones) taking over the jobs that were being done by much more capable devices. But users didn’t generally need that capability; browsing, email, writing text, organising and uploading photos, and watching video, plus a few games, covered the range of what most people need to do on a computer.

Certainly, all the signs are that Thompson is right. That 2012 decline points strongly to a change and, at Gartner, Mikako Kitagawa says, “The ongoing decline in US PC shipments showed that the installed base is still shrinking, a factor that played across developed economies.” A shrinking PC installed base? That must be quite a worry. But it’s what we’re seeing.

Linn Huang of IDC offers something of a hostage to fortune: “The PC market should experience a modest rebound in the coming months.”

The Others

Third, anyone in the “Other” category is probably getting pretty worried now. Even Acer has fallen out of the top five computer makers, displaced by Apple. (I continue to include it, estimating its volumes as slightly below Apple’s.)

"Other" PC suppliers, absolute shipments

"Other" PC players are being squeezed out

If you take “Other” to be companies such as Toshiba, Samsung, Fujitsu, and a myriad of others, then both their share and their absolute number of shipments are going in a bad direction:
• Toshiba is struggling with an accounting scandal, and its PC shipments in the US fell by at least 25% in the first quarter, from 0.94m to less than 0.71m. The “Lifestyle” division, which includes the PC business, has seen revenues nearly halved since the calendar first quarter of 2014, and made an operating loss for the past 15 quarters.
• Samsung has withdrawn from Europe’s PC business and is hard to see elsewhere; its PC business is rolled into its IM division, which also includes its mobile division, for accounting purposes. In Q4 2015 (the latest quarter for which there are figures) those revenues were US$780m – which would translate to 1m PCs sold at an ASP of $780, or, more plausibly, 1.5m sold at an ASP of $520, or 2m at $390. (To make the top five for Q4 required shipment volumes of at least 5m.)
• For the fourth quarter, Fujitsu says revenues from PCs fell; it’s been saying that for at least a year.
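
The implied unit volumes in the Samsung bullet above are simple division. A quick sketch, using the quoted US$780m quarterly revenue figure and the same assumed ASPs:

```python
# Back-of-envelope: implied PC unit shipments from Samsung's ~$780m
# quarterly PC revenue at a few assumed average selling prices (ASPs).
REVENUE_USD = 780_000_000

def implied_units(revenue: float, asp: float) -> float:
    """Units shipped = revenue / average selling price."""
    return revenue / asp

for asp in (780, 520, 390):
    print(f"ASP ${asp}: ~{implied_units(REVENUE_USD, asp) / 1e6:.1f}m units")
```

At every assumed ASP the result lands well under the roughly 5m units needed to make the quarter’s top five.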

There’s also a squeeze on Asian sub-scale PC makers, because the dollar’s strength hits them hard when they have to source components from the US (Microsoft Windows, perhaps?). That eats into profitability while the bigger players corner more of the market.

All this is going to lead to further consolidation. There’s already talk that Toshiba or Fujitsu will sell their PC businesses. Smaller companies might just shut up shop or seek a buyer.

Apple: Doing Fine, Thanks

Fourth, Apple is still solid. The USB-C MacBook is a year old and shows no sign of having set the PC world on fire, but since the end of 2004 (a stretch of 46 quarters), Windows PCs have grown faster than Apple in only two quarters. Even though Apple’s total shipments (according to IDC and Gartner, ahead of Apple’s formal results later this month) fell in the most recent quarter, the drop was nothing like the overall fall for Windows PC makers. Only Dell managed to stay upright better on the slippery slope, falling 2.0% against Apple’s 2.1% (IDC); Gartner reckons Apple did better, showing 1.0% growth, but was bested by Asus with a 1.5% rise – perhaps through Windows tablet or convertible shipments.

What neither dataset shows is that Apple commands the highest average prices in the industry. Once the figures for this quarter are in, I’ll return to the topic but, for now, here are the average selling prices (calculated from company financials and IDC shipment figures) for the big six PC companies:

Average selling prices for top PC makers

As you can see, Apple is miles above the rest there. That also means it can grab a healthy profit, which allows it to stay in business when others struggle.

Conclusion

For the longer term? PCs are contracting towards a core base of users who really want or need them. If people want to be able to plug in USB sticks or SD cards, there’s a PC there for them. But it turns out that lots of people don’t and they’re voting with their wallets. That’s creating a squeeze on the smaller players, but even the big players don’t have it easy – unless, like Apple, they can charge a premium.

Super Computers and Autonomous Vehicles

At last week’s nVidia GTC conference, I spent a lot of time talking to people at the show about nVidia and its partners’ autonomous vehicle plans. nVidia’s CEO Jen-Hsun Huang used his keynote to introduce an updated version of the company’s Drive PX system for use in autonomous vehicles. Dubbed the Drive PX 2, this is basically a supercomputer on a board that can sit in the trunk of a car. A demo showed a car that learned to drive on main roads, as well as uncharted dirt roads, by itself with only 3,000 hours of training. It includes HD mapping tools and can sense, plan, and react to all types of road and driving conditions.

The operative word here is “supercomputer”. Over the last 10 years, nVidia has created some of the world’s fastest processors around its GPU architecture, and it has now announced a breakthrough product it calls the world’s first supercomputing system dedicated to deep learning: the DGX-1. The system stacks up to eight Tesla P100 processors and delivers 170 teraflops in a box and 2 petaflops in a rack at a breakthrough price of $129,000.
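
The DGX-1 numbers quoted above hang together arithmetically; the boxes-per-rack figure below is my own inference from the quoted totals, not an nVidia specification:

```python
# Sanity-check the quoted DGX-1 figures (FP16 throughput).
GPUS_PER_BOX = 8          # Tesla P100s per DGX-1
BOX_TFLOPS = 170          # quoted per-box throughput
RACK_PFLOPS = 2           # quoted per-rack throughput

per_gpu = BOX_TFLOPS / GPUS_PER_BOX               # 21.25 TF per P100
boxes_per_rack = RACK_PFLOPS * 1000 / BOX_TFLOPS  # ~11.8, i.e. ~12 boxes

print(f"{per_gpu:.2f} TF per GPU, ~{boxes_per_rack:.0f} boxes per rack")
```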

This architecture pushes their commitment in supercomputers to a new level, and a new price range, and was one of the biggest announcements at GTC. However, nVidia’s focus in supercomputers is trickling down to their work in autonomous vehicles too.

That became clearer when Gill Pratt, the CEO of the Toyota Research Institute, delivered the keynote on Thursday morning and emphasized the partnership with nVidia and the role a supercomputing-like system in a car will play in Toyota’s future autonomous vehicle plans.
 
Mr. Pratt pointed out that the #1 reason Toyota has made a commitment to self-driving cars is that “The fact that we tolerate 1.2 million people killed per year is astounding, and it’s a shame. It far exceeds the number of people killed in war.”

My friend Dean Takahashi over at VentureBeat summarized Mr. Pratt’s thinking on Toyota’s self-driving car strategy:

In his keynote, Mr. Pratt stated that there are various levels of autonomy needed to make self-driving cars safer. One is to enable an immediate hand-off from the car when there’s an emergency that requires a human driver. Another is to give a warning of perhaps 30 seconds before a driver should take over. The final level of autonomy is a complete self-driving car that handles all emergencies.

He also said there isn’t just one way to tackle the safety challenge. One is a series method, where a human-like commander issues explicit instructions for what the robot car should do. That’s not a very promising approach. Another is a chauffeur-like system, where the car is driving and the human isn’t in control. A third is a parallel autonomy system. Pratt explained this with an analogy of teaching a child to hit a golf ball: the parent stands behind the child and holds the club together with the child, and they swing together until the child gets the rhythm of the movement and takes over.

This is similar to what could happen with a parallel autonomy system, where the human can teach the car to drive and then have the car take over when there’s an immediate need for it. It leverages the human brain most of the time but takes over, much as anti-lock brakes do, when it is needed most. Pratt refers to this as the “guardian angel” approach.

“Parallel autonomy has tremendous promise, and techniques like deep learning can be brought to bear on this problem,” he said. Toyota is conducting self-driving car tests in a huge simulator, and Nvidia is assisting in that effort.

This is the first time I have heard this issue spelled out clearly and, for those at the keynote, it gave us a sense that Toyota will play a key leadership role in the development of safe, self-driving cars. In fact, Mr. Pratt went on to say the research they are doing is so important to the overall safety of the public that they are opening up much of this research to their competitors. He called this process “co-opetition”. 

But I see nVidia playing a major role in this area of co-opetition too. I spent some time with Danny Shapiro, the guru of nVidia’s smart vehicle program, and he showed me the motherboard of the Drive PX 2 supercomputer being designed into the vehicles of many of its automotive partners.

Their system is based on the kind of neural networks that will be needed to process everything required to sense, plan, and react to just about every type of driving situation imaginable. I realize most of the big semiconductor companies have chips they are pushing for use in autonomous vehicles but, as I walked away from seeing this supercomputing motherboard, I thought to myself that, if I were in a self-driving car, I would personally want a high-powered supercomputer piloting it.

I suspect this is the thinking behind many of nVidia’s automobile customers too. The work nVidia is doing internally, coupled with the new Drive PX 2 system, makes them one of the most important semiconductor companies tackling the problems and challenges of delivering an autonomous vehicle. From what I saw at the event, it may take supercomputing-level processors to deliver the kind of ultra-safe autonomous vehicles we expect in our future.

Another good read on this subject comes from The Verge and its interview with the CEO of Ford. I highly recommend it as it lays out the thinking of top Ford officials and their roadmap for self-driving vehicles.

There is A Revolution Ahead and It Has A Voice

During the early computer era of the 1960s, it was thought there would only be a need for a few dozen computers. By the 1970s, there were just over 50,000 computers in the world.

Computers have grown in power by orders of magnitude since. They have become more intelligent in the way they interact with humans, starting with switches and buttons, then punch cards, and on to the keyboard. Along the way, we added joysticks, the mouse, trackpads, and the touch screen.

Newspaper
Newspaper image projecting the number of computers in the world in 1967

As each paradigm replaced the last, productivity and utility increased. In some cases, we cling to the prior generation’s system to such a degree that we think the replacement is little more than a novelty or a toy. For example, when the punch card was the fundamental input system to computers, many computer engineers thought a direct keyboard connection to a computer was “redundant” and pointless because punch cards could move through the chute guides 10 times faster than the best typists of the time [1]. The issue is we see the future through the eyes of the prior paradigms.
Punch card
Punch card stack of customer data

We Had To Become More Like The Computer Rather Than The Computer Becoming More Like Us

All prior computer interaction systems have one central point in common. They force humans to be more like the computer, requiring the operator to think in arcane commands and procedures. We take it for granted and forget the ground rules we all had to learn, and continue to learn, to use our computers and devices. I equate this to learning any arcane language that requires new vocabularies as new operating systems are released and new or updated applications become available.

Are typing and gesturing the most efficient ways to interact with a computer or device for the most common uses most people have? The cognitive (mental energy) and mechanical (physical energy) loads we must expend for even trivial daily routines are disproportionately high for what can often be distilled to a yes or no answer. Seen for what they truly are, these old ways are inefficient and ineffective for many of the common things we do with devices and computers.

Keypunch station
IBM Keypunch station to encode Punch Cards

What if we didn’t need to learn arcane commands? What if you could use the most effective and powerful communication tool ever invented? This tool evolved over millions of years and allows you to express complex ideas in very compact and data-dense ways, yet can be nuanced to the width of a hair [2]. What is this tool? It is our voice.

The fundamental reason humans have been reduced to tapping on keyboards made of glass is simply that the computer was not powerful enough to understand our words, let alone begin to decode our intent.

It’s been a long road to get computers to understand humans. It started in the summer of 1952 at Bell Laboratories with Audrey (Automatic Digit Recognizer), the first voice recognition system that decoded phone-number digits spoken over a telephone for automated operator-assisted calls [3].

At the Seattle World’s Fair in 1962, IBM demonstrated its “Shoebox” machine. It could understand 16 English words and was designed primarily as a voice calculator. In the ensuing years there were hundreds of advancements [3].

Shoebox
IBM Shoebox voice recognition calculator

Most of the history of speech recognition was mired in speaker-dependent systems that required the user to read a very long story or grouping of words. Even with this training, accuracy was quite poor, for many reasons, much of it down to the limits of the software algorithms and of processor power. But there has been more advancement in the last 10 years than in the previous 50. Additionally, continuous speech recognition, where you just talk naturally, has only been refined in the last five years.

The Rise Of The Voice First World

Voice-based interactions have three advantages over current systems:

Voice is an ambient medium rather than an intentional one (typing, clicking, etc.). Visual activity requires singular, focused attention (a cognitive load), while speech allows us to do something else.

Voice is descriptive rather than referential. When we speak, we describe objects in terms of their roles and attributes. Most of our interactions with computers are referential.

Voice requires more modest physical resources. Voice-based interaction can be scaled down to much smaller and much cheaper form factors than visual or manual modalities.

Voice-based systems have grown powerful with the addition of always-on operation combined with machine learning (artificial intelligence), cloud-based computing power, and highly optimized algorithms.

Modern speech recognition systems have been combined with almost pristine text-to-speech voices that so closely resemble human speech that many trained dogs will take commands from the best systems. Viv, Apple’s Siri, Google Voice, Microsoft’s Cortana, Amazon’s Echo/Alexa, Facebook’s M, and a few others are the best consumer examples of the combination of speech recognition and text-to-speech products today. This concept is central to a thesis I have been working on for over 30 years. I call it “Voice First” and it is part of an 800+ page manifesto I have built around this concept. The Amazon Echo is the first clear Voice First device.

The Voice First paradigm will allow us to eliminate many, if not all, of the steps current interfaces require with just a simple question. This process can be broken into three basic conceptual modes of voice interface operations:

Does Things For You – Task completion:
– Multiple Criteria Vertical and Horizontal searches
– On the fly combining of multiple information sources
– Real-time editing of information based on dynamic criteria
– Integrated endpoints, like ticket purchases, etc.

Understands What You Say – Conversational intent:
– Location context
– Time context
– Task context
– Dialog context

Understands To Know You – Learns and acts on personal information:
– Who are your friends
– Where do you live
– What is your age
– What do you like

In the cloud, there is quite a bit of heavy lifting at work to produce an acceptable result. This encompasses:

  • Location Awareness
  • Time Awareness
  • Task Awareness
  • Semantic Data
  • Outbound Cloud API Connections
  • Task And Domain Models
  • Conversational Interface
  • Text To Intent
  • Speech To Text
  • Text To Speech
  • Dialog Flow
  • Access To Personal Information And Demographics
  • Social Graph
  • Social Data
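
The list above amounts to a processing pipeline: speech to text, text to intent, context-aware action, then speech back out. A toy sketch of that flow, in which every stage is a placeholder of my own rather than any vendor’s API:

```python
# Toy sketch of the cloud pipeline: each stage here is a stand-in;
# real systems use large machine-learned models at every step.

def speech_to_text(audio: bytes) -> str:
    # Placeholder: a real system runs acoustic and language models.
    return "what is my commute"

def text_to_intent(text: str) -> dict:
    # Placeholder intent parser keyed on a few known phrases.
    if "commute" in text:
        return {"intent": "get_commute", "slots": {}}
    return {"intent": "unknown", "slots": {}}

def act(intent: dict, context: dict) -> str:
    # Location, time, and task awareness enter here as `context`.
    if intent["intent"] == "get_commute":
        return f"Traffic to {context['usual_destination']} looks light."
    return "Sorry, I didn't catch that."

def handle(audio: bytes, context: dict) -> str:
    return act(text_to_intent(speech_to_text(audio)), context)

print(handle(b"...", {"usual_destination": "the office"}))
```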

The current generation of voice-based computers has limits on what can be accomplished because you and I have become accustomed to doing all of the mechanical work of typing, viewing, distilling, discerning, and understanding. When one truly analyzes the exact results we are looking for, most can be answered with a “Yes” or “No”. When the back-end systems correctly analyze your volition and intent, countless steps of mechanical and cognitive load are eliminated. We have recently entered an epoch where all of the elements have converged to make the full promise of an advanced voice interface truly arrive. W. Edwards Deming [4] quantified the many steps humans need to complete to achieve any task. This was popularized by his Shewhart and PDSA (Plan-Do-Study-Act) cycles, which we have become trained to follow when using any computer or software.

The Shorter Path: “Alexa, what’s my commute look like?”

A Voice First system would operate on the question and calculate the route from the current location to a destination implied by the time of day and your typical destinations.

An app system would require you to wake your device, select the appropriate app (perhaps a map), localize your current location, pinch and zoom to the destination, scan the colors or icons that represent traffic, and then estimate the arrival time from all the information visibly present.

The Implicit And The Explicit: From Siri And Alexa To Viv

Voice First systems fundamentally change the process by decoding volition and intent using self-learning artificial intelligence. The first example of this technology was Siri [5]. Prior to Siri, systems like Nuance’s [6] were limited to listening to audio and creating text. Nuance’s technology has roots in optical character recognition and table arrays. The core technology of Siri was not focused just on speech recognition but primarily on three functions that complement it:

  1. Understanding the intent (meaning) of spoken words and generating a dialog with the user in a way that maintains context over time, similar to how people have a dialog
  2. Once Siri understands what the user is asking for, it reasons how to delegate requests to a dynamic community of web services, prioritizing and blending results and actions from all of them
  3. Siri learns over time (new words, new partner services, new domains, new user preferences, etc.)
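
Point 2 above (delegating to a community of web services, then prioritizing and blending their results) can be sketched in a few lines. The service names and scores here are hypothetical, not Apple’s implementation:

```python
# Fan a request out to several services, then blend the scored results
# so items surfaced by more than one service rank higher.

def restaurant_service(query: str) -> list:
    return [("Bistro Uno", 0.9), ("Cafe Due", 0.6)]    # stand-in data

def review_service(query: str) -> list:
    return [("Cafe Due", 0.8), ("Trattoria Tre", 0.5)]  # stand-in data

def blended_results(query: str) -> list:
    scores = {}
    for service in (restaurant_service, review_service):
        for name, score in service(query):
            # Sum scores across services for the same item.
            scores[name] = scores.get(name, 0.0) + score
    return sorted(scores, key=scores.get, reverse=True)

print(blended_results("dinner nearby"))
```

Here “Cafe Due” wins because two services surfaced it (0.6 + 0.8), even though neither ranked it first.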

Siri was the result of over 40 years of research funded by DARPA. Siri Inc. was a spin-off of SRI International and was a standalone app before Apple acquired the company in 2010 [5].

Dag Kittlaus and Adam Cheyer were the cofounders of Siri Inc. Their vision of a Voice First system is what motivated Steve Jobs to personally start negotiations with Siri Inc., and they originally planned to stay on at Apple to guide that vision.

Siri has not lived up to the grand vision originally imagined by Steve Jobs. Although it is improving, there is still no full API, and the skills performed by Siri have not yet matched the abilities of the original Siri app.

This set the stage for Amazon and the Echo product [7]. Amazon surprised just about everyone in technology when the Echo was announced on November 6, 2014. It was an outgrowth of a Kindle e-book reader project that began in 2010 and of voice platforms acquired from Yap, Evi, and IVONA.
Amazon Echo
Amazon Echo circa 2015

The original premise of Echo was a portable book reader built around 7 powerful omni-directional microphones and a surprisingly good WiFi/Bluetooth speaker (with separate woofer and tweeter). This humble mission soon morphed into a far more robust solution that is just now taking form for most people.

Beyond the power of the Echo hardware is the power of Amazon Web Services (AWS) [8]. AWS is one of the largest virtual computer platforms in the world. Echo simply would not work without this platform. The electronics in Echo are not powerful enough to parse and respond to voice commands without the millions of processors AWS has at its disposal. In fact, the digital electronics in Echo are centered around 5 chips and perform little more than recording a live stream to AWS servers and sending the resulting audio stream back to be played through the analog electronics and speakers on Echo.
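
That division of labor can be written as a thin-client loop: the device spots the wake word locally, ships the audio to the cloud, and plays back whatever comes back. The function names below are my own illustration; the actual device-to-AWS protocol is not public in this detail:

```python
# Thin-client sketch: keyword spotting is roughly the only local
# "smarts"; everything else happens in the cloud.

def wake_word_heard(frame: bytes) -> bool:
    return frame == b"alexa"        # stand-in for on-device spotting

def send_to_cloud(audio: bytes) -> bytes:
    # Stand-in for streaming to AWS and receiving response audio.
    return b"response-audio-for:" + audio

def handle_frame(frame: bytes, utterance: bytes):
    # Audio only leaves the device after the wake word is heard.
    if wake_word_heard(frame):
        return send_to_cloud(utterance)
    return None

print(handle_frame(b"alexa", b"what time is it"))
```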

digital electronics board
Amazon Echo digital electronics board

Today, with a runaway hit on their hands, Amazon has opened up the system to developers with the ASK program [10]. Amazon also has APIs that connect to an array of home-automation systems. Yet simple things, like building a shopping list and converting it to an order on Amazon, are nearly impossible to do.
skills flowchart
Sample of Alexa ASK skills flowchart
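
An ASK custom skill receives a JSON request and answers with a JSON envelope containing the speech to play. Here is a minimal sketch of such a handler; the shopping-list intent name and slot are hypothetical, my own invention:

```python
import json

def handle_request(event: dict) -> dict:
    # Route on the intent name, then build the ASK response envelope.
    intent = event["request"]["intent"]["name"]
    if intent == "AddToShoppingListIntent":
        item = event["request"]["intent"]["slots"]["Item"]["value"]
        speech = f"I've added {item} to your shopping list."
    else:
        speech = "Sorry, I can't do that yet."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

event = {"request": {"type": "IntentRequest",
                     "intent": {"name": "AddToShoppingListIntent",
                                "slots": {"Item": {"value": "coffee"}}}}}
print(json.dumps(handle_request(event), indent=2))
```

The missing piece lamented above is precisely the step after this: there is no simple bridge from such an intent into an actual order on Amazon.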

Echo is a step forward from the current incarnation of Siri, not so much for the sophistication of the technology or the open APIs, but for its single-purpose, dedicated Voice First design. This can be experienced after a few weeks of regular use: the always-on, always-ready, low-latency response creates a personality and a sense of reliance as you enter the space.

Voice First devices will span from the simple to the complex, and from the reasonably sized down to a form factor not much larger than a lima bean.

Voice First device
Hypothetical Voice First device with all electronics in ear including microphone, speaker, computer, WiFi/Bluetooth and battery

The founders of Siri spent a few years thinking about the direction of Voice First software after they left Apple. The results of that thinking will begin to show with the first public demonstration of Viv this spring. Viv [11] is the next generation from Dag and Adam, picking up where Siri left off. Viv was developed at SixFive Labs, Inc. (the stealth name of the company), with an inside “easter egg”: the Roman numerals VI V connect to the Viv name.

Viv is orders of magnitude more sophisticated in the way it will act on the dialog you create with it. More advanced models of Ontological Recipes and self-learning will make interactions with Viv more natural and intelligent. This is based on a new paradigm the Viv team has created called “Exponential Programming”, and they have filed many patents central to the concept. As Viv is used by thousands to millions of users, asking perhaps thousands of questions per second, the learning will grow exponentially in short order. Siri and the current voice platforms can’t do anything coders haven’t explicitly programmed them for. Viv solves this problem with a “Dynamically Evolving Systems” architecture that operates directly on the nouns, pronouns, adjectives, and verbs.
Viv flow chart
Viv flow chart example first appearing in Wired Magazine in 2015

Viv is an order of magnitude more useful than the currently fashionable Chat Bots. Bots will co-exist as a subset of the more advanced and robust interactive paradigms. Many users will first be exposed to Chat Bots through the anticipated release of a complete Bot platform and Bot store by Facebook.

Viv’s power comes from how it models the lexicon of each sentence, word by word, and acts on it in parallel to produce a response almost instantaneously. These responses will come in the form of a chained dialog that allows for branching based on your answers.

Viv is built around three principles or “pillars”:

  • It will be taught by the world.
  • It will know more than it is taught.
  • It will learn something every day.

The experience with Viv will be far more fluid and interactive than with any system publicly available. The result will be a system that ultimately predicts your needs and lets you communicate in almost the shorthand dialog common in close relationships.

In The Voice First World, Advertising And Payments Will Not Exist As They Do Today

In the Voice First world, many things change. Advertising and payments in particular will change and, in themselves, become new paradigms for both merchants and consumers. Advertising as we know it will not exist, primarily because we would not tolerate commercial intrusions and interruptions in our dialogs. It would be the equivalent of having a friend break into an advertisement about a new gasoline.

Payments will change in profound ways. Many consumer dialogs will have implicit and explicit layered Voice Payments with multiple payment types. Voice First systems will mediate and manage these situations based on a host of factors. Payments companies are not currently prepared for this tectonic shift. In fact, some notable companies are going in the opposite direction. The companies that prevail will be those that have identified the Ontological Recipe technology to connect merchants to customers.

These new advertising and payments paradigms actually form a convergence. Voice Commerce will become the primary replacement for advertising, and Voice Payments are the foundation of Voice Commerce. Ontologies [12] and taxonomies [13] will play an important part in Voice Payments. The shift will impact what we today call online, in-app, and retail purchases. The least thought-through of the changes is the impact on face-to-face retail when the consumer and the merchant interact with Voice First devices.

Of course, Visa, MasterCard, and American Express will play an important part in this future, and all the payment companies between them and the merchant will need to change rapidly or truly be disrupted. The rate of change will be more massive and pervasive than anything that has come before.

This new advertising and payments paradigm will impact every element of how we interact with Voice First devices. Without human-mediated searches on Google, there is no pay-per-click. Without a scan of the headlines at your favorite news site, there is no banner advertising.

The Intelligent Agents

A major part of the Voice First paradigm is a modern Intelligent Agent (also known as an Intelligent Assistant). Over time, all of us will have many of them, perhaps dozens, interacting with each other and acting on our behalf. These Intelligent Agents will be the “ghost in the machine” in Voice First devices. They will be dispatched independently of the fundamental software and form a secondary layer that can fluidly connect between a spectrum of services and systems.

Voice First Enhances The Keyboard And Display

Voice First devices will not eliminate display screens; screens will still need to be present, but they will be ephemeral and situational. The Voice Commerce system will present images, video, and information for you to consider or evaluate on any available screen, much like AirPlay but with locational intelligence.

There is also no doubt keyboards and touch screens will still exist; we will just use them less. Still, I predict that in the next ten years your voice is not going to navigate your device, it is going to replace your device in most cases.

The release of Viv will influence the Voice First revolution. Software and hardware will start arriving at an accelerated rate, and existing companies will be challenged by startups toiling away in garages and on kitchen tables around the world. If Apple were so inclined, with about a month’s work and a simple WiFi/Bluetooth speaker with a multi-axis microphone extension, the current Apple TV [14] could offer a wonderful Voice First platform. In fact, I have been experimenting with this combination, with great success, on a food-ordering system. I have also been experimenting on the Amazon Echo platform and have built over 45 projects, one of which is a commercial-grade hotel room service application, complete with food ordering, a virtual store and mini bar, and thermostat and light controls for a boutique luxury hotel chain.

I am in a unique position to see the Voice First road ahead because of my years as a voice researcher and payments expert. As a coder and data scientist, over the last few months I have built hundreds of simple working demos across the top Voice First use cases.

A Company You’ve Never Heard Of May Become The Apple Of Voice First

The future of Voice First is not really an Apple vs. Amazon vs. Viv situation. The market is huge and encompasses the entire computer industry. In fact, I assert many existing hardware and software companies will have a Voice First system in the next 24 months. Like Amazon and Viv, the first wave of entrants already has a strong cloud and AI development background. However, the next wave, like much of the Voice First shift, will likely come from companies that have not yet started or are in their early stages today.

With the Apple TV, Apple has an obvious platform for the next Voice First system. The current device is significantly hindered, as it does not typically talk back in the TV-centered use case. The Apple Watch has a similar impediment in its voice interface: it has no ability to talk back. I can see a next version with a usable speaker, but centered around a Bluetooth headset for voice playback.

Amazon has a robust head start and has already activated the drive and creativity of the developer community. This momentum will continue and spread across non-Amazon devices. It has already started with Alexa on the Raspberry Pi [15], the $35 device originally designed for students learning to code that has now become the center of many products. Alexa is truly a Voice First system not tied to any hardware. I have developed many applications and hardware solutions using less than $30 worth of hardware on the Raspberry Pi Zero platform.
My Raspberry Pi Alexa
One of my Raspberry Pi Alexa + Echo experiments with Voice Payments

Viv will, at the outset, be a software-only product. Initially it will be accessed via apps that have licensed the technology; I also predict deep linking into some operating systems. Ultimately, if not acquired, Viv is likely to create reference-grade devices that may rapidly gain popularity.

The first wave of Voice First devices will likely come from these companies, with consumer-grade and enterprise-grade systems and devices:

  • Apple
  • Microsoft
  • Google
  • IBM
  • Oracle
  • Salesforce
  • Samsung
  • Sony
  • Facebook

Emotional Interfaces

Voice is not the only element of the human experience making a comeback. In my 800-page voice manifesto, I assert that facial recognition of emotional intent, along with hand and body gestures, will become a critical addition to the Voice First future. Just like voice, facial expression decoding may sound like a novelty. But our voices, combined with real-time decoding of the 43 muscles that form our array of expressions, can communicate not only deeper meaning but a deeper understanding of volition and intent.

Microsoft Emotion API
Microsoft Emotion API showing facial scoring

Microsoft’s Cognitive Services has a number of APIs centered around facial recognition. In particular, the Emotion API [15] is the most useful. It is currently limited to a palette of 8 basic emotions, with a scoring system that allows weighting in each category in real time. I have seen far more advanced systems in development that will track nuanced micro-movements to better understand emotional intent.
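
The API returns one score per emotion for each detected face; downstream code typically weights the categories or picks the dominant one. A small sketch (the eight category names reflect the API’s documented set as best I know; the scores are invented for illustration):

```python
# Pick the dominant emotion from a per-face score dictionary.
def dominant_emotion(scores: dict):
    name = max(scores, key=scores.get)
    return name, scores[name]

frame_scores = {"anger": 0.01, "contempt": 0.02, "disgust": 0.0,
                "fear": 0.0, "happiness": 0.9, "neutral": 0.05,
                "sadness": 0.01, "surprise": 0.01}

print(dominant_emotion(frame_scores))
```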

The Disappearing Computer And Device

Some argue voice systems will become an appendage to our current devices. These systems are already present on many devices, but they have failed to captivate on a mass scale. Amazon’s Echo captivates because of the dynamics present when Voice First defines a physical space: the room feels occupied by an intelligent presence. It is certain that existing devices will evolve. But it is also certain that Voice First will enhance these devices and, in many cases, replace them.
Voice First devices
The growth of Voice First devices in 10 years will rival the growth of tablet sales

What becomes of the device, the visual operating system, and the app when there is little or no need to touch them? You can see just how disrupted the future is for just about every element of technology and business.

The Accelerating Rate Of Change

We have all rapidly acclimated to the accelerating rate of change this epoch has presented, exemplified by the monumental shift from the mechanical keyboards of cell phones, typified by the BlackBerry at the apex of its popularity in 2007, to typing on a glass touch screen, brought about by the release of the iPhone. All quarters, from the technological sophisticates to the common business user to the typical consumer, said they would never give up the mechanical keyboard [16]. Yet, they did. By 2012, the shift toward glass touch screens and simulated keyboards had become so cemented that no one was going back. In ten years, few will remember the tremendous number of cognitive and mechanical steps we went through just to glean simple answers.

iPhone vs. Blackberry
Typical iPhone vs. Blackberry comparisons in 2008

One Big Computer As One Big Brain

In many ways, we have come full circle to the predictions made in the 1960s that there would be no need for more than a few dozen computers in the world. Huge self-learning AI systems like Viv will use the “hive mind” of the crowd. Some predict this will render our local computers little more than “dumb pipes,” conduits to one or a few very smart computers in the cloud. We will all face the positive and the negative aspects this will bring.

In 2016, we are at the precipice of something grand and historic. Each improvement in the way we interact with computers brought about long-term effects nearly impossible to calculate. Each improvement of computer interaction lowered the bar for access to a larger group. Each improvement in the way we interact with computers stripped away the priesthoods, from the 1960s computer scientists on through to today’s data science engineers. Each improvement democratized access to vast storehouses of information and, potentially, knowledge.

For the last 60 years of computing, humans adapted to the computer. For the next 60 years, the computer will adapt to us. It will be our voices that lead the way; it will be a revolution.
______
[1] http://www.columbia.edu/cu/computinghistory/fisk.pdf
[2] https://en.wikipedia.org/wiki/Origin_of_speech
[3] http://en.citizendium.org/wiki/Speech_Recognition
[4] https://en.wikipedia.org/wiki/W._Edwards_Deming
[5] http://qr.ae/RUWZH7
[6] https://en.wikipedia.org/wiki/Nuance_Communications
[7] http://qr.ae/RUWSGC
[8] https://en.wikipedia.org/wiki/Amazon_Web_Services
[9] https://en.wikipedia.org/wiki/Amazon.com#Amazon_Prime
[10] https://developer.amazon.com/appsandservices/solutions/alexa/alexa-skills-kit
[11] http://viv.ai
[12] https://en.wikipedia.org/wiki/Ontology_%28information_science%29
[13] https://en.wikipedia.org/wiki/Taxonomy_(general)
[14] https://en.wikipedia.org/wiki/Apple_TV
[15] https://www.microsoft.com/cognitive-services/en-us/emotion-api
[16] http://crackberry.com/top-10-reasons-why-iphone-still-no-blackberry

Tidbits about Bots

The software world is all atwitter about the latest technology to head its way: bots. Touted by some as the next evolution of apps and by others as the critical first step in enabling realistic human-to-machine interactions, bots have become the hot topic du jour. Featured recently by Microsoft at their Build conference, predicted to be further discussed by Facebook at their F8 conference, and already in reasonably wide use on messaging platforms such as Twitter, WeChat, Slack and others, bots are seen as an exciting outgrowth of recent advancements in AI (artificial intelligence).

Conceptually, the idea of an automated electronic assistant that excels at performing specific tasks is certainly an appealing one. Bots that could take the place of those widely despised automated customer service systems we all have had to interact with on the telephone, for example, might be a godsend in comparison. However, most efforts toward more intelligent automated telephony support have been a dismal failure, resulting in desperate cries for a human operator.

Some web-based support systems are a bit better but most of the ones that are actually useful are staffed by real people. Now, that’s great for you and me as customers, but expensive and often difficult to scale for companies who use them. So, there’s clearly a business incentive to drive the creation of automated bots that could perform across a variety of different communications mediums and platforms.

But I have to admit I get concerned about how far this could go. For one thing, the idea of certain kinds of bots proactively reaching out to me when I don’t initiate the conversation could get annoying (and time consuming) very quickly. The notion of bot spam is fairly disconcerting and yet, utterly predictable.[pullquote]The notion of bot spam is fairly disconcerting and yet, utterly predictable.[/pullquote]

More important, though, is the question of how many bots people can and would be willing to interact with. Right now, much of the discussion around bots seems to suggest they focus on doing one specific task, such as creating a flight reservation or booking a good restaurant with an available opening.

To a point, that’s all well and good. Very quickly, however, it’s not difficult to imagine getting overwhelmed by bot requests and interactions, not all of which will even work the same way. Plus, unlike mobile apps, which you consciously have to choose to download and use, interactions with bots could very well be foisted upon us. The mere act of visiting a web site could immediately launch an interaction with a bot.

In some instances, the additional level of support and help that a bot interaction might enable could prove very useful. But if we already know what we want to do, it could serve as interference and actually slow us down. Obviously, bot designers will need to take these kinds of permutations into consideration as they develop their bots. But as the technology first ramps up, and before some of these lessons are learned, I have a feeling there could be a lot of frustrating bot interactions.

Part of the problem is that there will likely be different kinds of bots on different platforms. In theory, of course, any bot should be able to handle normal human interactions, rendering platform differences moot. But the realistic application of technology never really works this way. As a result, either the subtle differences in how bot-type services get deployed across platforms, or the lock-in strategies various platforms will likely leverage to drive higher usage for themselves, are bound to get in the way of our early bot encounters.

As platforms standardize and leaders rise to the surface, these issues could fade away, particularly if companies can create a solution that makes using tens or even hundreds of individual bots feel like natural extensions to a single experience. In the interim however, I expect we’ll see some vigorous new platform battles that will keep our bot interactions from achieving the kind of awe-inspiring wizardry of which they are potentially capable.

Is Retail Hindering Product Innovation?

As I work with tech startup companies to create and manufacture their new hardware products, one of the frequent challenges is the difficulty in meeting their sales expectations. All of the excitement and anticipation during the development and manufacturing phases invariably collide with the harsh reality of the marketplace. “Build it and they will come” rarely works and disappointment invariably follows.

In fact, designing and manufacturing the product, no matter how complex, is often easier and less costly than generating substantial sales. And it’s become even more difficult with the changing retail landscape.

There are numerous reasons for this disappointment, starting with the expectation that a company’s new product will be as compelling to others as it is to them. Being so close to their product for so long, they assume others will instantly appreciate all of its benefits. That rarely happens. The word needs to get out; the new product’s value needs to be explained, compared and justified. The buying process is not instant and takes considerable time.

For a new consumer product to succeed, it needs to reach a wide and receptive audience. While it begins with a strong PR campaign and online reviews, that’s rarely enough to sustain sales. Ultimately, the product needs to get in front of customers so they can see it, touch it, and experience it.

It used to be much easier to do that than it is today, when there were a large number of retail stores and substantial customer traffic roaming the aisles, looking for something new. In my case, I used to visit electronics stores a few times a month but rarely do so anymore.

It’s become less appealing and more difficult for product companies to get into retail. With the shrinking number of stores and falling profits, there’s less shelf space to display new products. Lower profits mean less capable salespeople and fewer sales.

In many instances, retailers no longer buy a product but, in reality, take it on consignment. They don’t tell you that, but that’s usually the way it works out. They’ll send the product back if it doesn’t sell and will often delay payments for 90 or 120 days. Some retailers are well known for shorting the returns to eke out some more profit.

Even if a company gets its products onto the shelves, there’s no guarantee of success. The company needs to create demand to pull the products through. It’s all about sell-through, not sell-in.

With the retail channel becoming so difficult, many companies turn to selling their product on their own website, on Amazon, or through Kickstarter or Indiegogo, supplemented by online advertising. Amazon has become the go-to replacement for major retailers, at least at the outset. It’s easy to get a product onto Amazon, and the initial user feedback in reviews can often help jumpstart sales. Amazon also provides useful sell-through data and offers an assortment of marketing services. But in my experience, rarely can Amazon or any online merchant match a strong retail presence. Retail presence catches eyeballs and creates awareness; on Amazon, by contrast, a shopper first has to be alerted to the product’s existence.

One company I worked with thought they were doing well on Amazon until their product was featured on an end cap display at Target. In one weekend they sold more than they had in six months on Amazon. Getting a product into a chain with 1,500 stores and selling just 2 items a week per store amounts to about 156,000 units a year.
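The back-of-the-envelope math behind that chain-store scale is simple enough to sketch (this is my own illustration, not from the original piece):

```python
def annual_units(stores, units_per_store_per_week, weeks_per_year=52):
    """Chain-wide sell-through: stores x weekly rate per store x weeks."""
    return stores * units_per_store_per_week * weeks_per_year

# 1,500 stores moving just 2 units a week each:
print(annual_units(1500, 2))  # 156000
```

Even a trickle of sales per store compounds into six-figure annual volume once multiplied across a large chain, which is exactly why physical shelf space remains so valuable.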

While Kickstarter and Indiegogo were originally established to fund the development of new products, increasingly these sites are being used simply to pre-sell products and collect payment in advance. That eases the cash flow demands of building hardware, in contrast with the demands of the slow-paying retail channel, but it doesn’t necessarily translate into sales beyond the initial buyers.

So this is the challenge: it’s becoming easier than ever to build new products, yet more difficult to sell them in large volumes as the retail channel shrinks. In fact, the retail channel is becoming a major hindrance to a product’s success and a deterrent to product innovation.