The Emergence of Purpose-Built IoT Networks

One of the ‘next big things’ in the mobile landscape is going to be the Internet of Things (IoT) – the billions of devices that will be connected to the internet in the coming years. Major categories of IoT devices include the connected home, automotive/telematics, industrial, smart cities, healthcare, and transportation. After several years of analysts talking about and forecasting IoT, the market is starting to become real. Module prices have fallen to under $10 in some cases. Enterprise CIOs are starting to invest in IoT projects.

Most of the major mobile network operators are putting significant resources into developing an IoT business. It isn't as sexy as the next iPhone or virtual reality – more of a series of base hits than triples or home runs. But Verizon has said it is on track to reach about $1 billion in IoT revenues in 2016 and has acquired three IoT-related companies this year. AT&T reported that, as of the second quarter of 2016, it had 29 million connected devices on its network (excluding smartphones and tablets).

One of the inhibitors to more rapid IoT growth has been the lack of the right type of network to connect these billions of devices. Many of the devices or sensors have different connectivity requirements than the typical smartphone, tablet, or connected car. They need a network that will support low power requirements (batteries that last years, not days), have wide area and strong in-building coverage, and support relatively low and/or bursty data speeds. Historically, legacy 2G networks (remember GSM?) have supported some IoT devices. But the operators are slowly retiring these networks so they can refarm the spectrum to meet the demand of bandwidth-hungry smartphone users. And the typical LTE network isn’t really suited for many types of IoT devices: requiring too much power, not having adequate reach, and costing too much for a sensor that might only consume a few kilobytes a day.

Fortunately, help is on the way. We are seeing the emergence of purpose-built networks for IoT, called [wait for the really unwieldy marketing name] Low Power Wide Area Networks (LPWANs). LPWANs are intended for IoT solutions that need low power consumption, extended battery life (5-10 years), modest data volumes (under 1 MB per day), long range (10 km or more), and good penetration in buildings and underground. They are an alternative both to wide area cellular networks and to short range networks (Wi-Fi, ZigBee).
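To make the "batteries that last years" claim concrete, here is a rough back-of-the-envelope sketch, in Python, of how long a battery might last for a sensor that wakes up to transmit a small payload once an hour. Every figure below (current draw, airtime, battery capacity) is an illustrative assumption, not a specification of LoRa, RPMA, Sigfox, or any other technology.

```python
# Rough battery-life estimate for a duty-cycled LPWAN sensor.
# All figures below are illustrative assumptions, not vendor specs.

SLEEP_CURRENT_MA = 0.005       # ~5 microamps while asleep
TX_CURRENT_MA = 40.0           # current draw during a transmission
TX_SECONDS_PER_MESSAGE = 2.0   # airtime per uplink message
MESSAGES_PER_DAY = 24          # one small reading per hour
BATTERY_MAH = 2400             # roughly two AA lithium cells

def estimated_battery_years() -> float:
    tx_hours_per_day = MESSAGES_PER_DAY * TX_SECONDS_PER_MESSAGE / 3600.0
    sleep_hours_per_day = 24.0 - tx_hours_per_day
    # Average daily charge consumed, in milliamp-hours.
    mah_per_day = (TX_CURRENT_MA * tx_hours_per_day
                   + SLEEP_CURRENT_MA * sleep_hours_per_day)
    return BATTERY_MAH / mah_per_day / 365.0

if __name__ == "__main__":
    print(f"Estimated battery life: {estimated_battery_years():.1f} years")
```

Under these assumed numbers the device lasts roughly ten years, which is why the 5-10 year battery claims are plausible as long as the radio spends almost all of its time asleep.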

But, like most things in tech, it’s complicated. During 2016, we have seen the launch of three types of LPWANs in the United States, all using the unlicensed band (like Wi-Fi), but each employing a different standard and business framework:

  • LoRa is an emerging standard for LPWANs and has attracted a fairly diverse group of tech sector players. In the United States, LoRa uses the 915 MHz band. A company called Senet is building a LoRaWAN network in the US.
  • Ingenu is a vertically integrated company that uses its own technology, called RPMA, in the unlicensed 2.4 GHz band, branded The Machine Network.
  • Sigfox has emerged as a rival to LoRaWAN. The company has raised some $300 million and operates several IoT networks in Europe. It has just started building in the US, where it also uses the unlicensed 900 MHz band.

Mobile Ecosystem estimates that, as of the end of Q3 2016, LPWANs covered 50-100 million POPs in the US across Ingenu, Sigfox, and LoRa (with some overlap). We predict the number of markets will at least double in 2017. Building these networks isn't like building cellular: these companies say they can cover a city using only 20-30 towers.

If your head isn’t already spinning, just wait. Mobile network operators also plan to build LPWAN networks, but using the licensed spectrum they own. The first phase, LTE Category M1 (LTE-M), could launch in 2017. A second phase, called NB-IoT, will follow LTE-M, featuring greater range and the ability to accommodate devices with even lower power and transmission requirements.

AT&T says it is testing LTE-M this year. Verizon says it will be commercially available by the end of 2016 in some markets, with a broader launch in 2017. T-Mobile has not announced LTE-M plans but is seeking to capitalize on AT&T's sunsetting of 2G by aggressively marketing its own 2G network for IoT, encouraging 'stranded' AT&T customers to switch to T-Mobile and enjoy free 2G service until the end of the year. Sprint hasn't said much but is rumored to be considering a LoRa-based deployment (its parent company, SoftBank, is deploying a LoRa network in Japan).

The deployment model for LTE-M is quite different from that of the unlicensed LPWAN crowd. Mobile operator IoT networks can be deployed relatively quickly (requiring only a software upgrade to existing radio equipment) but at a cost of some $15,000-25,000 per base station.

This is all good for growth in the IoT sector but one can already sense a bubble brewing. It’s hard to see how the market can support three different types of unlicensed networks plus the approaching LTE-M networks from the mobile operators. Enterprises considering large scale deployments aren’t going to want myriad devices operating in different bands.

We can already see cracks in the plans. Sigfox splashily announced in May that it plans to reach 100 cities in the US by the end of the year but does not appear anywhere close to achieving this goal. Ingenu continues to build out in the US but appears equally focused on licensing its RPMA technology in other geographies. LoRa in the US is relying on venture-backed Senet and a couple of smaller players; Comcast is testing LoRa under the brand MachineQ. All this activity in the unlicensed band will shake out over the next year or so, depending on how extensively the cellular operators build out LTE-M and how aggressively they try to sell it.

The good news, however, is the stars are starting to align for IoT: falling module prices; enterprises committing to larger scale deployments; a growing and diversifying supplier community; and the rollout of purpose-built networks and more suitable business frameworks to connect these billions of things.

A Dozen Acquisition Targets for Big Tech Companies

A little over two years ago, I wrote a post suggesting several companies Apple, Google, and Microsoft might want to acquire. As I thought about that post this week, I planned to revisit it in a similar format but then decided to approach things from a different angle. So here is a list of businesses I think would make interesting acquisition targets for the major consumer tech companies, in several major categories.

Hardware

The most interesting acquisition targets in the hardware space are those that seem to have cracked a niche but are struggling to grow beyond it and would benefit from being part of a larger ecosystem. The two that come readily to mind are Fitbit and GoPro, both of which I identified around a year ago as one-trick pony consumer technology companies that would likely struggle to find long-term success as standalone businesses. Either would now make an interesting acquisition target for the right consumer tech acquirer but, of course, the big question is who. For GoPro, I've often felt the most obvious acquirer is one of the two big camera companies, Canon or Nikon. But I think there's also potential for Samsung to jump in. The fit (no pun intended) for Fitbit is less obvious but again, Samsung or one of the other big multi-category consumer electronics vendors seems the most likely bet. Both Fitbit and GoPro have done well, to a point, but now seem to be at something of a crossroads.

Conversely, there are those companies that seem to have peaked and are now more clearly on a downward slope. Jawbone was one of the companies I mentioned in that earlier piece and it appears to have struggled recently. It would likely be a bargain for an acquirer interested in the audio or fitness space (or both). A slightly more long-shot bet is Nintendo, which has occasionally been suggested as a target for Apple, and which has also been struggling quite a bit, though the recent success of Pokemon Go has raised hopes of a comeback. Apple might still be an interesting prospect, but either Microsoft or Sony or a content conglomerate might be able to do something interesting with all the technology and IP Nintendo still owns. We might add HTC to this list of hardware companies past their prime too. The Vive VR business would be an interesting asset even if much of the rest of the company is less attractive.

Apps and content

This is probably the broadest category here and there’s no shortage of potential acquisitions. The companies in this sector run the gamut from subscription content providers to one-off app makers and across a number of different domains. Netflix is a perennial subject of acquisition rumors but is now getting to a size where the number of potential acquirers is rapidly dwindling – I’ve suggested Apple as a potential acquirer in the past, at least somewhat seriously, but that remains a true long shot. Also in the content space is Spotify, which I mentioned in my piece two years ago as a potential acquisition for Google. It’s heading towards an IPO but the other possibility is an exit by acquisition and I’d say Google has to be the most likely candidate, though Microsoft is another intriguing possibility. The latter hasn’t been afraid to make productivity-centric acquisitions in the consumer market and has largely failed to create content businesses beyond gaming. This would be a big leap forward in that domain.

In the one-off app space are such diverse options as Pinterest (which I suggested as an acquisition target for Google two years ago); Musical.ly, which remains one of the most under-appreciated apps outside of its target demographic; and the Kik messaging app. Pinterest would still be an interesting addition for either Google or Amazon, as either an advertising or e-commerce bolt-on to their existing businesses. Amazon in particular has been willing to buy smaller businesses in adjacent spaces and continue to run them independently under their own brands – Audible, Goodreads, IMDb, and Zappos are all existing examples and Pinterest could follow that model while benefitting from some integration behind the scenes. Musical.ly seems almost certain to be snapped up eventually by one of the big social networking or online advertising companies. And Kik is the rare example of an independent messaging app with a big user base.

Car technology

Samsung's recent announcement of its intent to buy Harman International will likely create further interest among big technology companies in the automotive industry. Harman was a somewhat unique asset here, in that it combined significant market share with a relatively focused scope. It promises particularly good synergies with the rest of the Samsung business. BlackBerry, which acquired former Harman subsidiary QNX six years ago, makes for an intriguing prospect. The handset baggage is minimal at this point, now that the company has finally made the hard decision to discontinue making its own devices, so it's largely a software and services company. Microsoft would be an obvious buyer, though the QNX part would likely raise antitrust concerns given the two companies are the dominant players in car operating systems. An Apple-BlackBerry marriage has always seemed particularly unlikely but is perhaps less so now, while Google would make another interesting buyer. TomTom is another interesting car-related asset that remains independent even as much of its competition has become part of bigger businesses. Apple relies heavily on TomTom for mapping, though it's building up its own assets in some markets, and would perhaps be the most likely buyer at this point, especially if it wants to get serious about self-driving cars.

Others

There are, of course, plenty of others I could list here, including some from the earlier piece and relative newcomers like Magic Leap. It’s striking that only one of the companies on my list from two years ago has actually changed hands while several remain interesting prospects for acquisition. But I wouldn’t be surprised if a higher percentage of the companies I’ve listed here end up being bought over the next two years.

Why Apple Needs to Take Aim at Their Core Customers

On Steve Jobs' second day back leading Apple in 1997, I had a chance to meet with him and ask how he planned to revive and save the company. Apple was $1 billion in the red and we now know it was about 6-8 weeks from possibly going under. He did not hesitate to tell me he had two key initiatives to bring Apple back to health.

The first thing he told me was he was going to go back and take care of the needs of his core customers. He defined these customers as the creative types who loved the Mac as well as engineers, programmers, publishers, and ad agencies. Indeed, these were the users who put the Mac on the map when it was first released in 1984. Jobs felt that, in the time he had been gone, past Apple CEOs had forgotten about these customers as they tried to expand the Mac’s reach in the marketplace.

The second thing he said he would do would be to focus on industrial design. Even then, Jobs saw something none of us did at the time. He started Apple down a path towards making design a cornerstone of all Apple’s future products.

But it was his first initiative that has been coming back to me a lot these days as I have read multiple stories suggesting Apple has been too slow to upgrade products and innovate on the Mac, especially when it comes to meeting the needs of its core customers. Various articles suggest Microsoft, particularly with its Surface tablet PCs and its new Surface Studio desktop, is now the leading innovator in products for creative professionals and is starting to steal Apple's core customers.

Over Thanksgiving, I was told of a person who had been a major Apple devotee and is a serious creative professional. He decided to buy a high-end Windows machine, adding key processors and components to it, and said the renderings he was doing took considerably less time than they did on his Mac Pro. Consequently, his entire team bought these new modified Windows machines and sidelined their Mac Pros.

This may be an isolated case. I have also talked to other high-end creative types and, given their significant investments in software and hardware designed around Apple products, I just can’t see them ever jumping over to Windows. However, the fact this one creative pro was able to upgrade a Windows machine to deliver more power for faster rendering of their work is not something Apple can ignore.

The one complaint that seems common is it takes Apple too long to bring out new MacBook Pros and Mac Pros to keep up with the growing needs of creative professionals. This is not necessarily Apple's fault. They rely on Intel's upgrade cycles for next-generation CPUs, especially ones that meet Apple's design and power criteria. But it did take them 14 months to bring out a new MacBook Pro, something that has caused frustration among their creative community of users.

I believe Apple still has the creative community high in its focus, although, to be honest, the products for this class of users are more like trucks than sedans. When Steve Jobs introduced the iPad in 2010, he said PCs were like trucks, designed for specific uses, while the iPad was more like a car, and that was where the largest growth in users would be. Although I believe Apple will always make Mac Pros, MacBook Pros, and MacBooks (representing around $20 billion of their current revenue), I do think that, over time, they would like to see more and more people transition to an iPad Pro and iOS as it has the best link to their services business, which is a huge growth segment for them.

Regardless of Apple's long term strategy, I do think Steve Jobs' goal of keeping their core customers happy needs to be top of mind for Apple. I also think they probably need to be quicker in innovating around the Mac Pro platform as it is clear Microsoft has these same customers on its radar and would love to steal them from Apple if it can. While this market is small, the products for these customers carry high margins and remain a very lucrative product line for Apple. I don't think they want to give up any ground to Microsoft if they can avoid it and I do expect them to continue to make the MacBook Pro and Mac Pro best-of-class tools for the creative community.

Will Amazon Silence Alexa with a Screen?

According to Bloomberg, Amazon is developing a high-end Echo-like device which will feature a better speaker and a seven-inch touchscreen. The speaker is said to be larger and to tilt upwards so the screen is visible when the device sits on a shelf or counter and the user is standing. The Wall Street Journal reported earlier this year that Amazon's Lab126 hardware unit was working on an Alexa-powered device featuring a tablet-like computer screen, known internally as "Knight." The device will run a version of Fire OS.

The temptation of adding a screen

The people familiar with the product who talked to Bloomberg said the screen will make it easier to access content such as weather forecasts, calendar appointments, and news. It might just be me but I struggle to see this as a solid business driver. The great advantage of using Alexa for my morning briefing is that I can listen to it while I get breakfast ready or pack my daughter's lunchbox. I would not have time to stop and read or even look at something. Also, Alexa's voice carries across the room over the morning chaos, whereas a screen would require me to move closer to be able to look at it.

I cannot help but think the main task a screen will help with, when it comes to Alexa, is shopping. If I am trying to buy furniture, clothes, or gifts, being able to see them is a huge improvement over Alexa just reading out the item descriptions.

Having a screen could, of course, also help with content and allow Amazon to enrich some of the experiences by adding a visual output to the voice. Music is a good example of this. But the question is whether Amazon needs to add that screen to Echo.

While a screen could add to the overall experience, I strongly believe it should not be an alternative input mechanism. Adding touch to voice would weaken Alexa in an environment where consumers feel very comfortable using their voice. As voice-first is not yet an entrenched behavior, giving an alternative would slow down adoption and negate the considerable progress Amazon has made in this area.

Leveraging Existing Screens vs. Adding a New One

There are plenty of screens in the home Alexa could leverage — some might even be "controlled" by Amazon, like a Fire TV or tablet. Others, like our phones, could be exploited by the Alexa app. If our interactions with Alexa remain voice-first/only, the screen would be a simple display with no need to interact with it. This would make the Fire TV the perfect companion for Alexa.

The risk of adding touch is, even if Amazon does not intend it as an alternative input mechanism, consumers at this initial market adoption stage might easily revert to old habits. In a way, this reminds me of how people, at the beginning of the tablet market, bought a keyboard to use with their tablets so they could revert to a user experience they had experienced for so long with PCs and that felt familiar and safe.

Over time, as AI continues to develop, I could see a role for a device that intelligently understands what is appropriate to show on the screen and proactively does that by having Alexa suggest, “Do you want to visualize it?” or saying, “let me show you.” There are instances where displaying the content seems easier than an alternative solution. Recipes are often used as an example to illustrate how voice-only does not work. Yet, if you had an app that lets Alexa break down the steps so you could literally have her coach you through the recipe and check, “Ready?” or “Tell me when you are ready”, you would not need to visualize the steps.
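As a thought experiment, the voice-only recipe flow described above boils down to a tiny state machine: read a step aloud, wait until the user says they are ready, then move on. Here is a minimal sketch in plain Python; speak() and listen() are hypothetical placeholders standing in for a real voice platform's text-to-speech and speech recognition, not actual Alexa Skills Kit calls.

```python
# Minimal sketch of a voice-only recipe coach as a state machine.
# speak() and listen() are hypothetical stand-ins for a voice platform's
# text-to-speech and speech recognition.

RECIPE_STEPS = [
    "Preheat the oven to 200 degrees Celsius.",
    "Whisk the eggs and sugar until pale.",
    "Fold in the flour and pour into the tin.",
    "Bake for 25 minutes.",
]

def speak(text: str) -> None:
    print(f"ASSISTANT: {text}")

def listen() -> str:
    return input("USER: ").strip().lower()

def coach_through(steps: list[str]) -> None:
    for i, step in enumerate(steps, start=1):
        speak(f"Step {i}: {step}")
        # Wait until the user confirms before advancing, so no screen is needed.
        while True:
            speak("Tell me when you are ready for the next step.")
            if "ready" in listen():
                break
    speak("That's the last step. Enjoy!")

if __name__ == "__main__":
    coach_through(RECIPE_STEPS)
```

The point of the sketch is that pacing the interaction around the user's confirmation, rather than around a display, is what keeps the experience voice-first.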

The Risk of Turning Alexa from Leading Actress into a Supporting Role

Echo was successful because people bought it for what it was: a speaker with a digital assistant. Actually, a digital assistant in a speaker would be a better description of what consumers were buying. Users had no option but to talk to Alexa to get her to do anything. There was no old behavior to revert to.

Ironically, Alexa being trapped in the little cylinder allowed her to be free: free of any limitations that being part of a more traditional device, such as a smartphone or a tablet, would have imposed on her. For people who bought Echo, there was nothing the device could do other than let them interact with Alexa. Trying to turn Echo into a glorified Fire tablet could demote Alexa to a mere feature vs. the genie in the bottle she is today.

Amazon needs to penetrate our homes more as well as expand beyond them to grow engagement but this needs to be done in a way that leaves consumers deeply connected with Alexa so their reliance feeds their loyalty. Voice needs to remain the main input as this is ultimately how our assistant will become personal.

While competition in this space is growing, the battle will not be won by adding features that, while differentiating in looks, weaken the core experience. Accelerating Alexa's integration with other devices, continuing to expand her skills, and improving her knowledge will help Amazon stay ahead of the curve and keep users engaged and loyal.

DirecTV Now Highlights the Challenges of US TV

On Monday afternoon, AT&T finally announced its DirecTV Now service. It has been talking about this in detail since earlier this spring and for even longer in a more general sense. The service launches on Wednesday and marks the latest entrant in the online Pay-TV replacement market though, of course, it hasn’t been marketed that way. But the service, its structure and, more importantly, its limitations, highlight the challenges associated with operating in the US pay TV market.

What consumers want is straightforward

When it comes to TV, what consumers want is straightforward: they want to watch what they want, when they want, where they want, preferably without ads, and they want to pay as little for it as possible. The last twenty years have seen many attempts to give us these things, starting with boxes (VCRs and DVRs, SlingBox and so on) and then moving on to service structure and cloud offerings to achieve the same objectives. Netflix and HBO have each, in their own way, demonstrated the power of “no ads” while a variety of online services have shown how compelling an entirely on-demand world can be. But we still haven’t seen all these elements combined into a single package. Every service offers only a subset – great interface with no ads but a limited amount of content; or tons of content, but with ads and a crummy interface. Every new service to launch holds out the hope someone will finally crack all this but they always fall short in one or more ways.

What’s not there is as important as what is

Many of the headlines in the weeks leading up to the launch focused on two numbers – 100 channels and $35. However, it's now clear these numbers don't really go together, at least in the long term. AT&T's $35 package has "60+" channels, not 100. The 100-channel package will cost $60 over the long term (though there will be a limited-time promotion offering the $35/100-channel combination at launch). One of the most frustrating things about following the event was that AT&T didn't announce the exact channel lineups – the focus was all on the number of channels in each package. I suspect that was deliberate: it's what consumers are used to and it stops people complaining that one or another channel isn't included.

Yet that's exactly where the focus has inevitably landed as details on the contents of the various packages have trickled out and as more of the asterisks on the service have become clear. So far, we know of the following limitations on the service as a whole:

  • CBS and Showtime are entirely absent for now
  • The service won’t work on Roku devices at the outset
  • Subscribers won’t be able to watch NFL games on their phones
  • There will be no DVR functionality
  • Local ABC, Fox, and NBC stations will only be available where those companies own and operate the local stations.

There’s also still the age-old issue of forced bundling – yes, there are several tiers here but it’s still far from a la carte. In short, though the promise is some version of the “watch what you want, when you want, where you want” story, there will indeed be limitations on what, where, and when you can watch. And because there’s no DVR, you won’t be able to skip the ads. Much of this, of course, comes down to two things: the complicated structure of the US market and AT&T’s mixed incentives as it launches this new product.

The impact on the market

The big thing DirecTV Now has going for it is that it comes from one of the big names already in the pay TV business, under its own brand, and will also be tied to the AT&T brand. Yes, DISH has launched its Sling TV service, but that operates under a separate, unfamiliar brand. Sony's PlayStation Vue service has been limited until very recently to Sony's own devices. Hulu, YouTube, Amazon, and others are all supposedly working on their own offerings here too, but none have launched yet. I suspect all that means DirecTV Now will do very well in the market, especially with its promotional pricing and its tie-ins with Apple TV and Amazon Fire TV Stick hardware. Customers will be able to buy the service and the hardware needed to consume it in very competitively priced bundles and they'll be able to do it in AT&T stores.

The zero-rating of DirecTV Now content on the AT&T network will also likely help a little, though with T-Mobile offering unlimited (if throttled) video and Sprint offering unlimited data plans, that value proposition probably isn't as strong as it could be. We've also yet to see any specifics around the bundling of AT&T wireless service with DirecTV Now. Overall, though, this service is likely to do well, especially compared to the existing virtual MVPDs in the market, though I suspect there may well be some customer backlash as some of the limitations become clear. The good news is those customers will be able to cancel anytime and won't be locked in as they traditionally would have been with a pay TV provider.

From a content provider perspective, despite optimism in some quarters about finding new pay TV customers, I suspect the real impact will be a further shift of big traditional bundles from the existing providers towards these skinnier bundles, as people finally sense a way out of paying far too much for channels they don’t use. The most telling statistic in the industry is always that the average customer gets 200 channels in their package, but only watches about 20. People are crying out for smaller packages and now, they’re finally starting to get them. But they’re also still crying out for that vision of the content they want on their terms and this new offering from AT&T won’t slake that thirst just yet.

The Demise of the Others

I’d like to offer an interesting observation. As I look back over the past annual shipments of the PC, smartphone, and tablet categories, an interesting pattern appears. Take a look at this chart:

[Chart: annual shipments for the PC, smartphone, and tablet categories, with the "other" category highlighted]

I like to keep track of how hardware brands sell within any particular category. Each of the PC, smartphone, and tablet categories has its leading brands that absorb most of the volume. While we can list the estimates by quarter for every major brand, we tend only to break out individual brands that are the class leaders and group everyone else into a category called "Others". This group can consist of a name brand that simply doesn't sell in high volume but it also includes any number of white-label or upstart brands trying to capitalize on the potential S-curve of growth.

The observation I'd like to point out is, at the start of each new category, major name brands own the largest percentage of volume. These name brands are primarily responsible for creating the segment since they already have some established brand trust with consumers. Once the new segment begins to grow, non-name brands start to flood the market. The Chinese tech manufacturing scene is key to this phase of the market as it makes it cost effective for nearly anyone to slap a brand on a piece of hardware and try to compete in new categories. These new brands attempt to ride the growth wave of the category but then something interesting happens. At about the point the market for the specific segment matures, the volume and percentage of total shipments for others begin to decline. The market slowly consolidates and often comes back to the brands that were there from the start or that emerged out of the others category to become recognizable brands.

In each of the three categories I charted, we see this pattern play out. Name brands dominate the majority share of products shipped. Once the category starts to accelerate, a flood of brand upstarts begins to enter the market. When we isolate the brands that ship under 10 million units per quarter and add them to the others section in each of the three categories mentioned, we see this group often make up 40-50% of all devices shipped at the peak of the cycle. Sometimes this is simply a gold rush but, in some cases, companies are making a valid attempt to become a name brand. As we enter the post-peak, more mature stage of the cycle, we see the decline of the "other" category as name brands begin to reabsorb the bulk of quarterly shipments.
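For the curious, the bookkeeping behind that observation is simple: any brand shipping under 10 million units in a quarter gets folded into "others", and that group's share of total shipments is tracked over time. A small pandas sketch with made-up shipment numbers illustrates the calculation; the figures are hypothetical, not estimates for any real quarter.

```python
import pandas as pd

# Hypothetical quarterly shipments (millions of units) for one category.
shipments = pd.DataFrame({
    "brand":   ["A", "B", "C", "D", "E", "F", "G", "H"],
    "units_m": [70.0, 45.0, 25.0, 9.0, 8.0, 8.0, 7.0, 6.0],
})

# Any brand shipping under 10M units in the quarter is folded into "others".
is_other = shipments["units_m"] < 10.0
others_share = shipments.loc[is_other, "units_m"].sum() / shipments["units_m"].sum()

print(f"'Others' share of quarterly shipments: {others_share:.1%}")
```

Run quarter by quarter, that single ratio is the line that swells toward 40-50% at the peak of the cycle and then shrinks as the category matures.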

This observation further deepens my conviction that the single most important thing any technology company can do to help ensure a long life in the industry is to establish a strong brand. In an era where the perception of low-end disruption and good enough products has led to many false assumptions, I’d argue a strong brand is one of the most powerful defenses against that disruption.

The competitive dynamic in each new technology category becomes fascinating to watch as established brands look to fend off upstarts who flood the market and offer lower pricing or a differentiated service in order to create a brand of their own. In either case, the longer a company can compete and sustain itself in any given category, the more likely it is to be among the winners once the category matures and consolidates back around established brands. While the PC category's lifecycle allowed those in the other category more time to establish themselves, the smartphone and tablet categories did not. Companies looking to emerge as a brand from the other category in smartphones and tablets had roughly 3-4 years to accomplish it. Most failed.

The last thing to add is how this dynamic can be true of any brand, including those that don't start off in hardware but eventually get into hardware once the category consolidates to brands. Microsoft, for example, is starting to become a genuine threat to other PC makers even though it is relatively new to the category. It established its brand in software and services, not hardware, but then entered the hardware market at the time it was consolidating around brands. Google is similarly attempting this with the Pixel by entering the smartphone segment only after its brand was established in something other than hardware. Both Amazon and Snapchat are employing similar strategies by entering hardware categories only after building a strong brand and customer loyalty.

I believe this pattern will play out in every major category we observe. We are starting to see it in wearables, we will see it in AR/VR, and anything else that comes along in the future.

The moral of the story is a brand is critical. Whether a company starts in hardware, software, or services and then tries to enter a hardware category, the most important strategic thing it can do is establish a strong brand. If successful, the options open to it in the future are plentiful.

The Tech Industry and the Search for a Cancer Cure

As someone who has tracked the tech market for the past 35 years, there is one major theme I have seen over and over again when it comes to the goals of many tech innovators. They believe the technology they create has the potential to change the world. I have frequently heard tech executives say they think their inventions or technologies are world-changing devices or services.

From a historical perspective, that is very true. Technologies like the Gutenberg press, the steam engine, Edison's light bulb, and Alexander Graham Bell's telephone, or more recent inventions like the semiconductor, the PC, and the smartphone, have indeed been world-changing in what they do and how they drive new industries and the world's economies.

Steve Jobs was one of the most vocal on this topic and, in many speeches, he talked about Apple's goal to change the world. Some of his products, especially the iPod, iPhone, and iPad, have been world-changing devices in terms of how they expanded personal computing, communications, and entertainment. Services like Facebook and Twitter have had a huge impact on connecting people around the world in ways we could not have imagined even 10 years ago.

However, I have been wondering what would happen if Silicon Valley, with its innovative thinkers and problem-solving skills, took stronger aim at some of the huge problems we have in healthcare, especially finding cures for diseases like cancer, diabetes, and other major illnesses, and how that could impact the fight against life-threatening conditions.

I think most of us either know people who have had cancer or have had it ourselves and surely want a cure for this awful disease. Vice President Joe Biden's son died from cancer and he has devoted his life to what he calls a "moonshot" to try to find a cure. Health science has done great work and made serious strides against cancer but, even with these advances, there is still no actual cure.

It turns out Silicon Valley has been pretty active already. I recently found out about one of the newest initiatives of a major Silicon Valley company, NVIDIA, which, along with key government and private organizations, has made finding a cure for cancer a high priority.

NVIDIA recently announced it is teaming up with the National Cancer Institute, the US Department of Energy (DOE) and several national laboratories on an initiative to accelerate cancer research. The research efforts include a focus on building an AI framework called CANDLE (Cancer Distributed Learning Environment), which will provide a common discovery platform that brings the power of AI to the fight against cancer. CANDLE will be the first AI framework designed to change the way we understand cancer, providing data scientists around the world with a powerful tool against this disease.

One of NVIDIA's claims to fame is its powerful graphics processing units (GPUs), which help power some of the fastest supercomputers in the world. These processors are also at the heart of NVIDIA's major push into artificial intelligence and deep learning. They can perform billions of operations per second and are central to a new generation of data science, in which AI and deep learning are used to mine data at its deepest levels and try to find answers to big problems.

I have known NVIDIA’s founder and CEO Jen-Hsun Huang for 15 years and he is one of the most energetic and visionary leaders in Silicon Valley. He is very passionate about deep learning and its potential impact on our world. “GPU deep learning has given us a new tool to tackle grand challenges that have, up to now, been too complex for even the most powerful supercomputers,” he said. “Together with the Department of Energy and the National Cancer Institute, we are creating an AI supercomputing platform for cancer research. This ambitious collaboration is a giant leap in accelerating one of our nation’s greatest undertakings, the fight against cancer.”

The cancer moonshot strategic computing partnership between the DOE and NCI to accelerate precision oncology includes three pilot projects that aim to provide a better understanding of how cancer grows; discover more effective, less toxic therapies than existing ones; and understand key drivers of their effectiveness outside the clinical trial setting, at the population level. Deep learning techniques are essential for each of these projects.


NVIDIA is not the only tech company taking aim at the cancer "moonshot." IBM's Watson group has joined with the Department of Veterans Affairs to launch a public/private partnership to give veterans who have cancer a better chance of recovery. Watson is the supercomputer that won Jeopardy! and is one of the most powerful AI-based systems in the world.

One of Silicon Valley's giants, Intel, has invested heavily in AI and deep learning research and is creating a new AI framework around its most powerful processors, which will help power some of the biggest data projects in the world. In terms of cancer research, Intel has teamed up with the Oregon Health & Science University Knight Cancer Institute, the Ontario Institute for Cancer Research, and the Dana-Farber Cancer Institute to create a collaborative "cancer database" they will use to help advance research on better ways to treat cancer as well as, someday, a cure.

Given Silicon Valley's quest to change the world and its immense problem-solving skills, having the Valley turn its technology and skills toward these diseases could help speed up the search for treatments and, ultimately, cures for cancer, diabetes, and other major health maladies.

Sleeping with the Enemy Would Benefit Both Microsoft & Apple

Because of what I do, I try different devices all the time. While I have used Windows 10 PCs since they became available, I never made one my main working device. For the past nine years, my main PC has been a Mac with the 12” MacBook as my latest device. Last week, I received a Surface Book with Performance Base and, after setting it up, I decided to try and make the switch.

I was particularly interested in understanding how, as a user, I could continue to benefit from the Apple ecosystem even if I did not have a Mac, and what opportunity Microsoft has to deliver the best Windows 10 + iOS experience. This is important because there are more iOS users with a PC than there are iOS users with a Mac, so it offers an opportunity for both companies to improve the cross-platform experience. While there might be an opportunity for Apple to convert a few of those PC users, the great majority are comfortable right where they are. Offering an easier cross-platform experience between iOS and Windows 10 as a differentiator for Surface would clearly benefit Microsoft.

Hardware and Windows 10 are the Easy Part

As I prepared to transition to Surface Book, there were specific aspects of my workflow I needed to address.

The hardware was not a problem. I love the keyboard. I spend a lot of my day typing and I was not a fan of the keyboard on the 12” MacBook. Typing on the Surface Book is extremely rewarding. The trackpad is a little more sensitive than the one on the MacBook but it did not take long to get used to it. The Surface Book's fan kicks in often and it is quite loud, which was a bit of a distraction at first. The quality of the screen is great but I did not find myself touching it very much, other than with the pen to write quick notes.

I am not new to Windows 10 so transitioning was not an issue. The most annoying thing was trying to paste using the equivalent of Command-V, which obviously did not work. Mac users are very different and many use their systems in a much deeper way than I do, so I do not intend to speak for them. If you already use Office on the Mac, your transition will be much easier. If your documents are all in iCloud, your transition will also be easier. When I joined Creative Strategies back in April, I moved to the cloud and my multi-device life became so much smoother. Once I got on the macOS Sierra beta, things got even better as all the files I am working on are saved to the iCloud Desktop automatically, making my 'grab and go' routine easier. Something else that changed back in April is I now only travel with my 9.7” iPad Pro. Not having to think about whether I have all the files I need was extremely liberating. I downloaded iCloud for Windows on my Surface Book and all my work was easily accessible. Pages, Numbers, and Keynote were also fine to use, although Numbers documents lacked a few features and Keynote presentations had some font issues.

While the files were not an issue, remembering all the passwords for all the websites I use certainly was annoying but, of course, that is something you only do once.

I was concerned about losing the ability to unlock my machine with my Apple Watch, but Windows Hello on the Surface Book was seamless. I sat down at my desk and the Surface Book was unlocked. It felt like there was no password set up in the first place.

What it All Boils Down to: iMessage and Apps

In the end, what I really struggled with were two things that had nothing to do with the OS per se or the physical device.

I use iMessage a lot during my day and, while my iPhone is always next to me, I have become accustomed to using it on my Mac. I do this because it is more convenient using a full keyboard to type but mostly because it feels more part of whatever I am doing. It remains more central to my workflow rather than a side conversation on the phone.

The other big part of my day is Twitter and the client on Windows is just painful. I asked my followers for input but the sad answer was a validation of my pain. Using TweetDeck in the browser was far from perfect as, more often than not, I would accidentally close the window. So, as with iMessage, I resorted to having my iPad open next to the Surface Book, which really impacted my workflow.

iMessage for Windows

Why would Apple do that, you ask? Because it cements iOS users even more into iMessage vs. having them look for other apps that could have them disengage from iOS. I am not advocating Apple replicate all the features iMessage has on the iPhone; some features could remain exclusive to it. So, for instance, keep invisible ink for the iPhone and allow stickers. Apple has much to gain here, in contrast to what it would gain by putting iMessage on Android. iMessage for Windows is about recognizing not all iOS users will be Mac users and allowing them to still get the best experience from iOS. Opening iMessage to Android, on the other hand, would not really do much as far as churn goes, as there are plenty of other cross-platform messaging apps on phones that already leave users with plenty of choice.

With Windows launching the People app with third-party app plugins, it would be a perfect time for iMessage to be included.

More App investment

Microsoft has options to both improve Windows and differentiate Surface. There are steps Microsoft can take to engage more with developers to get apps to Windows. Even without a phone business to worry about, the Windows 10 app environment is behind. There are two sides to this equation: one speaks to the creators Microsoft is focusing on for the next software update and one speaks to consumers who are still very engaged with their PCs and, therefore, want a rich experience. With Apple's new MacBook Pro on the market and the eagerness to prove the Touch Bar is the right approach for touch on a Mac, I expect Apple to make developer engagement a top priority. Microsoft needs to do the same for the platform but should also step up efforts in first-party apps, both for Windows and for Surface. So, if Twitter is not interested in improving its app, why is Microsoft not building one?

There are other things the Windows Devices team could do for Surface like creating apps that help with content transfer for those people who are not already in the cloud.

Think Beyond Devices and Platform

What my experience made crystal clear is both Apple and Microsoft need to think beyond devices and the OS and think about the whole ecosystem and their ultimate goals.

If Apple is serious about shifting more revenue to services, why not take Apple Music out of iTunes and make it a standalone app? I have not used iTunes in years as, whenever I get a new device, all my backups are in the cloud. Having to use iTunes to play my music on a Mac or download iTunes to the Surface Book seems a very unnecessary step. While I am sure people still buy music, I would bet they are less likely to do it if they subscribe to Apple Music. Even if they did, a simple link to the store would be all they need.

For Microsoft, it is about recognizing that, whether in the consumer space or the enterprise one, Surface buyers are more likely than not to have an iPhone and be entrenched in iOS. Embracing what they are attached to, rather than forcing them to use other tools, would benefit engagement (although it might not benefit a specific service). OneDrive is a good example. While it was possible for me to access iCloud, there were more steps to take when saving documents, as the default was either OneDrive or Dropbox.

Together Against Google

Both companies need to also realize facilitating this Windows + iOS world will help limit the risk of Google taking advantage of the weaknesses and grabbing users. Again, this is not about devices. I do not expect Google to win consumers and enterprises with Chromebooks and Android tablets. This is about the much bigger battles: Digital Assistants and AI. Google has always been very good at using its device-agnostic approach to its advantage. Google Maps, Google Photos and now Allo are great examples of the extent Google goes to make sure it reaches valuable customers on other platforms. It is about time Apple and Microsoft started to play the same game.

The Wireless Industry is Changing Before Our Eyes

There are moments in time where one can sense important shifts going on in an industry. I think now is one of those times in wireless. First, some historical context – what have been the other ‘big shift’ moments?

  • Introduction of the portable phone (early 1990s) – made this the 'mobile' industry, not the 'car phone' industry
  • Move from analog to digital (mid 1990s)
  • Introduction of AT&T Digital One Rate (1998) – all but eliminated domestic long-distance and roaming charges
  • First popular smartphones (mid-2000s) – starting with BlackBerry and the Palm Treo, culminating in the introduction of the iPhone in 2007
  • Apple launching the App Store (2008)
  • Launch of LTE (2011) – the first real 'mobile broadband' network

This particular shift is different, in that it is not rooted in the introduction of a signature new product or service or a major technical advance. On the surface, wireless looks like business as usual: the carriers are still raking in the dollars; the latest iPhones haven't wowed but are still selling well; and the industry has started on a path toward the 'Next G'.

So you have to read between the lines to see a developing trend. Just look at what has happened in 2016:

Major operator moves. The mobile operators have realized growth in core wireless has slowed. Yes, IoT represents an important next area of opportunity but it’s going to take a while to get to those ‘billions of connected devices’. IoT is more a series of base hits than doubles or homers. As an example, total US wireless operator revenues from IoT (outside of tablets) are likely to be about 3% of total wireless revenues. Even if that grows 50% year-on-year for the next several years, investors want more. Which is why we’ve seen AT&T and Verizon aggressively expanding into new and adjacent areas of business in the hunt for top line growth and not betting all their marbles on IoT.
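To put that growth argument in numbers: if IoT starts at roughly 3% of wireless revenues and compounds at 50% a year while the core business stays flat, it still takes years to move the top line meaningfully. A quick illustrative sketch (all inputs are assumptions, not operator guidance):

```python
# Illustrative only: how long a small, fast-growing revenue line takes to matter.
total_today = 100.0        # index today's total wireless revenue to 100
iot_revenue = 3.0          # assume ~3% of the total comes from IoT today
IOT_GROWTH = 1.50          # 50% year-on-year growth
CORE_GROWTH = 1.00         # assume the core business stays flat

core_revenue = total_today - iot_revenue
for year in range(1, 6):
    iot_revenue *= IOT_GROWTH
    core_revenue *= CORE_GROWTH
    share = iot_revenue / (iot_revenue + core_revenue)
    print(f"Year {year}: IoT is {share:.1%} of total wireless revenue")
```

Under those assumptions, IoT is still under a fifth of total revenue five years out, which is exactly why operators are hunting for growth elsewhere in the meantime.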

A wave of acquisitions. In addition to Verizon-Yahoo (and three IoT companies) and AT&T-Time Warner, we are seeing a spike of acquisitions across the mobile ecosystem, including Qualcomm-NXP, Broadcom-Brocade, CenturyLink-Level 3, and a large number of smaller deals. A combination of consolidation and horizontal integration.

Diminishment of hardware. The iPhone 7 is selling fine but, let's face it – there has not been a new 'must have' phone since the iPhone 6 two years ago. Look at how hard it is for a smartphone OEM to gain share, regardless of how good the device is. The theme extends to the tepid reaction, in my opinion, to Apple's new Macs, to slowing tablet sales, and to the industry still figuring out how to blend the PC and the tablet, where, ironically, much of the innovation is coming from Microsoft. It is really now about software and ecosystems.

Layoffs and disappointing spectrum auction. Major industry bellwethers are going through a fairly painful round of layoffs: Verizon, Sprint, Qualcomm, Ericsson, Cisco, and others. Earnings last quarter weren’t so great and the lower than expected bids in the 600 MHz auction show that spectrum values might have peaked. At the same time, momentum toward spectrum sharing opportunities has intensified.

The results at Facebook and Alphabet. A contrast to 3Q earnings in the core mobile business and perhaps the most jolting statistic: Facebook and Alphabet command nearly 70% of all mobile advertising.

These trends show we are now moving toward a new set of industry drivers and the emergence of some new players. Rather than one ‘mega-development’, such as the iPhone or launch of LTE, I believe the next phase of developments in mobile will center around five themes – maybe a bit different than the drumbeat around IoT and AR/VR that you have been hearing from other prognosticators.

1. Shifting Power Centers. This trend has been developing for some time. Leading operators are diversifying their businesses. Apple and Samsung are less dominant. Google, Facebook, and Amazon are innovating and ascending. Microsoft is coming back. Some of the hot Silicon Valley startups are about messaging and chat! In advertising, Verizon has its work cut out for it, but there is pent-up demand to break up the Alphabet-Facebook juggernaut.

2. Content. We are in the midst of a multi-year re-imagining of how content is developed, distributed, and monetized. Mobile is poised to play an important role, especially in the development of shorter form, democratized content where the latest YouTube sensation can become a media star in a fortnight. A new/old player to keep your eye on here: Comcast (cue the eye roll). They are creating a new UI and ecosystem around the X1 Platform that others – notably Apple – have failed to deliver. Look at the softly launched Netflix integration on X1 and you'll see my point. There has been some worry that zero rating for video content, one way for wireless operators to gain some competitive advantage, might run afoul of network neutrality rules. We expect the Trump administration and a Republican-dominated FCC to either overturn net neutrality or tread very lightly.

3. The Rise of Artificial/Intelligent Assistants. This has its parallel with the rise of the mobile revolution, which was driven by the confluence of hardware, networks, processing, and app ecosystem. Here, it’s processing (per usual), cloud, big data, and voice-related technologies. We’re in the early innings of a re-imagined way in which we interact with devices – phones, TVs, and so on – and software/apps that are more proactive and do a much better job of talking to each other. This is one of the main reasons Google has intensified its push into the hardware business (Pixel phone, Google Assistant, Google Wi-Fi, Daydream VR headset and platform, new 4K Chromecast stick), with Google Assistant becoming more pervasive throughout the products.

4. New Network Economics. Leading network operators are in a race toward an evolved network. 'More agile and internet-like' is what you'll hear AT&T's John Donovan talk about when discussing ECOMP, AT&T's new 'network operating system' it hopes other operators will adopt. Another important part of this is altering network economics, which I still think needs to be a bigger part of the discussion. Mobile networks are going to have to handle much more video traffic in the coming years and at a substantially lower cost per GB delivered than they do today. A related question is how fixed and mobile networks coalesce as we move toward 5G and whether, circa 2020, households will still be paying for separate fixed and mobile subscriptions.

5. Back to the Future With Chat and Messaging. While the internet and telco giants work on their respective AI and SDN moonshots, the biggest thing in 2016 is turning out to be messaging and chat (here is a good piece in the WSJ from a couple of weeks ago by Christopher Mims). With apps getting into the game, these past few years have seen the messaging/notification world trending toward overload. Yes, these tools are becoming more important for workplace collaboration (Slack, Teams) but I think the longer game is about simplification and contextualization. And, over time, more functionality – paying for things, ordering things, predicting things – as an evolution from today’s app-driven framework that is starting to feel a bit cumbersome and stale. Uber’s new app provides a glimpse of where things are headed.

There’s no catchy term or moniker for this new phase because it applies to developments within the mobile ecosystem and to adjacent sectors that have a big effect on mobile. But these themes, added together, spell the biggest changes to the mobile sector since the iPhone/App Store/LTE perfect storm that emerged in the 2008-2011 timeframe and will define the mobile space for the next several years.

Unpacked for Friday November 18, 2016

Samsung's Acquisitions Point to an Increased Desire for Independence by Carolina Milanesi

This week was a pretty busy one for Samsung's M&A team. On Monday, Samsung announced it would acquire Harman International for $8 billion. Later in the week, on Wednesday, Samsung announced it was acquiring NewNet Communication Technologies (Canada) Inc.

In very different ways, both acquisitions point to a Samsung that wants to gain more and more independence from Google services in an attempt to strengthen its own ecosystem. This is driven not just by a need to differentiate from other Android makers but also by a need to be prepared in case remaining on Android is no longer an option.

Most coverage of the Harman acquisition focused on the car business for obvious reasons. For what it is worth, I think it is smart of Samsung to focus on the component side and position itself as a partner to the many car manufacturers rather than trying to build its own car. I am sure in South Korea alone, Samsung would find very interested partners in Hyundai and Kia. The other positive part of this acquisition is that components will end up in cars long before a finished product would get to the market, representing a much better short term opportunity for Samsung. Diversifying the client base for the component business is key as sales of smartphones slow.

The part of the acquisition less discussed has to do with the fact Harman International owns Harman/Kardon and JBL. There is an opportunity for Samsung to integrate new or superior sound technology in its devices – Huawei had recently announced at IFA that its MediaPad M3 tablet featured Harman/Kardon audio. There is also an opportunity with JBL to go after the smart speaker business, especially considering Samsung's other recent acquisition, Viv. Samsung has played with S Voice before with little success but it is clear digital assistants will play a big role in the future and, although I am sceptical Samsung could pull off an experience as strong as Google, Apple, Amazon, and Microsoft can, I do believe there is a lot of opportunity for a voice-first UI to benefit Samsung's products.

The acquisition of NewNet Communication Technologies (Canada) Inc. is less flashy but very interesting and could benefit both Samsung's consumer and enterprise plays. NewNet specializes in RCS infrastructure and services, which enable things like high quality voice calls, group calls, video calls, file sharing, and more. Of course, the first thought is this is an investment in acquiring the capabilities to build a response to Apple's iMessage and Google's Allo. However, this could also help Samsung in its move into the enterprise, as it could give it the opportunity to build a secure Slack competitor.

Success on the consumer side will mainly depend on two key points: how engaging and differentiated the experience is going to be, and how paranoid consumers are about using Google Allo. It will also be interesting to see whether Samsung creates an app that is compatible with all Android devices or limits it to Samsung-only products.

On the enterprise side, I think there is a clear opportunity, as Microsoft laid out in a lot of detail a few weeks ago when it launched Microsoft Teams. That solution, however, seemed catered more to very large enterprises, possibly leaving smaller businesses to look for something else. For many, that something else today is Slack, a bring-your-own-app (BYOA) success story at work, especially among millennials. As Samsung pushes its enterprise effort well beyond MDM into a fuller enterprise platform, this seems like a perfect addition. We’ll see if Samsung feels the same as the first results of the acquisition start to surface.

Apple’s iPhone Supply Constraints Might Worsen Next Year by Jan Dawson

Bloomberg is reporting Apple is planning to use OLED screens in at least some of next year’s iPhones but suppliers likely won’t be able to manufacture enough displays to outfit all new iPhones with OLED. Samsung appears likely to be the sole supplier and, given it has struggled to make enough screens for its own phones, this is likely to cause issues for Apple too.

We are, of course, still in the early stages of the current iPhone sales cycle, with the iPhone 7 and 7 Plus having launched within the last couple of months. But it has already become clear Apple is more supply constrained this year than it was last year, largely because it has moved to two new manufacturing elements which are simultaneously driving up demand and slowing down supply. The new dual cameras in the iPhone 7 Plus and the jet black finish available on some of the new iPhones are both more challenging from a manufacturing perspective and are also driving unusually high demand for the larger models. Apple’s comments on its recent earnings call suggest it may take longer to get supply and demand in balance this year than last year.

The shift to OLED would be a comparable one, potentially driving up demand for new phones while also making them harder to manufacture. Apple has worked very hard over the years to secure an adequate supply of various components and materials for its phones, sometimes buying up most of the global supply of these items. But this becomes much more challenging when the supplier is also one of Apple’s competitors, as is the case with Samsung. Apple simply can’t pay a premium to secure the totality of global supply from a company that also needs to supply its own mobile device arm. These obviously won’t be the first components Apple has bought from Samsung but, in past cases, it has had other suppliers to use as both a hedge and a source of leverage. That won’t be the case, at least at first, with OLED displays.

All of this means Apple might simultaneously be in a position to drive yet another massive sales cycle for the iPhone from a demand perspective but may struggle to supply enough devices to meet that demand. One way to solve the problem is to make OLED a feature exclusive to the Plus-sized variant, much as Apple did this year with the dual cameras. For the last three years, Apple has made some features exclusive to the larger phones and it seems as though it wants to go further down this road, whether driven by supply constraints or by a desire to raise average selling prices and thereby drive faster revenue growth.

In all this, it’s interesting that Tim Cook, who oversaw the supply chain under Steve Jobs and led many of the strategies Apple pursued in the past, is now overseeing this challenging shift to components where that strategy can’t be pursued as easily. For all the criticism of Tim Cook that’s come from some quarters on the basis that he’s not a visionary but an operations guy, this is a big operations issue that Apple really needs to crack. So it seems well suited for his talents. It’ll be interesting to see how Apple resolves some of these challenges in the coming years.

Apple’s Design Book – by Ben Bajarin

On a weekly basis, we are reminded of how many people seem to simply misunderstand Apple. You could argue the company itself is a Rorschach test, but so is each and every product. I’m not going to be blind to the reality that there are worthwhile things to criticize Apple for. But, too often, the things people criticize are the wrong places to spend that energy. The “Designed by Apple in California” book is the latest example.

I know folks don’t like, or don’t fully understand, this analogy, but Apple, in many ways, is similar to a high-end car company. Perhaps Porsche comes to mind, but so could Ferrari, Mercedes, etc. These are automotive brands whose iconic designs set them apart. Not everyone can afford one, nor does everyone plan to buy one, but their designs are nearly universally appreciated. Pay close attention to any one of these examples (throw in fashion brands or even high-end watch brands) and you will find similar design books sitting on the shelves of designers’ offices. The reason is inspiration. Anyone who does end-product hardware or industrial design will have a range of books, not unlike Apple’s Designed by Apple in California book, and they will lean on them for inspiration as they work out a design problem or look for a new idea.

Subtly, this book is Apple’s attempt to give back to the design community, in hopes of sharing how its ideas have evolved and showing many of the unique ways it has tried to create iconic products and designs. At $300, this product is not targeting everyday folks and it is foolish to think it is. People often forget there is a culture around Apple for some people; they appreciate the brand and identify with the emotional, creative, or other parts of the company. It is not uncommon to see an owner of a Porsche or a Ferrari also have similar printed materials, either books or pictures hanging on their wall, because of their self-identification with the product and what it stands for. Apple is very much like this and it is one of the strongest things the company has going for it — the emotion it brings out in a portion of its base who self-identify with Apple’s culture.

Intel Unveils Broad AI Vision – by Bob O’Donnell

At a special event in San Francisco, Intel debuted a sweeping new vision for the role it believes it can play in the rapidly evolving and highly topical field of Artificial Intelligence. The company put together an impressive set of messages covering everything from definitions for the still little-understood fields of AI, machine learning, and deep neural networks, through silicon announcements, software unveilings, and new customer partnerships, to even a new sub-brand.

The company made clear that it believes the AI market is still in its infancy and that there are plenty of opportunities for it to make a very significant mark. That last point is important because there has been a great deal of press and attention to date on the role GPUs can play in AI and deep learning, driven primarily by Nvidia’s strong messaging work.

At Intel’s event, the company discussed a variety of different efforts it is making to impact the AI market—an opportunity the company clearly sees as strategic to its long-term growth. On the silicon side, the company unveiled a new chip code-named Lake Crest, expected in the first half of 2017, which uses the work done by Nervana Systems, the AI company Intel purchased earlier this year. The new chip architecture is specifically optimized for deep learning algorithms, includes 32 GB of high-bandwidth memory (HBM2), and offers high-speed I/O and proprietary chip-to-chip protocols to handle very large deep neural network models.

Intel plans to use the Nervana sub-brand to help unify all its AI silicon and software efforts. Speaking of which, the company also described a complex set of software offerings that are designed to let data scientists pick from a variety of open source AI frameworks, including the company’s own Neon framework, which came as part of the Nervana acquisition. Essentially, Intel has created some core software that will optimize algorithms created in any of these frameworks to run quickly and effectively on a range of Intel hardware—from x86 CPUs, through Xeon Phi chips to Altera FPGAs and, eventually, to the Lake Crest family of AI chips.

In addition to the products, Intel announced several partnerships with companies such as Google and the insurance company USAA to highlight its efforts. It also talked about a number of socially relevant efforts to use AI for good, such as working with cancer researchers and the National Center for Missing & Exploited Children.

While the event had a bit of a “drinking from a firehose” burst of information, it’s clear Intel sees a strong opportunity for itself in AI moving forward.

The Click-Bait Business Model

One thing that has become abundantly clear as I reflect on the media landscape, post-election here in the US, is how disastrous the click-bait business model has become. When we started Tech.pinions, it was a direct attempt to add sanity back into the public discourse against the damage I was seeing done by tech publications thriving on that model. I won’t link to all of them but, if you go back into the Tech.pinions archives in 2011 and 2012, you will see posts from Tim, myself, and Steve Wildstrom (then the core team) calling out specific articles from publications and trying to shine a light on the deception they were posting.

The click-bait business model inherently encourages media outlets to be disingenuous with the truth. The danger in so many of these articles is they contain elements of truth but that truth is distorted and accompanied by a jaded opinion or bias. Sometimes, this is referred to as “Yellow Journalism”, defined as:

Yellow journalism, or the yellow press, is a type of journalism that presents little or no legitimate well-researched news and instead uses eye-catching headlines to sell more newspapers.[1] Techniques may include exaggerations of news events, scandal-mongering or sensationalism.[1] By extension, the term yellow journalism is used today as a pejorative to decry any journalism that treats news in an unprofessional or unethical fashion.[2]

journalism that is based upon sensationalism and crude exaggeration.

Examples of this type of journalism and sensationalist headlines date back many years as tactics to sell more newspapers, so it is no surprise they came to the internet. However, the practice grew to an entirely different scale thanks to the web. If you can write a good headline, tell part of the truth or bend the truth, or do all of it to tell an audience what they want to hear, then you can have a good business in the click-bait era of online publishing. Since the internet enabled more free, ad-supported websites, the click-bait tactic became the way to get the eyeballs necessary to sustain a business. In 2009-2010, I helped run a particular tech website where, while it was not a total click-bait site, I still saw firsthand how profitable a website could be when it used Google Ads smartly and drove massive pageviews on a daily basis. The click-bait business model can be massively profitable but it also comes at a cost I’m not sure the owners of said sites care about.

The free, ad-supported business model is not inherently bad, but it opened the door to websites that are genuinely destructive. We need to preserve the opportunity for anyone to have a voice on the internet, and the business opportunity to make a living, but we also need to be wise about these types of sites and the underlying agendas often found in them.

This is true across all verticals — sports, politics, news, tech, etc. As recent reports have discovered, the click-bait business model has led to a rise in entirely false stories and spoof websites imitating legitimate news sites. For example, this website, which appears to be trying to pass itself off as an ABC News site, was continually linked to by folks in my Facebook feed. People were linking to articles about politics, the election, or false news items, and saying they came from “ABC News” so they must be legitimate. In fact, the site has no affiliation with ABC News. This is just one concerning example of how easy it is to thrive in a click-bait world and deceive the public.

For companies like Facebook and Google, this is a tricky balance. In no way do we want to limit our First Amendment right to free speech. This article from Ben Thompson yesterday on the topic of fake news was apt. The last sentence in particular:

It’s tempting to make a connection between the Miller fiasco and the current debate about Facebook’s fake news problem; the cautionary tale that “fake news is bad” writes itself. My takeaway, though, is the exact opposite: it matters less what is fake and more who decides what is news in the first place.

I am in 100% agreement that it is a more dangerous precedent to allow a third party to determine what is and is not news. It is, however, unfortunate that the click-bait business model has made it more difficult for regular people to distinguish between what is true and what is not or, in this case, between the facts of a story and spin, half-truth, or fabrication.

I have no idea what the solution is. I am encouraged by the trend of growing subscriptions to sites like the New York Times, the Wall Street Journal, and others, which will hopefully fuel a new golden era of journalism. I know many people believe some of these news sites lean one way or the other but I’m hoping this entire election process was a wake-up call to the news industry to give the whole truth to the public — as hard as that may be for both the news outlets and their audiences to hear.

I am an optimist and I like to look for silver linings. I’m hoping the big observations we can make today about the state of news and media can lead to a renaissance of fantastic journalism and writing, where deep expertise and domain-specific knowledge paired with fantastic writing and storytelling can again become part of the mainstream media. But we need to educate readers and/or somehow eliminate the click-bait business model or it may never happen.

Four Ways to Deal with Fake News Online

“Fake news” is the phrase du jour across the political, media, and technology domains over the past couple of weeks, as a number of people have suggested false news stories may have swung the result of the US presidential election. There seems to be widespread agreement something more needs to be done and, though initial comments from Facebook CEO Mark Zuckerberg suggested he didn’t think it was a serious problem, Facebook now appears to be taking things more seriously. Even with this consensus on the nature and seriousness of the problem, there’s little consensus so far on how it’s to be solved.

As I see it, there are four main approaches Facebook and, to some extent, other companies which are major conduits for news can take at this point:

  • Do nothing – keep things more or less as they are
  • Leverage algorithms and artificial intelligence – put computers to work to detect and block false stories
  • Use human curation by employees – put teams of people to work on detecting and squashing false stories
  • Use human curation by users – leverage the user base to flag and block false content.

Let’s look at each of these in turn.

Do nothing

This is in many ways the status quo, though it’s becoming increasingly untenable. Through a combination of a commitment to free and open speech, a degree of apathy, and perhaps even despair at finding workable solutions, many sites and services have simply kept the doors open to any and all content, with no attempt to detect or downgrade that which is not truthful. Mark Zuckerberg has offered in Facebook’s defense the argument that truth is in the eye of the beholder and that to take sides would be a political statement in at least some cases. There is real merit to this argument – not all the content some people might consider false is factually so and, in some cases, the falsehood is more a matter of opinion. But the reality is that much of the content likely to have most swayed votes is demonstrably incorrect, so this argument has its limits. No one is arguing Facebook should attempt to divide one set of op-eds from another, merely that it should stop allowing clearly false and, in some cases, libelous content.

Put the computers to work

When every big technology company under the sun is talking up its AI chops, it seems high time to put machine learning and other computing technology to work on detecting and blocking fake news. If AI can analyze the content of your emails or Facebook posts to serve up more relevant ads, then surely the same AI can be trained to analyze the content of a news article and determine whether it’s true or not. I am, of course, being slightly facetious here – we’ve already seen the failure of Facebook’s Trending Stories algorithm to filter out fake stories. But the reality is computers likely could go a long way toward making some of these determinations. Both Google and Facebook have now banned their ad networks from being used on fake news sites, so it’s clear they have some idea of how to determine whether entire sites fit into that category. It shouldn’t be too much of a leap to apply the same algorithms to the News Feed and Trending Stories. But it’s likely computers by themselves will produce both false positives and false negatives. The answer almost certainly isn’t to rely entirely on machines to make these determinations.
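
To make that idea a bit more concrete, here is a minimal, purely illustrative sketch of the kind of text classifier such a system might start from. This is not Facebook’s or Google’s actual approach; the labeled examples, features, and threshold are placeholder assumptions, and a real system would need vastly more data plus human review of its output.

```python
# Illustrative only: a tiny "fake news" classifier trained on hand-labeled headlines.
# Real systems would use far more data and richer signals (source, sharing patterns);
# this just shows the basic machine-learning shape of the idea.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled_articles = [
    ("Shocking: celebrity endorsement that never actually happened", 1),  # 1 = known false
    ("City council approves next year's transit budget", 0),              # 0 = verified true
    ("You won't believe what this miracle cure does", 1),
    ("Quarterly earnings in line with analyst expectations", 0),
    # ...in practice, many thousands of labeled examples
]
texts, labels = zip(*labeled_articles)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def probably_false(article_text, threshold=0.8):
    """Flag an article when the model's estimated probability of 'false' is high."""
    prob_false = model.predict_proba([article_text])[0][1]
    return prob_false >= threshold
```

Even a toy like this makes the trade-off visible: wherever you set the threshold, you are choosing between letting some fakes through and wrongly flagging legitimate stories.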

Human curation by employees

The next option is to put employees to work on this problem, scanning popular articles to see whether they are fundamentally based on fact or fiction. That might work at a very high level, focusing only on those articles being shared by the greatest number of people, but it obviously wouldn’t work for the long tail of content – the sheer volume would be overwhelming. Facebook, in particular, has tried this approach with Trending Stories and then, in the face of criticism over perceived political bias, fired its curation team. Accusations of political bias are certainly worth considering here – any set of human beings may be subject to their own personal interpretations. However, given clear guidelines that err on the side of letting content slip through the net, they should not be prohibitive. The reality is, any algorithm will have to be trained by human beings in the first place, so the human element can never be eliminated entirely.

Crowdsourcing

The last option (and I need to give my friend Aaron Miller some credit for these ideas) is to allow users to play a role. Mark Zuckerberg hinted in a Facebook post this week that the company is working on some projects to allow users to flag content as being false, so it’s likely this is part of Facebook’s plan. How many of us during this election cycle have seen friends share content we know to be fake but were loath to leave a comment pointing this out for fear of being sucked into a political argument? On the other hand, the option to anonymously flag to Facebook, if not to the user, that the content being shared was fake might be more palatable. If Facebook could aggregate this feedback in such a way that the data would eventually be fed back to those sharing or viewing the content, it could make a real difference.

Such content could come with a “health warning” of sorts – rather than being blocked, it would simply be accompanied by a statement suggesting a significant number of users had marked it as potentially being false. In an ideal world, the system would go further still and allow users (or Facebook employees) to suggest sources providing evidence of the falsehood, including myth-debunking sites such as Snopes or simply mainstream, respectable news sources. These could then appear alongside the content being shared as a counterpoint.
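
For illustration, the aggregation step behind such a health warning could be as simple as the sketch below. The thresholds and field names are my own assumptions, not anything Facebook has described; the point is that a warning only needs a flag count, a view count, and a cutoff.

```python
# Illustrative sketch of the crowdsourced "health warning" idea: show a warning
# once enough viewers (not just one or two) have flagged a shared story as false.
# The minimum-audience and ratio cutoffs here are arbitrary assumptions.
def needs_health_warning(views, flags, min_views=1000, flag_ratio=0.05):
    if views < min_views:            # too small an audience to judge reliably
        return False
    return flags / views >= flag_ratio

# Example: a story seen by 12,000 people and flagged by 900 of them (7.5%)
print(needs_health_warning(views=12000, flags=900))   # True -> attach the warning
```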

Experimentation is the key

Facebook’s internal motto for developers for a long time was “move fast and break things” though it’s since been replaced by the much less iconoclastic “move fast with stable infrastructure”. The reality is news sharing on Facebook is already broken, so moving fast and experimenting with various solutions isn’t likely to make things any worse. The answer to the fake news problem probably doesn’t actually lie in any of the four approaches I’ve proposed but in a combination of them. Computers have a vital role to play but need to be trained and supervised by human employees. For any of this to work at scale, the computers likely also need training from users, too. But doing nothing can no longer be the default option. Facebook and others need to move quickly to find solutions to these problems. There will be teething problems along the way, but it’s better to work through some challenges than throw our hands up in despair and walk away.

What Lessons can the Tech World Learn from 2016 Pollster Failings?

Two weeks before the election, I traveled to Maine to attend a conference. I picked up a car in Boston and drove through Massachusetts, New Hampshire, and Maine. Coming from California, which was mostly “Clinton country,” I had seen very few signs or bumper stickers promoting any candidates, at least here in Silicon Valley.

But as I drove through these areas of the US, I found all types of Trump signs on people’s lawns and fences and bumper stickers on cars. I did see a few Clinton signs, but Trump signs were the dominant ones displayed in the cities I drove through. In fact, as I was driving up Highway 1 to Freeport, Maine, home of L.L. Bean, I drove past a group of about 18 people waving signs and promoting Trump.

This surprised me. Sitting in my Silicon Valley office, I was pretty insulated from how the rest of the US was perceiving both of these candidates and, like a lot of people, I mostly trusted the media and the polls to guide my understanding of how this election was evolving.

As I spoke to people in these three states, which, interestingly enough, voted mostly for Clinton, I saw more passion in the followers of Trump than I did for Clinton. More importantly, when I asked why they were voting for Trump, their anger against the Washington elite and a political landscape they felt no longer represented them was top of mind. As I flew home from this trip, I reflected on the comments of the Trump supporters and, for the first time, I thought he had a better chance of winning the election than I had previously believed.

Now that Trump has won the presidency, I am seeing all kinds of media reports on why he won and how in the world the pollsters got this so wrong. While there have been a lot of explanations, one key observation, from Dean Baquet, executive editor of the New York Times, should be a warning to Silicon Valley and its view that the world starts and ends there.

Mr. Baquet praised his political team and other Times journalists for “agility and creativity,” citing articles about Mr. Trump’s taxes and Mrs. Clinton’s record in Libya. But in an interview in his office, he said, “If I have a mea culpa for journalists and journalism, it’s that we’ve got to do a much better job of being on the road, out in the country, talking to different kinds of people than the people we talk to — especially if you happen to be a New York-based news organization — and remind ourselves that New York is not the real world.”

I believe a similar parallel exists here in Silicon Valley. To paraphrase Mr. Baquet’s comment, “we need to remind ourselves that Silicon Valley is not the real world.”

I grew up in Silicon Valley and have seen its amazing growth. The technology that has come out of it has changed the world. But in some ways, we believe the world revolves around us and so much of our research and tech media is often just focused on those in the know instead of the real people who use the products.

I am reminded of something I learned early in my career when I often encountered a technology product that, to me, did not seem like a viable one. One that comes to mind is something I was shown at Xerox PARC in the late 1990s. Basically, it was a two-handed mouse for use as part of a new form of a user interface for computers. I, and many others, just could not see how this product made sense. But, when I asked the engineer who created it, his answer was, “because I could and I thought people might like it.”

To be fair, because of the globalization of tech, our technology is getting into the hands of people of all generations and income levels, but we still often create products that are too complicated and difficult to use, and many of those fall by the wayside.

Taking a hint from the New York Times editor, as tech researchers and marketing professionals, we need to get out more and really talk to the people who use the products to gain greater insight into what they want and what will work for them. Here at Creative Strategies, we try to do this often, including going to college campuses and talking with students directly or, when possible, visiting different cities around the US and talking to non-techies about their interest in technology, trying to dig deeper into their interests and demands.

I really think Silicon Valley needs to get beyond its insular way of thinking and start really listening to the people who will use what we create if we want to continue to see technology impact the world as we envision it can in the future. In our case, these polling failures have reinforced that we need to do more research in the real world most people live in and I hope more people in tech marketing do so as well.

The Perfect Ten of Wearables

Earlier this week, Bloomberg reported Apple is evaluating moving into digital glasses. According to “people familiar with the project who did not want to be identified” (when do they ever want to be?), the device would connect wirelessly to iPhones to show images and other information in the wearer’s field of vision. If there is a product and if Apple decides to actually bring it to market, it won’t happen before 2018.

It was not the news itself that made me think about this but rather the different comments I saw pop up on social media and in press commentary. They quickly pointed to a couple of interesting underlying misconceptions I thought it would be worth fleshing out.

Google Glass is not the Benchmark for Smart Glasses, Let’s Move On!
When we started talking about wearables, the list of devices was pretty long: bands, watches, pendants, straps, helmets, smart-fabrics, adhesive strips, cameras, and glasses. The initial vendor excitement was met with limited interest by consumers and, as vendors were trying to figure out what worked, we saw the focus centering more and more around the wrist. Google Glass very much helped that process of elimination.

But Google Glass’ flop does not mean there is no role for smart glasses. The key differentiating point between success and failure is the focus. Google Glass was not only early to market but it was also trying to be too many things at once, leaving users confused about its reason to exist. Google thought of Glass as a wearable in the same way we now think of a smartwatch – something we have on all the time. Yet, rather than focusing on a few specific tasks, Google Glass attempted to replicate many of the tasks our smartphones were performing. Glass was a camera, a search engine, an assistant, and one of the initial voice-first devices.

As people commented on the Apple rumor this week, many were quick to remind us of what Tim Cook said about Glass back in 2013 in an interview at D11:

“I wear glasses because I have to. I don’t know a lot of people that wear them that don’t have to. They want them to be light and unobtrusive and reflect their fashion. … I think from a mainstream point of view [glasses as wearable computing devices] are difficult to see. I think the wrist is interesting. The wrist is natural.”

As someone who has been wearing glasses since the age of three, I can certainly relate to what Cook meant. Glasses are not something you want to wear every hour of your day. Yet, I have no problem wearing sunglasses or swimming goggles for specific tasks. The difference here is shorter periods of time, focused tasks, and a high return from the experience. This is what I think Apple would have in mind if it moves into this space. Plus, of course, a design that would appeal to the mass market and would not scream “tech”.

Snap – formerly known as Snapchat – has taken part of the idea of Google Glass and made it commercially appealing to millennials. Spectacles have a funky design without being obnoxious, they are affordable, and the task they perform is perfect for the device. Snapping videos without being intrusive, so as to capture the true moment, and doing it fast (all you need to do is look in the direction of the action) sounds very simple and perfect for how Snapchat is used.

I Say Wearables, You Say Smartwatch

While one does wear Spectacles, I would not put them in the wearables category, just as I would not put a GoPro among wearables. When Cook said, “The wrist is natural”, he was looking at the Nike FuelBand he was wearing. So it would be safe to assume that, while he voiced his concerns about convincing people to wear something on their wrist, his focus was more about delivering a device that could be with us all the time so it would learn from us and increase its value to us over time.

When we talk about wearables today, we really mainly talk about fitness bands and smartwatches and I see their role being very different from Google Glass. Their focus is to capture data as much as to display data. To be transmitters more than receivers. Think about all the sensors these devices have that help capture information which is then processed and used in different ways. Today the best showcase is fitness but, with time, the use cases will increase. They are certainly not portrayed as all-powerful computing devices. They are companion devices, especially for Apple, that might alleviate some of the load our smartphones have been carrying for so long. Interestingly, this view was not initially shared by the Android Wear team, which seemed to pitch wearables in a very similar way to Google Glass when it came to a do-it-all approach to replace most, if not all, of what your phone does. The longer and more consistently you wear these devices, the greater the benefit. Which means they have to be extremely comfortable, somewhat fashionable and, if failing on the fashion part, they should almost disappear. While they might take over from your phone at times, they are not designed to be your main computing device for any long period. The wrist is the ideal location for both collecting heartbeats and allowing you a quick peek at short and timely information.

AR and VR goggles are very different. Similarly to Spectacles, while you wear Oculus Rift, Gear VR, Google Daydream, or Microsoft HoloLens, their main function is to display content. For that reason, proximity to your line of vision is critical. If you think about the main difference between VR and AR/mixed reality, the former is about you being in a fully immersive world different from the one you are actually in, while the latter is about enhancing your current world. You can see how different kinds of glasses or goggles will be required. Given Tim Cook’s public position on how AR is more interesting than VR, I can see how some of his comments about designing something people want to wear still apply.

More than the design, however, I would expect Apple to prioritize the experience in regard to safety and privacy. This might mean the use cases, at least initially, are limited either by experience or by location – like a car DVD player that will not play on the dashboard screen while you are driving.

If it is true the glasses will connect to your iPhone, it seems Apple is trying to avoid the battery issues Google Glass faced while, at the same time, showing it does not think slapping your phone in front of your face is the right thing to do, even when that phone is an iPhone.

The Wearable Market of the Future
Many make projections about what the wearable market will look like by 2025, and the definitions of what is included are almost as varied as the numbers thrown around. The wearable market will be much more complex than the PC and smartphone markets ever were when it comes to which devices should and should not be included.

Whether or not Apple is really working on glasses will be confirmed in due course. But the fact it might be considering glasses does not negate what Cook said back in 2013. To be a wearable device in the strictest sense, you need to be able to wear it 24/7 or very close to it. Wearing a device for part of the day does not make it a wearable, any more than being able to move an all-in-one desktop from room to room makes it a mobile computer. I see wearable technology as the next phase of “connected anytime, anywhere” — the main task of the devices we will be wearing will be to feed into AI and big data more than to feed off of them.

The State of the Smartphone Market in Q3 2016

You’ve probably seen headlines over the last couple of weeks about Apple accounting for over 100% of the profits in the smartphone business in the last quarter. I’m never a fan of that particular metric because it excludes all the smartphone vendors that don’t publicly report their financials, including several of the largest. Instead, I prefer to drill down a bit and look at what’s really happening beneath the surface, both in terms of shipments and in terms of the financials associated with those shipments.

Shipments – the big three steady, lots of movement below

When it comes to shipments, we’re seeing a fairly clear pattern emerging – the big three globally have been steady for some time now but there’s lots of movement below that. The big three are Samsung, Apple, and Huawei (in that order) and they’ve been in those positions for the last six quarters. Depending on who you believe, Samsung may have briefly ceded its number one spot to Apple two years ago when the iPhone 6 launched, but it’s safe to say Samsung has been consistently in that spot for a very long time. Apple has also been the consistent number two for a long time, while Huawei only ascended consistently to the number three spot last year. There are big gaps between these three, though, with Samsung typically tens of millions ahead of Apple in terms of shipments (except for fourth quarters) and Apple, in turn, 10-20 million ahead of Huawei except in the fourth quarter, when Apple sells far more than in the other quarters.

However, what’s become really interesting to watch over the last couple of years is all the turmoil in the next few spots in the smartphone rankings. The two biggest vendors to look at are Oppo and Vivo, both from China, which have come seemingly out of nowhere to take the fourth and fifth spots, pushing others further down the ladder. Xiaomi, meanwhile, which has flirted with the number three and even the number two spots, has fallen down to fifth or worse. Perhaps more remarkably, of the top 10 vendors by shipments, seven are Chinese, with only Samsung, Apple, and LG the exceptions. Three countries now make up the entire top 10. But it’s also worth noting that three of the largest vendors – Oppo, Vivo, and OnePlus – are all owned by the same company, BBK Electronics.

The chart below shows how rankings have changed over the past few years between the largest vendors.

Financial reporting and margins

Of course, shipments are only one metric and say nothing directly about finances. Yes, there’s something of a correlation between scale and profitability but it’s far from linear. Much depends on which segments these companies target, how differentiated their offerings are, and the selling prices of their devices. Those targeting the premium segment tend to have far higher margins – especially at scale – than those targeting the mid-market or low end. Though Apple is exclusively focused on the premium market and Samsung sells its Galaxy S and Note offerings into that market too, much of the volume among these top ten is very much low-end stuff. As a result, margins for even the largest players are likely to be low, especially since they’re mostly selling relatively undifferentiated Android phones.

Sadly, we don’t have financial reporting for many of these companies, a number of which are privately held or subsidiaries of larger companies (or both). However, margins for these mobile businesses and, in Apple’s case, its overall business are shown in the chart below:

[Chart: operating margins for the major smartphone vendors’ mobile businesses]

As you can see, there’s been a fairly clear dichotomy between the two makers that sell premium smartphones at scale and the rest of the market. As I wrote about previously, Samsung saw a big dip in margins in its mobile business this past quarter due to the Note7 recall but, in general, it is well ahead of most of the other vendors. However, it’s also well behind Apple’s overall margins, which are relatively representative (though perhaps a little low) of its iPhone business. This past quarter, HTC, LG, and Lenovo were all in the red, and several of them have been for some time. Sony recently bounced back into the black, having spent most of its time since Ericsson’s exit from their joint venture generating losses. It is now finally turning things around thanks to its focus on the premium segment, which has resulted in lower shipments but higher margins. LG, which had generated some positive momentum in 2015, has slipped further behind again and is back in the red. HTC continues to be only marginally in the market, with very low shipments and significant losses over the past couple of years.

A preview of Q4

I’ve focused here mostly on the historical picture but it’s worth thinking briefly about next quarter, too. We might just see Apple pip Samsung to the number one spot this quarter, as Apple rebounds from a year of shrinking sales and Samsung sees a continued lull following the Note7 fiasco. Behind them, the next three spots are likely to remain unchanged and I suspect the broad financial picture will remain fairly consistent as well. The big question for some of these vendors is how long they can remain in the market with, in some cases, substantial losses. As the upstart Chinese vendors continue to take share of the Android market, some of the more established vendors will definitely need to consider following Sony’s example and refocus on the premium market, or get out entirely. It’s going to be almost impossible for them to compete in a race to the bottom.

The MacBook Pro and Touch Bar Experience

The Big Upgrade
Arguably, the new MacBook Pros are a noticeable boost for those who need to upgrade their Mac. My MacBook Pro is a 2012 model and, while it works fine for most tasks, it was starting to show its age in many ways. I’ve been using the new 15” MacBook Pro with the Touch Bar. When I first saw the screen, I felt like it was larger than my older 15” MacBook Pro — in part due to the screen resolution but also the slightly smaller bezels on the left, right, and bottom. The new MacBook Pros are also noticeably thinner and more compact than the previous designs. The battery life, even for a pro machine, is similar to that of the MacBook Air or even the iPad Pro for continual use, both of which get a solid full day of 8+ hours of working computing time on a single charge. Lastly, the keyboard is one of my favorite features. I’m picky when it comes to typing because I am a touch typist. I write more than 5,000 words a week and how the keyboard feels is crucial to me. To be honest, I did not like the keyboard on the original MacBook but the second-generation butterfly mechanism is dramatically improved in my opinion, and I love not just the feel but the sound.

The Touch Bar
Let’s talk about the Touch Bar. Without question, the Touch Bar is an enormous upgrade from physical function keys. What was once a static, fixed row of 13 function keys becomes an infinite set of dynamic possibilities once that row is replaced by a strip of glass and clever software. After just a few minutes of using it, you quickly wonder why this had not been done ages ago.

That being said, there is a learning curve. The Touch Bar represents a dynamic shift in workflow. You have to begin the journey of discovery to understand all that it can do. This learning curve is short but, since the Touch Bar is capable of so much, I found myself experimenting quickly with all it could do to understand how to use it to enhance my work.

One of the things that stood out quickly was how many actions the Touch Bar could absorb that usually required more work or trackpad swipes. In fact, there are many times when you can see an application smartly take advantage of the Touch Bar and dramatically limit, if not eliminate, the need to use the trackpad for many tasks. The power of the Touch Bar is in its ability to contextually understand what you are doing, or the app you are in, and offer up the most common buttons or menu items. For example, while writing this post, common text formatting options are right above my fingers instead of off to the side or at the top of the application window.

[Image: the Touch Bar displaying contextual text formatting options]

In this use case, formatting text still requires the trackpad to select the text elements I want to format. But having the Touch Bar display the likely formatting actions is faster and more efficient than selecting the text and then moving the mouse over to a menu at the right or top of the screen to pick a formatting option. I know this seems like a simple use case, but the efficiency of doing it this way is quite an improvement in workflow. This is particularly true in areas where options may be several layers deep in a menu. Actions like these highlight how much efficiency the Touch Bar adds to workflows when the software, or the user, customizes it to take advantage of the dynamic capabilities it offers.

After the experimentation stage, where you spend time trying out and learning all the things it can do, comes the new-habits or new-workflow phase. I’ve already found some use cases where my default behavior is tapping the Touch Bar for actions instead of using the trackpad. A simple example of this is with Safari. I now use the Touch Bar exclusively to switch or open tabs, search the web, etc. Again, this seems simple, and it is, but in terms of the speed of the action you want to perform, it is actually quite efficient.

Perhaps the best way to think about this, from a workflow perspective, is finger travel vs. mouse/trackpad travel. To accomplish some of the simple use cases I mentioned above — text formatting, selecting the text, scrolling over to the menu item on the top of the screen to select an option, scrolling to the menu on the right of the screen to further format — requires quite a bit of mouse travel up, down, left and right. The Touch Bar removes many use cases where the mouse has to travel distances on the screen and can be done with only slight travel of the fingers up to the Touch Bar. In all these experiences, the amount of time it takes to accomplish the task is less, thus making for more efficient workflows due to less travel of the hands or mouse.

Having used the iPad Pro as a primary computer for extensive lengths of time, as well as many Windows-based touchscreen PCs, I’ve seen similar workflow benefits reveal themselves once you limit how much you need to use the mouse for scrolling or selecting. The speed of a tap is often faster than the time it takes to move a cursor. The difference here, between a touch screen workflow and a Touch Bar workflow, is fundamentally limiting how far your fingers or hands need to go. This is why I believe Apple is holding to its philosophical viewpoint of not adding a touch screen to the Mac: to limit the amount of travel that fingers, hands, or arms need to do to complete a task. Apple is focusing on keeping the action where the fingers are and limiting the amount of movement and time it requires to complete steps in your workflow.

To Touch Bar or Not
I started this piece saying the MacBook Pros are a big upgrade in many ways over their older designs for the display, speed and performance gains, more compact industrial design, and all the added perks and features a new machine brings. According to our internal Creative Strategies research, approximately 19% of Mac owners have a Mac that is five years old or older. This compares to roughly 21% of consumers with Windows PCs 5 years or older. There are undoubtedly many people in need of an upgrade and, for them, the new MacBooks are a solid one to consider. The question remains whether those folks spend the extra money to get a Touch Bar or non-Touch Bar version of the new MacBook Pros. Given the incremental price difference, I imagine this question is top of mind. Here is how I’d think about it.

From my experience, the Touch Bar adds significant value regarding efficiency and workflow, given how dynamic and predictive it can be. But, to truly make a case for this feature, you have to be willing to bet on Apple’s third-party software development community optimizing their Mac-based apps, or creating new ones, to benefit from it. If you are willing to bet Mac software developers will take advantage of the Touch Bar, then I wouldn’t hesitate to spend the extra money. There are so many unique opportunities for software developers to make it faster and easier for their customers to get work done by integrating the Touch Bar into their software. I’ve already experienced this with Apple’s first-party software, and I’m excited to see what third parties do with the Touch Bar. It’s one of those features that, once you start using it, you want to use it with all your Mac software — but we just aren’t there yet. Many developers, like Microsoft, Adobe, and others, have already committed to releasing updates which take advantage of the Touch Bar, so clearly there will be apps beyond Apple’s. This is an experience that will only get better as software developers step up to the opportunity and create new experiences.

It will be interesting to see how Apple moves the Touch Bar forward as well. Given the predictive, on-device machine learning features Apple integrates into iOS so the software adapts and conforms to your unique needs, it is possible the company will apply this same approach to the Touch Bar. Perhaps, over time, macOS can learn my core behaviors and most common tasks and workflows and have the Touch Bar adapt, becoming even more predictive and proactive in offering the software buttons or menu items I need in the context of my work. Right now, the developer is in control of applying the right contextual buttons to the Touch Bar, or the user does it through full customization. It will be interesting to see, in the future, whether a form of artificial intelligence can play a larger role in showing me Touch Bar options when I need them, based on my unique workflows.

Touch Bar vs. Touch Screen
Inevitably, any element of this discussion will shift to the difference in philosophy between Microsoft with Windows and touchscreen-based notebooks and desktops and Apple’s philosophy with touch-screen tablet computers and Touch Bar-based notebooks. Ultimately, in my opinion, the fact there are so many options is what matters and what is exciting. These companies are showcasing their best attempts to help you get more out of your computer and do your job more efficiently and more productively. What consumers need to decide is which style is best for them and their workflow.

In both the case of the Touch Bar and touch screen PCs, what matters is not the philosophical differences but what software developers do to leverage the unique hardware that will be on the market from Apple and Microsoft’s partners. A lot of jobs exist in the world that need something more than a smartphone or a tablet and, for those folks, they have more choices than ever to help them get their job done.

Apple’s Newest Product: Used iPhones

Apple this week started selling refurbished iPhones in the United States through Apple.com. Of course, wireless customers have long had the opportunity to buy used iPhones (and other brands) from their carriers and other retailers and online sellers. However, the fact Apple is now offering the devices direct (along with long-available refurbished products such as Macs, iPads, and iPods) is notable. It reflects both the maturing nature of the market and Apple’s desire to put the iPhone into the hands of more budget-constrained smartphone buyers while still making enviable margins.

The Collapse of Subsidy Models

Just a few years ago, the U.S. smartphone market was almost entirely subsidy driven, which meant few people actually knew how much a smartphone cost. What they knew was, roughly every two years, they could pay $200 to their carrier and get a new smartphone. When U.S. carriers began to move away from the subsidy model, many people were shocked to realize that they were paying upwards of $600 or $700 over the lifetime of that phone. As that realization took hold, the market began to bring forward a long list of new financing options.

Today, buying a smartphone is a lot like buying a car. If you’ve got the cash, you can pay for the whole thing upfront or you can pay through an installment plan (essentially a loan) or you can sign up for something very akin to a car lease in which you pay a monthly fee and get a new phone roughly every year. All of the plans effectively shift the cost of buying a new phone around but none really help lower the cost. That’s why the refurbished market is important. While there are fewer options when purchasing a refurbished phone (you typically pay all up front), the savings can be substantial.
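
As a rough, purely illustrative comparison (the prices and monthly fees below are assumptions, not actual carrier or Apple pricing), the arithmetic looks something like this:

```python
# Illustrative arithmetic only: none of these figures are official prices.
# The point is that installment and lease plans spread the cost of a roughly
# $650 phone over time rather than actually lowering it.
full_price = 650.00                 # pay the whole thing up front
installment_total = 27.08 * 24      # ~$27/month for 24 months, interest-free
one_year_of_leasing = 32.00 * 12    # ~$32/month, then trade the phone back in

print(f"Up front:             ${full_price:,.2f}")
print(f"24-month installment: ${installment_total:,.2f}")      # roughly $650 again
print(f"One year of leasing:  ${one_year_of_leasing:,.2f} (and you never own it)")
```

However you structure the payments, the phone still costs roughly the same; only a genuinely cheaper device, such as a refurbished one, changes the total.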

Certified Pre-Owned Luxury Phones

Back in September 2015, Apple launched its own iPhone Upgrade Plan along with its iPhone 6S lineup, directly competing with its carrier partners. Apple customers could pay a monthly fee (starting at $32 per month) and then, one year later, swap that iPhone for the next one. The iPhones Apple is now selling online are likely among the first batch of phones it collected through that program.

Companies such as Gazelle have long bought people’s old smartphones but, in the early days, those phones were typically several years old, which meant companies often shipped them to emerging markets where they were either resold or scrapped for parts. One-year-old phones, on the other hand, still retain a great deal of their value, which is why Apple got into the game itself. In fact, based on IDC’s estimates, Apple is adding significant additional dollars of per-device profit for every phone it first ships out as part of the upgrade program and then reclaims and resells a year later. All the while, it’s also increasing the number of iOS users and bolstering services revenues as a result.

So, for example, right now you can buy a 16GB iPhone 6S for $449, $100 less than the same new model on BestBuy.com (Apple cleverly doesn’t offer a new version of the 6S on its site with the same memory configuration, but a 32GB version of the 6S runs $649 on Apple.com). Similarly, a refurbished 64GB 6S Plus sells for $589, which is $750 new on BestBuy.com.

The result: A buyer who is interested in buying an iPhone now has a wider range of options available to them. The entry-level price for what was Apple’s top-of-the-line phone a little more than a year ago now starts at $449 instead of at $549. Not exactly cheap, but certainly more attainable for somebody who is able to pay the entire cost up front. At some point, I would expect Apple to start selling refurbished versions of the iPhone SE, which has a starting price of $399 new. We could expect prices in the $299 range or lower.
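
Using the figures above, the savings work out as follows (a quick back-of-the-envelope calculation based on the prices already cited, not additional pricing data):

```python
# Savings on Apple's refurbished iPhones versus the new prices cited above (BestBuy.com).
examples = {
    "iPhone 6S 16GB (refurb $449 vs. $549 new)":      (449, 549),
    "iPhone 6S Plus 64GB (refurb $589 vs. $750 new)": (589, 750),
}
for label, (refurb, new) in examples.items():
    saving = new - refurb
    print(f"{label}: save ${saving}, about {saving / new:.0%} off")
```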

Most companies that sell refurbished phones closely inspect those phones before reselling them, offering what amounts to the certified pre-owned checklist at a used car dealer. One of the advantages of buying refurbished from Apple, however, is the company not only certifies the phones but also installs a brand new battery and outer shell. The new battery piece is huge, as that’s clearly one of the areas of most concern when buying a used device. And, by including the new battery (which costs Apple very little), the company ensures the buyer has a good experience for the life of the product.

Who Buys Used?

Techies might not find the idea of buying a used device appealing (they’re likely the ones signing up for the annual refresh and creating the supply of one-year-old phones) but a growing percentage of regular consumers clearly see the value. In a recent IDC survey of US smartphone users, 26% said they were “somewhat likely” to buy a well-maintained, one-year-old phone if the price were lower; another 14% said they were “highly likely” to do so. Among current iPhone owners specifically, those two percentages were 25% and 9%.

While there’s clearly a market for refurbished iPhones here in the US, the larger play for Apple long-term is its ability to move refurbished iPhones into markets outside of the US. For years, pundits have wondered when Apple would ship an inexpensive iPhone geared toward emerging markets. It seems increasingly clear Apple’s answer is to sell refurbished iPhones instead.

It’s a smart plan that will likely work quite well for Apple once it figures out two key issues: Which countries’ consumers are amenable to used products (not all are) and which countries’ regulations will allow refurbs to flow in (not all will). Once Apple sorts these issues out, I expect we will start to see Apple itself promoting refurbished products in more countries at more prices.

At IDC we’re now closely monitoring the used smartphone market as we firmly believe growth here could have a significant impact on shipments of new phones in the future. Apple won’t be immune to the potential negative impact of this secondary market. But, by creating its own virtuous cycle through its upgrade plans and refurbished offerings, it makes more money now and better controls its market position down the road.

Unpacked for Friday November 11, 2016

Snap Starts Selling Spectacles Through Bots of a Different Kind – by Jan Dawson

Snap (formerly Snapchat) on Thursday started selling its Spectacles camera glasses through a vending machine (dubbed a Snapbot) in Venice, California, close to its headquarters. A line quickly formed and the vending machine sold out of the Spectacles at least once before being refilled. Snap also employed Ellen DeGeneres as an early tester and had her share her experience, appropriately enough, through Snapchat. The company also launched a Snapchat filter allowing users to virtually try on a pair of Spectacles.

Snap has always indicated it had a small production run in mind for Spectacles, at least at first, and its distribution strategy certainly reinforces that idea. A single vending machine is never going to sell a large number of Spectacles, even if it’s moved around from place to place roughly every 24 hours. But, of course, selling a large number of Spectacles isn’t Snap’s goal here – creating buzz, excitement, and a sense of exclusivity is. Given the high markup for Spectacles currently selling on eBay, it seems the strategy is working and the company certainly got plenty of buzz through what’s essentially a viral marketing campaign.

At some point, Snap will have to evolve beyond this early strategy if it’s to sell Spectacles in any sort of volume. The question is, just how many people will want to buy the glasses, which are relatively expensive, fairly obtrusive and, of course, better suited to the summer months than the winter (which should theoretically be arriving any day now despite the warm weather people in many parts of the US are currently enjoying). This start, though, with artificial scarcity coupled with social buzz, is a great way to test the market and seed early adopters with devices. It’s telling that Snap isn’t making review units available to journalists or traditional gadget reviewers – this will be very much a word of mouth marketing campaign, as befits a social company.

The big question is where Snap goes from here. It will likely have to move to some combination of direct online distribution and third party retailers over time to support significant scale. It’s a safe bet it will pick retailers other than those who typically distribute consumer electronics, likely including some fashion brands. In some ways, this will be the most interesting tech product launch from a distribution perspective since the Apple Watch, which also played to fashion and jewelry audiences not usually associated with tech products.

In some ways, the most amazing thing about the launch was no one was really talking about whether the Spectacles actually work well. Towards the end of the day, some tech blogs managed to grab some of the early buyers and get their feedback and it seems to be largely positive. The simplicity of the glasses is their strongest point and, of course, their integration with Snapchat is a huge strength, though it seems as though the videos can also be shared to other social media. I’m guessing we’ll be seeing circular videos shared more extensively on Facebook and Twitter in the coming months, but it’ll be a slow build given the limited distribution, at least for now.

Foldable Phones Might Be Better Off Not To See The Light of Day – by Carolina Milanesi

The Verge reported this week that Samsung filed a patent back in April for a foldable phone. The drawings show a narrower phone with a hinge similar to what you see on a Surface Book that bends inward to close on itself like an old-fashioned flip phone. The folding movement is said to be automatic or semi-automatic.

A couple of weeks ago, Patently Apple uncovered an Apple patent that refers to a bendable, foldable iPhone using nanotube structures. The iPhone differs from the Samsung design in that it looks like it closes like a book.

I do not really want to get into the details of what is needed to make bendable phones that are commercially viable. Lenovo showed a concept earlier this year and so have other manufacturers.

My question is really about the need for a bendable phone. There seem to be two main reasons: giving us more screen and protecting that screen. While I do not think smartphones should grow much more in size, there is still room to give us more screen without growing the overall footprint of the device. I also think there is a balance to strike when it comes to a device we have with us all the time. The reason there are still consumers who like sub-5” phones is they want something compact. Folding might help with the size of the device but it is unlikely to help with the thickness. When I saw the iPhone patent design, I immediately thought of the many 2-in-1s that have a foldable design vs a detachable one. The weight of those devices is less than ideal and hinders the experience. As for screens, Gorilla Glass is getting better and better, and our data shows major screen breaks are actually less common than we are led to believe.

Apart from phones, however, there is a lot of opportunity for bendable technology, especially if you think about wearables. Here we have seen some curved displays but not yet a bendable one that has the guts of the device spread around your wrist — the wristband is not just an accessory but a functioning part of the device. Think how much more accurate the heart monitor could be if the sensor was where you usually take your pulse without you having to wear your watch with the screen positioned there.

VR and AR seem like another area where bendable displays could benefit the experience. While you can turn your head to see things around you, there would be a benefit if the headset stretched further around your head so your eyes had less of a blind spot and a more fluid field of vision. I am very shortsighted and I see a big difference between wearing glasses, where I clearly have blind spots, and contact lenses, which allow me to really see more. Curved TVs, although not changing your experience dramatically, do help to immerse you more in the content.

There are still hurdles to a commercially viable, bendable phone but even if it could be done it does not necessarily mean it should. We also might see different iterations rather than what we have seen in the patent drawings, more aligned with what the market will call for by the time the technology is ready.

Oculus Software Update Lowers PC Requirements for VR Headset – by Bob O’Donnell

One of the more exciting developments expected to drive growth in the PC market is interest in virtual reality and head-mounted displays. The problem is the hardware requirements for the PC used to drive those headsets have been very high. That, in turn, translates into expensive new PCs—typically at least $1,000, but sometimes even more—which severely limits the potential market size for these exciting new devices.

Yesterday, Oculus took a big step toward reducing those costs—and expanding the potential audience for their Rift VR headset—with a new software update. The update leverages technology the Facebook-owned company calls “asynchronous spacewarp.” Though similar in name to the “asynchronous timewarp” technology the company introduced with the official Rift launch back in March, “asynchronous spacewarp” is different and has a key advantage: it essentially allows the Rift to deliver what’s said to be a quality experience at just 45 fps (frames per second) instead of the minimum 90 fps typically required.

Translated, that means you can now get away with a less powerful (and less expensive) video card to drive a Rift experience. In theory, that means you can buy a cheaper new PC and still successfully use the Rift. Realistically, though, it means a large collection of existing gaming PCs can likely be pressed into service—at no extra cost for their owners.

Specifically, instead of requiring an nVidia GTX 970 or AMD Radeon 290 GPU, the Rift can now be run on a system with any nVidia 900 or 1000 series or any AMD RX 400 series GPUs. As you might expect, the experience isn’t supposed to be as good as you would get with a newer GPU but, for existing gaming PC owners who have been dying to try a Rift, this could be a good option.
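To make the frame-rate math concrete, here is a minimal toy sketch of the general idea behind this kind of frame extrapolation: the GPU fully renders only every other frame and the in-between frames are synthesized from the most recent rendered frame plus recent motion data. This is purely illustrative and is not Oculus’ actual algorithm; the function and variable names are placeholders.

# Toy illustration of how a 45 fps render loop can feed a 90 Hz display.
# This is NOT Oculus' implementation; extrapolate_frame() is a stand-in
# for whatever image warp the real runtime performs.

def extrapolate_frame(last_frame, motion):
    """Synthesize an intermediate frame by shifting the last rendered frame
    according to recent head/scene motion (placeholder for the real warp)."""
    return f"{last_frame} warped by {motion}"

def display_loop(rendered_frames, motion_samples):
    """Show a real frame on even refreshes and a synthesized one on odd
    refreshes, doubling the effective frame rate seen by the user."""
    shown = []
    for tick in range(len(rendered_frames) * 2):
        source = rendered_frames[tick // 2]
        if tick % 2 == 0:
            shown.append(source)  # real frame rendered at 45 fps
        else:
            shown.append(extrapolate_frame(source, motion_samples[tick // 2]))
    return shown

# Three rendered frames become six displayed frames (45 fps -> 90 fps).
print(display_loop(["frame0", "frame1", "frame2"], ["m0", "m1", "m2"]))

The point of the sketch is simply that halving the number of frames the GPU must fully render is what lets older, cheaper graphics cards clear the bar.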

Over time, of course, the CPU and GPU requirements necessary to do high-quality VR and AR will fall into mainstream price points and be available to virtually anyone who buys a new PC. Until then, however, these kinds of software innovations will be increasingly important to introduce a wider audience to the wonders of VR.

The Danger of a Device-Based Approach to Assistants

With Amazon Echo and Google Home now both in the market, dedicated device-based personal assistants have been in the news quite a bit lately. I’ve been called several times by reporters asking me whether Apple, Microsoft, or other companies need to have similar devices. But if there’s one thing the reviews and my own experience with these devices has taught me, it’s there’s a danger in equating your digital assistant with a device.

Note: this isn’t a review – but Carolina did a great review of her early experience with the Home yesterday.

Equating the assistant with the device

Amazon’s Echo began life as the only home of its personal assistant, Alexa, and, although Alexa is now available on several other devices, my guess is the vast majority of users still equate the assistant with the device. Google, meanwhile, has made Google Home the entry point for its own Google Assistant and, for many people, Home is the only place they’ll be able to experience the Assistant for now, given the low uptake of the Allo messaging app and the high barriers to smartphone switching.

The downside here is, as people equate the assistant with the device, they will also equate failures by the assistant with failures of the device. When the entire purpose of a device like Echo or Home is to act as an assistant, to the extent the assistant fails to do its job, the device becomes useless. This is, importantly, very different from the likely reaction to failure by Siri or Cortana, which are mere features on devices that do much more. If we’re unhappy with Siri’s performance, we might well fall back on other ways to interact with our devices or be more selective in the scenarios for which we use Siri rather than the touchscreen because we have options. We may also choose to try again at a later time when the software has been updated because the assistant is still there on the device we’re using for lots of other things. But a device whose sole purpose is to be a good voice assistant and fails at that one job fails entirely and we will likely be tempted to return it or, at the least, put it away.

An assistant trapped in a box

The other challenge with equating an assistant with a device is users can easily have the sense the assistant is effectively trapped in the box. This is very much the case with Alexa, which doesn’t yet exist on smartphones or mainstream wearables. Leave the house and you effectively leave Alexa behind where she can’t do you any good at all. The Google Assistant has been designed with a much broader eventual footprint in mind but, for now, Google has limited its availability to this home device, a smartphone that will sell in small numbers, and an obscure messaging app. That’s a deliberate decision on Google’s part to sell more devices but it also means the Google Assistant will be similarly trapped in the home for many users.

Even when they venture out of the home, having many people’s first experience with the Google Assistant be tied to a larger device with far-field voice recognition technology risks disappointment when people then try it on a smaller device which is less effective at interpreting commands. Conversely, if Apple or Microsoft ever bring their existing virtual assistants to in-home devices like the Echo and Home, users may be pleasantly surprised at the improved voice recognition and will also enjoy a more mobile experience with assistants also present on their other devices.

An assistant that needs an assistant

The other thing that’s struck me again as I’ve been using the Home over the last few days is how important the companion mobile app is, something I noticed with the Echo as well. It’s almost like my assistant needs an assistant, not just for the initial setup but for subsequent experiences as well. One of the great advantages Google has in this space is the massive trove of web data it has to tap into but, of course, much of that data is visual in nature – having the Home read you a recipe all in one go is a terrible experience if you actually want to cook something and, of course, image search is completely useless on the Home. You need the companion app to make sense of those things but, if that’s the case, then why not just use your phone? And, once you’re using your phone for some of these interactions, why not use it for all of them?

I’ve found that what you really want in quite a few of these interactions is voice as the primary interface, with some kind of screen as a confirmation or feedback interface as well. A phone (or an Apple Watch or Android Wear device) gives you that combination but the Echo and the Home don’t. One of the frustrations I’ve had with the Home is that, when it fails, it’s often not clear whether that’s because it misinterpreted what I said or because it simply hadn’t been programmed to deal with the request, even when properly understood. A screen of some kind can eliminate that ambiguity.

Best as part of an ecosystem of devices

It’s early days in the history of these assistant-speaker devices for the home and I’m sure we’ll see some meaningful advances in the future. But I’m still finding the utility and performance of these devices more limited and frustrating than transformative in my home. And I’ve been reinforced in my belief that these devices have to be endpoints, not the endgame, and that virtual assistants will only be truly effective when they’re part of an ecosystem of devices rather than confined to a single one.

Smartphones and the Future of Digital Cameras

For about 20 years, I have been a huge fan of digital cameras, both DSLR and point-and-shoot versions. Almost every year, I would buy the newest model to take most of the really important pictures, especially ones related to my family. I also love to capture things like sunsets and all types of seascapes and architecture as I travel the world. I am by no means a professional photographer but I have taken many classes on photography and have learned enough to take pictures I consider relatively decent.

But since 2007, when Apple introduced the iPhone and included a camera, my smartphone has increasingly been the real workhorse when it comes to taking pictures. In the early days of the iPhone, the camera had low pixel counts and, while OK, it did not come close to equaling the quality of the images I would get from a DSLR or even some of my high-quality point-and-shoot models. So, I would also carry one of those cameras with me when I traveled.

But since 2011, when Apple really started adding a better camera with higher pixel counts and included more imaging features in hardware and software, my reliance on DSLRs and point-and-shoot cameras started to decline. Almost all other mid to high-end smartphones have included high-quality cameras as well and helped make the smartphone the #1 device for taking pictures.

On a recent trip to Maine, during my spare time, I took side trips to capture the changing colors of the trees. I did not even take a separate camera on this trip and instead relied solely on my iPhone 7 Plus with its advanced camera features.

I am not alone in transitioning from stand-alone cameras to the smartphone as the primary way to take pictures. As the chart below shows, starting in 2011, demand for DSLRs and point-and-shoot cameras began to decline substantially. Shipments peaked at 121.5 million in 2010 and, in 2015, only 34.5 million were sold. Estimates for 2016 suggest only 13.5 million will be sold worldwide.

[Chart: worldwide DSLR and point-and-shoot camera shipments]
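To put that decline in percentage terms, here is some quick back-of-the-envelope arithmetic using the shipment figures cited above (the 2016 number is an estimate):

# Quick arithmetic on the shipment figures cited above, in millions of units.
shipments = {2010: 121.5, 2015: 34.5, 2016: 13.5}  # 2016 is an estimate

peak = shipments[2010]
for year in (2015, 2016):
    drop = (peak - shipments[year]) / peak * 100
    print(f"{year}: {shipments[year]}M units, down {drop:.0f}% from the 2010 peak")
# -> roughly a 72% decline by 2015 and an 89% decline by 2016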

While having the camera built into the smartphone makes it the handiest camera one can carry, there is another key reason smartphones are pretty much obliterating demand for DSLRs and point-and-shoot cameras. Although the quality of the images themselves is a major driver, another factor is how easy it is to take a picture and instantly post it to Facebook, Twitter, Instagram, or hundreds of other social media sites that have become integrated into our digital lifestyles.

Social media and online photo storage have completely changed the market for anyone who takes pictures these days, even for some professional photographers who are now using the iPhone 7 Plus on a regular basis. The ability to take a picture and immediately send it to the cloud for storage, or use it instantly in a social media app, makes the smartphone camera the most versatile photo-taking tool one can use.

What surprises me is how long it has taken the DSLR and point-and-shoot makers to understand this. Yes, some have put wifi connections in their cameras and made it possible to export photos wirelessly to some cloud storage solutions, but few took the time to really integrate great software into these cameras that would move those photos into social media easily.

Also, they restricted these cameras to wifi hotspots instead of doing some type of innovative deal to include a cellular radio in their cameras so these connections could be made anywhere like they are in a smartphone. Three years ago, one of the cellular carriers told me they had a program for camera makers but could get very little traction. While costs of the connection could be an issue, given today’s data programs that tie many cellular radios to a family account, using a cellular radio in a DSLR or point-and-shoot is a viable option for these camera makers.

However, I believe that ship has sailed and these types of cameras are going to end up in a niche category. In 2017, probably fewer than 10 million total will be sold worldwide. Instead, next-generation smartphones, especially once they gain the high-end features Apple has embedded in the new iPhone 7 Plus and capabilities now found in DSLRs, will pretty much replace point-and-shoots altogether, and DSLRs will become tools for only pro and semi-pro photographers.

The Big Six in Q3 2016

Every quarter, after most of the big tech companies have reported earnings, I do a roundup comparing some of the key metrics for the “big six” consumer technology companies – Alphabet, Amazon, Apple, Facebook, Microsoft, and Samsung. Here’s this quarter’s analysis.

Revenues – Apple top dog, but Alphabet passes Microsoft

Apple has been the top dog by revenue on a trailing four-quarter basis for some time but, in some ways, the biggest news symbolically this quarter was that Alphabet passed Microsoft:

[Chart: trailing 4-quarter revenue]

Both companies had around $85 billion in revenue in the last twelve months, well behind Amazon at $128 billion, itself well behind Apple and Samsung at $216 billion and $179 billion over the same period. At Amazon’s current growth rate and with Samsung’s recent struggles, it’s possible Amazon could eclipse Samsung in scale in the next couple of years. Facebook, of course, continues to be the smallest of the six by some margin:

[Chart: revenue growth]

Interestingly, Facebook signaled on its earnings call it’s likely to see somewhat slower growth going forward as its ad load begins to saturate. It will be worth watching to see to what extent Amazon and Alphabet can begin to close the gap on growth rates. Apple, meanwhile, continued its revenue growth turnaround and projected modest year on year revenue growth in the next quarter. Microsoft has just hit positive growth again for the first time in a long while, mostly thanks to passing the anniversary of the Windows 10 launch, which required revenue deferrals in accounting.

Margins and profits

As well as being the fastest growing company in our set, Facebook also has the highest margins:

[Chart: operating margins]

Apple’s margins have taken a slight hit over the past year as revenues shrank but should begin to rebound in the coming months as revenue growth returns. Amazon, of course, is by far the least profitable from a margin perspective and even dipped slightly from its recent modest improvements in margin in Q3. Samsung’s margins took a hit from the impact of the Note7 recall (as I wrote about last week), while Microsoft’s margins have bounced around quite a bit as it has taken write-downs.

A very different picture emerges, however, when you look at dollar profits rather than just margins:

[Chart: operating profits in dollars]

Here, Apple blows away the competition, with around $60 billion of operating profit over the past year, roughly equivalent to Facebook’s revenues since its IPO in 2012, and approximately ten times Amazon’s operating profits over the past three plus years. Microsoft, Samsung, and Alphabet have all generated around a third of that amount in the past year, though their trajectories are rather different, with Alphabet ascendant while Microsoft and Samsung stall somewhat.

Mixed trends in capital investment

These companies compete across a variety of markets but one area where three of them compete very directly is enterprise cloud services, where Amazon, Alphabet, and Microsoft each have a strong presence. However, the investment trends behind those cloud services vary quite a bit between these three companies. Alphabet’s capital intensity has been falling over the past year and a half, especially since Ruth Porat took over as CFO and instituted something of a period of austerity at the company. Amazon’s capital intensity dipped a little last year but has been rising for the last two quarters – one of the reasons for its dip in profits in Q3. Microsoft, meanwhile, is rapidly raising its capital investment globally as it ramps up spending on cloud infrastructure.

[Chart: capital intensity]

Interestingly, Facebook is also investing more heavily in infrastructure, with a capital intensity higher than any of the other companies in our set and now at wireless operator-like levels, which is remarkable for a company that operates a consumer online services business. But given Facebook’s recent focus on video, its servers now need to handle much more content and bandwidth than in the past. Apple’s and Amazon’s capital intensity has been fairly similar, at around 5%, though Apple’s jumped a little in the last two quarters.
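In case the metric is unfamiliar, here is a minimal sketch of how capital intensity is typically computed, assuming it is defined here as capital expenditure as a share of revenue over the same period; the figures below are made up purely for illustration, not reported results.

# Capital intensity, assuming the usual definition of capex divided by revenue.
def capital_intensity(capex: float, revenue: float) -> float:
    """Return capex as a percentage of revenue for the same period."""
    return 100.0 * capex / revenue

# Illustrative placeholder numbers only (in $ billions).
print(capital_intensity(capex=2.5, revenue=50.0))  # -> 5.0 (%), the sort of level noted above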

As always, there’s far more to these companies than just these simple metrics and it’s worth diving deeper into their financial and operating data to pull out some of the additional detail. I have a larger set of charts on each of these companies and more of these comparative charts as part of my Quarterly Decks Service and Ben and I also discussed several tech companies’ earnings on this week’s podcast.

The Importance of the Quality Engineer

When a new product is introduced at a company event, it’s the executives, design engineers, and industrial designers that get all of the credit. But behind every new product are engineers who focus on quality and reliability and rarely get much recognition.

Theirs is not a glamorous job; it requires more discipline and less creativity than design work. They often focus on the negative, trying to find problems with a product before it’s shipped. They’re also the ones who can require the designers to go back and redesign, delaying a product’s introduction and subjecting themselves to lots of pressure.

Their job is to simulate the worst cases the product will experience before it’s shipped, taking on the role of the customer. They’re not the most popular members of the team; they are sometimes seen as a traffic cop. Design engineers focus on creativity and invention. The product is their baby and no parent wants to be told their baby is flawed or has a wart.

If a company’s executives or board wants to know how a new product is really doing, the quality engineers are the ones to ask. They have all of the statistics because their job is not only to be sure the product will last but to monitor how well the product is performing once it ships. They’ll tabulate the complaints, analyze the returns, and report back to the design team what needs to be improved.

Yet, in spite of their efforts to provide an objective account of a product’s performance before it ships, they’re occasionally overruled. Companies often ship products with the expectations of getting returns, based on a calculated decision. You just hope that doesn’t happen when safety is involved.

When Samsung shipped the Galaxy Note 7, both the initial and the redesigned versions, it’s hard to believe the quality engineers were happy and not overruled.

I’ve often thought how valuable it would be if we, as consumers, had access to the quality information before buying a new product. Our buying decisions would be much more informed.

We’d know the likelihood of a product needing to be repaired, see a list of the most frequent failures, and be able to make a clear comparison between competitive products before purchasing. But, of course, this information is closely guarded and rarely available. The best we can do is consult things like Consumer Reports, which tabulates the experiences of its readers for a few categories of products.

The next best alternative is to access customer reviews, surveys, and complaints from the web. While it’s impossible to determine the specific percentage of returns, there’s plenty of anecdotal evidence that can help. It’s easy to Google a problem we encounter and see if others have had similar problems.

A case in point. Almost a year ago, my daughter gifted me a Fitbit Charge as an incentive to be more active. But, after nine months, I’m on my fourth unit. The silicone rubber band on two units simply peeled apart, and a third unit had a defective battery that lasted less than a day. The products were never abused or used near water. While it seemed statistically surprising, I initially assumed I just had bad luck. But when I mentioned this to others, two relatives and a friend had each had to replace theirs several times. And, after searching through the company’s blogs and online reviews, I’ve found scores of users reporting similar issues across many of the models.

Now, any product can expect to have a small percentage of defective units and, when you sell millions, the actual number of defects can be large. Typical return numbers for defects in the first year for a well-designed and manufactured product range from about half a percent to 2%. More than 3% is considered very high.

It’s hard to know what the true percentage of Fitbit defects is from this anecdotal evidence. Based on my experience, I bet it’s very high.
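As a rough illustration of why the anecdote is suggestive, here is a back-of-the-envelope calculation assuming each unit fails independently at a fixed first-year rate, which is obviously a simplification:

# Back-of-the-envelope: how likely is it that 3 of my 4 units fail within a year
# if the true first-year defect rate matches typical industry norms?
# Assumes independent failures at a constant rate, a simplification.
from math import comb

def prob_at_least_k_failures(n_units: int, k: int, defect_rate: float) -> float:
    """Probability of at least k failures among n_units under a binomial model."""
    return sum(comb(n_units, i) * defect_rate**i * (1 - defect_rate)**(n_units - i)
               for i in range(k, n_units + 1))

for rate in (0.02, 0.03, 0.20):  # 2% "normal", 3% "very high", 20% hypothetical
    print(f"defect rate {rate:.0%}: P(3+ of 4 fail) = {prob_at_least_k_failures(4, 3, rate):.6f}")

Even at a hypothetical 20% defect rate, far above the 3% level considered very high, the odds of three of four units failing are under 3%, which is why my experience, plus that of the people around me, points to an unusually high rate rather than plain bad luck.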

But considering the poor performance of the company’s stock in recent days, if I were a stock analyst, I’d want to understand how big a problem this is. Ask the quality engineers and you’ll get a better read on Fitbit’s fortunes than you would from asking the CFO.

—————————
Fitbit offered this response when I asked them about their product quality issues:

The quality of Fitbit products and the health and safety of our customers are our top priorities. We conduct extensive testing and consult with top industry experts to develop stringent standards so that users can safely wear and enjoy our products. We also are committed to delivering a superior customer experience. We respond quickly when customers report issues and strive to work closely with them through our customer service channels to ensure their satisfaction. If consumers have any questions or concerns, they can contact us at help.fitbit.com.

While much of this may be true, their product reliability is not there yet.

Unpacked for Friday November 3rd, 2016

The On-Demand Fail – by Ben Bajarin

Just over a year ago, I shared some thoughts regarding the “On-Demand Economy”. In particular, why it works in China but may find struggles here in the US. I highlighted the US-based struggles in the form of two major problems:

First, US-based on-demand services lack two critical things which make many of these O2O services in China successful. They lack scale and they lack low-wage workers. The reason these services have a chance in China is because, in a city like Beijing, there are over 19 million people living in relatively close proximity. In Shanghai, there are over 22 million people. Several large tier 1 (meaning more developed and wealthier) cities have populations between 8-11 million people. China has an estimated 700+ million people in cities. Many of them live in developed cities and are considered part of the rising middle class, with higher disposable income. Contrast this with the United States where only 10 cities have populations of over 1 million people. Number one is New York with over 8 million, then Los Angeles with over 3 million, and Chicago with just short of 3 million people. I make these points because on-demand startups like the ones I mentioned require the scale of close-proximity urban living, and this is something China has in much greater numbers than the US.

Second, China has low-wage workers. This is probably the most salient point about the contrast of the two on-demand economy markets. In China, you can not only have goods or services delivered in an hour or less but at costs only slightly above what it would be for you to go out and acquire the goods or services on your own. The economics work for not just the wealthy. Contrast this with the US where I know of a CEO of a large startup in San Francisco who pays $25 for a burrito once a week to have it delivered to his office from his favorite burrito joint. He could have walked down the street and paid $8 but instead wanted to stay in his office and work.

There has been some public news around funding rounds for on-demand startups like DoorDash, but this recent scoop on Instacart is interesting.

Instacart changed the terms of its payment structure and, it turns out, workers delivering for the company are earning less. The company claims this is necessary for its continued growth. This is the lack of low-wage workers problem staring Instacart right in the face. It wants to lower prices for delivery because the premium for on-demand services is way too high to ever go mainstream. To account for that, the on-demand companies were taking hits on their margins, which is also unsustainable. This is the beginning of the likely downward spiral for US-based on-demand startups, as the entire system is unsustainable and prices simply can’t come down enough in this model to make it attractive to a significant part of the market.

It seems logical that there is a market for food and grocery delivery but it will likely be filled by Amazon or someone else with a better structured unit economics model. The idea is sound. The current execution by on-demand startups is not.

There is a Wider Opportunity than Large Enterprises for Microsoft Teams – by Carolina Milanesi

Last Wednesday in New York, Microsoft launched a new online chat application called Microsoft Teams. I attended a parallel event in San Francisco aimed at giving me the opportunity to demo Microsoft Teams at the end of a broadcast of the official launch event. The best way I have to explain Microsoft Teams – not Skype Teams, as it was rumored leading up to the event – is that it is a portal through which all team interactions happen. It is a web-based chat that adds to Office 365 Enterprise and Business editions and will roll out in early 2017. It adds to current services like Skype and Yammer rather than replacing their functionality, offering different options to users. While at first Teams looks very much like Slack, a few minutes playing with it shows the similarities are more about the look and feel of the portal than the actual functionality. While you have teams and channels like you do in Slack, Microsoft Teams is more of a hub that different tools plug into, from Microsoft and from third parties with access to its open APIs.

As the presentation got under way and we moved to testimonials, it was clear who Microsoft’s target customers are: the large enterprises already invested, not just in Office, but in SharePoint, Power BI, Delve, OneNote, Planner, etc. It was also clear Microsoft was speaking to IT managers, not users. Microsoft Teams will be pushed down to users by IT, an approach that oftentimes sees even the most useful app met with resistance just because it is perceived as an imposition.

I believe Microsoft is missing two big opportunities: user push and education.

Letting individual users access Teams could have been a much stronger weapon against Slack than the IT mandate approach. Appealing to the user side and showing how work can be facilitated by a tool like Teams, focusing on how it scales up but also how it scales down, would have spoken to users more than to IT managers. The only time users were mentioned during the presentation was for trivial things like stickers and gifs. While it is true messaging has an entertaining component, I think it is patronizing to imply all it takes to make users happy is a sticker.

Education is another area where Microsoft is currently losing to Google and Apple and one that should be a priority when it comes to collaboration. It should be a priority for the short term opportunity of deployment but, more importantly, for the long term opportunity to hook millennials to the platform. Microsoft told Mary Jo Foley that Office 365 Education will eventually get Teams but I strongly believe the right way would have been to make it available in the Student Edition of Office. Once again, Microsoft is thinking top down rather than bottom up.

Overall, the event was a bit of a contrast to the Devices event from the previous week, where so much attention was given to individuals over enterprises and to creators over employers. The focus there was on empowering individuals, which ultimately is what Teams does too, but that did not come across as well in the positioning.

Fitbit and GoPro’s Results Highlight Challenges of Niche Hardware Companies – by Jan Dawson

I wrote a column last December about the challenge of being a one-trick pony in the consumer technology market. Two of the three companies I cited as examples in the article were Fitbit and GoPro. At the time, their businesses were generally doing well but the risks associated with being a niche hardware vendor have come home to roost at both companies since. Fitbit’s growth has slowed significantly, squeezing its margins, while GoPro has been shrinking markedly and losing money for four straight quarters.

Though each company has its own problems, they share most of them. They both provide hardware in categories with limited addressable markets – both fitness trackers and action cameras are niche propositions. In both cases, they also suffer from the combined effects of device abandonment and low upgrade rates among their bases. And both companies are facing low-end commoditization as well as encroachment from increasingly capable smartphones and other general purpose devices like smartwatches.

In both cases, the companies have dismissed concerns over saturating markets but it’s increasingly hard to ignore that these companies dominate their respective market segments, yet struggle to grow. The excuses for poor growth change every quarter, but the performance trends are remarkably consistent. Both companies have attempted to diversify into new areas – Fitbit into corporate wellness and GoPro into media. But neither company’s strategy is yet bearing meaningful fruit. Indeed, neither even breaks out revenue from these new categories in its financial reporting.

The future looks somewhat brighter for Fitbit, which continues to be profitable and is at least growing modestly. GoPro projects another year of losses in 2017, though also a return to growth. The challenge with GoPro is it’s missed its own guidance and analyst expectations so often recently, it’s hard to take its guidance seriously. The big question around both companies has to be whether they are best suited continuing to go it alone or whether they would do better as part of bigger consumer technology companies that could wrap ecosystems around them and leverage economies of scale and scope. Both are certainly considerably cheaper for potential acquirers at the end of the week than they were on Monday.

The Election’s Impact on Tech Regulation

The Obama presidency and the FCC, under Chairman Tom Wheeler, have been among the more activist and ambitious in recent memory. There have been some big victories — successful spectrum auctions, innovative spectrum sharing and 5G initiatives, the National Broadband Plan — and some acrimonious proposals, notably around network neutrality and cable set-top boxes. Justice has been a bit mercurial: opposing major consolidation in mobile and broadband (AT&T/T-Mobile, Sprint/T-Mobile, Comcast-Time Warner Cable), but allowing Charter’s acquisitions of TWC and Bright House and Comcast’s acquisition of NBC Universal.

In a few days, there will be a new President-elect and transition teams will begin strategizing for the post-January 20, 2017 world. What might be the impact of the election on comms and media regulation? I’ll start with a broad view and then drill down to a few of the more prominent items.

Of course, who is elected President will potentially have a significant bearing on tech. Hillary Clinton has a pretty detailed and well-articulated platform, with a particular emphasis on expanding broadband availability. She is likely to continue many of President Obama’s initiatives and priorities. If Secretary Clinton is elected, it is likely FCC Chairman Wheeler stays on until July or so. If she is true to form, expect FCC Commissioners and senior-level FCC staffing to take on a ‘FOC’ (Friends of Clinton, and by that I mean Hillary and Bill) flavor. On the other hand, President Clinton could signal intent to bridge gaps with the Republicans by ensuring a balanced FCC. The current FCC has three Democratic and two Republican commissioners, but the Democrats (especially Commissioner Rosenworcel) have not always been in lockstep with Chairman Wheeler.

If Donald Trump is elected, things are more of a wild card. To begin with, he has said little about this sector during the campaign and there isn’t much to glean from his policy platform. Trump is likely to be much more hands off than President Obama (who was very hands on). He will also be more pro-business and anti-regulation, which is why his off-the-cuff remark about the proposed AT&T-Time Warner deal was surprising. If he ends up being Delegator in Chief, his appointments could have an outsized influence.

What happens in terms of control of the House and Senate, as well as the overall post-election ‘tone’, will also be important. If the temperature remains highly acrimonious, there will be contention and delays in the naming and confirmation of senior staff. This could affect the process, prioritization, and timing of some big ticket items on the FCC’s docket. The FCC has a lot going on already. The AT&T-Time Warner deal will land at least partially on its plate, at minimum as an important litmus test for network neutrality.

Here is a quick run-down of some of the issues a new Administration is likely to face.

Network Neutrality — This is something President Obama strong-armed through the FCC. So far, the FCC has been fairly hands-off in its application of NN, for example allowing zero-rating services such as T-Mobile’s Binge On. The AT&T-Time Warner deal will be an important test, given AT&T’s current practice of zero-rating DTV content for AT&T subscribers and plans to do the same with the upcoming DirecTV NOW service. I think the FCC’s tone will continue to keep the application of NN at a high, “B to B” level. For instance, ensuring combined distribution and content companies (AT&T-Time Warner, Comcast-NBC Universal) do not discriminate against new media and OTT players (Netflix, Amazon).

Spectrum — There is a lot going on in the spectrum department right now. The new Administration is likely to inherit the 600 MHz auction – both its final rounds and its implementation. It’s a complex undertaking. Hot on the heels of that is the re-auctioning of DISH’s AWS licenses. The 3.5 GHz ‘shared spectrum’ (CBRS) initiative also has some important milestones hitting sooner rather than later, such as selecting and certifying the Administrators and coming up with an auction framework and other procedures. There is still some opposition to the FCC’s April CBRS Report and Order that the next FCC will have to address to keep this moving forward.

Another priority will be keeping the early momentum going on 5G. This is important from the standpoint of the US continuing its leadership in advanced wireless networks. A lot of innovation and work needs to happen to make the millimeter wave bands usable for commercial wireless services.

Broadband — The National Broadband Plan was one of the signature tech initiatives of the Obama presidency. Its implementation has been somewhat of a mixed bag. Broadband availability expanded and average speeds steadily improved. But there was also a lot of squandered money, the broadband market is not very competitive, and US average speeds are still very much middle of the pack.

A Clinton administration would be more likely to keep the broadband gravy train going. Fixed wireless/5G and the deployment of large numbers of small cells will become a bigger part of broadband evolution over the next four years. The FCC’s Mobility Fund II, which is wireless’ version of (and contribution to) the Universal Service Fund, is part of the equation, too.

Business Data Services — These are the FCC’s new ‘special access’ rules, which would impose price caps on what telecom companies can charge other companies or businesses for bulk data connections — often referred to as backhaul. The FCC is racing to get this done by the end of the year. BDS is important to the evolution of broadband and 5G, because bigger pipes are needed to deliver the capacity required by those services. Backhaul prices can be prohibitively expensive in un-competitive markets. If the FCC doesn’t vote on BDS by the Inauguration, its status could be in limbo and/or its proposed rules could be revisited.

Set-Top Boxes — The vote on this controversial Wheeler initiative to open up the set-top box market to competition has been delayed, after FCC commissioners could not come to an agreement. The FCC could try to get this done by the end of the year and before a new Administration takes power in January (although the expiration of Rosenworcel’s seat in December adds a tasty plot twist). But it is equally likely this will land in the new Chairman’s lap. The direction of the winter of 2017’s political winds, plus the new Administration’s having to deal with the AT&T-Time Warner deal, could affect the direction of this proposal.

M&A and Industry Consolidation — The approaching election has certainly been a factor in the accelerated pace of tech M&A activity during the second half of 2016. The new Administration will have to rather quickly deal with the proposed AT&T-Time Warner deal. This will be an important litmus test for future deals because it touches on many fractious issues that both the DOJ and the FCC will have to deal with more broadly: cross-ownership of assets, media consolidation, and the Internet’s impact on traditional distribution channels.

I also believe the wireless industry will revisit the consolidation issue early-ish in the new term. Sprint and T-Mobile could try again to get a deal done, or one of them could get acquired by a cable company. DISH is also a factor here.

Congress might be on hold between November and January, but the FCC still has a lot on its plate. From a tech industry perspective, Obama’s eight years were certainly active on the regulatory front, which no doubt angered those preferring a more hands-off public sector. But Obama’s initiatives, particularly with regard to broadband and spectrum, will be almost universally viewed as laudable. There is a lot—a lot—going on in our sector and, regardless of who wins on November 8, we will need some minimal level of government effectiveness to keep our fast-changing market moving forward.

The Great Tech Wall of China

It is hard not to talk about China in so many industry conversations today. China has emerged as one of the most significant technology markets we have ever seen, because of the sheer size of its consumer market but also because of its hunger for technology and its growing economic power to spend on it. For the better part of the last decade, it was fairly easy for PC OEMs to get into and compete in China. Global companies had stronger brands, and the Chinese market therefore leaned toward a known, credible entity to spend its limited resources on rather than a risky white-label brand. Fast forward to 2016 and the entire atmosphere has changed. China is so large and so lucrative, many former no-name brands have established themselves as mainstream players in the Chinese market. Huawei, Xiaomi, Oppo, and Vivo are the main brands today whose financial and brand success has been entirely thanks to China and, in reality, is still only in China.

In a number of recent technology studies out of China, the brands I mentioned above have risen in popularity and now command strong brand sentiment and aspiration in the region. It is no longer embarrassing to hold a local Chinese brand, and several are beginning to rival Apple as far as aspirational purchases — but at more affordable prices. Recent evidence from local studies suggests even the younger demographic, who used to go to massive lengths to get an iPhone, is choosing high-quality yet affordable brands like Oppo or Vivo and not aspiring to iPhones as much.

China’s local manufacturing base continues to get better and better at producing quality consumer electronics, and it is the fuel behind local brands starting to create massively appealing hardware designs at reasonable prices. This is clear in smartphones but is starting to bleed into all other categories of electronics in China. My conviction is China’s consumer electronics scene will continue to be dominated by local brands. Part of this conviction comes from what we see and hear from foreign technology brands and the trouble they are having competing in China. The Chinese government is unquestionably starting to tighten the reins and control more of what happens in consumer electronics, as well as who the key players are. We have even heard rumblings it wants its own state-controlled operating system to provide an alternative to iOS and Android. If there is any market where this could work, it would be China and, if the state wills it, it is certainly a strong possibility.

It is this tightening by the nation state that is contributing to this great tech wall, making it increasingly difficult for foreign brands to enter China and compete. The wall may also be playing a key role in keeping Chinese companies from leaving China, at least for now.

China remains a market unto itself. It is big enough that companies can make incredible fortunes and never leave the country. The unique environment providing success in China has yet to lead to any major Chinese OEM breaking out into a market outside of China or the surrounding SE Asia region. While a small percentage of Chinese OEM exports are making their way to India, it is the only other market with even a hint of success, and only at the extreme low end. The Chinese smartphone brand strategy there is to dramatically undercut the prices of the locals and Samsung and hope for the best. This is a hard-fought approach, and Chinese vendors playing the razor-thin-margin game may find themselves out of business due to its lack of sustainability. However, if they can succeed at establishing their brands, and this should be the ultimate goal, it opens the door to other categories where, collectively, they hope to see better overall margins.

Chinese consumer tech companies are learning to scale in what is possibly the hardest market in the world to scale in, given its sheer size and costs. They are not just perfecting their ability to scale in China but to do so on small margins. This is why they seem poised to meet the needs of a global population at aggressive price points. But we are still waiting for a Chinese brand other than Huawei to achieve any kind of significant volume outside of China, particularly at prices above $200 USD.

My core thesis for Chinese consumer tech companies is they will follow a path like their predecessors from Asia — Japan and Korea. But brand development and brand building are still gaping holes in Chinese OEM strategies, ones I’ve yet to see them take seriously enough. As I’ll share in my piece on Monday, these companies will live and die not by the design of their technology products or the price at which they sell, but by how strong their brands become.