Samsung’s Galaxy Note7 Investigation becomes the Cornerstone for Improved QA

Samsung just hosted a press conference in Korea to share the findings of an investigation into what caused several Galaxy Note7 smartphones to catch fire. You can find all the details of the findings here but, in summary, there were two distinct battery issues, from two different manufacturers, that caused the positive and negative electrodes to touch.

Getting to the root cause of the issue was paramount but what we learned from this process has ramifications, not only for Samsung, but for the industry, because lithium-ion batteries are not going away anytime soon. The actual investigation process Samsung went through over these past few months would have been quite difficult for a manufacturer without Samsung’s scale, capital, R&D facilities, and workforce. Dedicating 700 researchers to evaluate 200,000 smartphones and 30,000 batteries in a newly built testing facility is dedication.

Of course, a lot was on the line here for the world’s leading smartphone maker. Trust of both users and employees was at risk and winning that trust back was paramount.

Winning Back Trust that Samsung will Continue to Innovate

In early October, we at Creative Strategies conducted a study to assess the US smartphone market. Among the areas we wanted to evaluate was the impact, if any, the Galaxy Note7 incident had on the Samsung brand and its smartphone business. We were bullish then, and we are bullish now, that Samsung will recover from the Note7 recall. Only 28% of US Android owners said the Note7 caused them to have a more negative opinion of the Samsung brand. Numbers were even lower among Samsung owners.

Consumers are generally quite forgiving and have a relatively short memory. The car industry has seen several recalls over the years, yet consumers continue to buy. The mobile industry has also seen recalls but nothing to the extent of the Note7. Of course, what made the Note7 such a test case is how passionate its users are and how unwilling they were to give up their units, pushing Samsung and carriers to go the extra mile to get the phones back.

Samsung was quick to take responsibility and step into action. Communication is where the smartphone leader could have shown more clarity. Whether due to cultural differences in communication styles or to the complexity of coordinating with the Consumer Product Safety Commission, carriers, and retailers, Samsung’s messaging was not as direct as it could have been. Digital messages, however, were pretty clear, from warnings displayed every time the phone was charged, to limiting the charging capacity of the phone, to ultimately bricking it.

Samsung, like any vendor in any sector that has ever had a recall, cannot promise its products will never again suffer from a malfunction. What can be done, however, is to show the necessary steps have been taken to limit the chance of that happening again.

What is even more important when we are talking about a market leader, especially one that has gained that position by adopting new technologies early, is to show its innovation streak will not be limited by fear. Samsung must show consumers it has set in place checks and balances that will allow it to continue to bring new technology, new designs, and new features to market in a safe and effective way. The new 8-point battery safety check Samsung will implement going forward is an important step in recognizing that innovation should also extend to QA, testing, safety, and manufacturing processes.

A Market Leader Acting like a Leader

What also made the Note7 recall unusual is that the issue involved several parties: Samsung and two battery suppliers. While we do not know the names of the suppliers, it is safe to assume they are not exclusive to Samsung. The use of lithium-ion batteries is also not limited to Samsung or these two suppliers.

Samsung’s President of Mobile Communications Business, Mr. DJ Koh, stated during the press conference that, during the investigation, the researchers filed several patents in battery technology, patents that will be shared with the industry. We would need more details to understand the significance of these patents but this is the kind of action we would expect from a market leader, especially one that has a pretty substantial battery business.

Despite the many stories that broke on Friday about Samsung putting the blame on its suppliers, I did not hear that in the press conference. Although I am confident Samsung will require changes in the QA processes implemented by its suppliers, the focus of the messaging was centered on the changes Samsung will implement going forward, including the appointment of a battery advisory group. As much as there is skepticism around how two different suppliers could have two independent battery issues, I do not believe Samsung cut corners in bringing the Note7 to market. As the industry pushes more designs and features and as users push the capabilities of these devices, making sure all of that can be done in a safe manner is paramount.

Innovation needs to involve all aspects of the production process and Samsung is making this point very clear. While adding steps to the process adds cost and time, I expect Samsung to be able to integrate the new steps without adding considerable development time or cost onto new products.

What is Next?

I had initially thought Samsung should move on from the Note franchise and deliver a different product with similar capabilities. After months of hearing countless airport announcements referring to the banned phone as the “Galaxy Note7”, “a Samsung phone”, “the Galaxy phone” and anything in between, I no longer think the Note8 would suffer as much as I initially thought. Better put, anything that comes after the Note7 will suffer equally, whether it is related to it or not.

Samsung apologized, provided answers, and offered solutions. What remains to be done is to make sure users who returned their Note7 receive the phone they want and a little extra love from Samsung. If indeed there will be a Note8 on the market in 2017, there is a lot Samsung can do to butter up those users, from incentives on upgrades to limited editions to early access.

While I can already read the headlines referring to the next Galaxy phone as “the one that hopefully will not blow up” or “not as hot as the Note7”, I am hoping we will move on — like most consumers will.

Moving Toward Our Augmented Future

This week, I attended the first-ever AR in Action conference at the MIT Media Lab in Cambridge, Massachusetts, where an extensive list of current (and likely future) tech luminaries talked about the past, present, and future of Augmented Reality. There are plenty of skeptics who doubt the viability of AR and Hollywood-produced visions of the technology set an awfully high bar. I’ve long felt AR will become a crucial technology; after spending time with this group, I’m even more convinced of this. It is not a matter of if, but when.

John Werner, known for the TEDxBeaconStreet events in Boston, orchestrated AR in Action so that the talks, panels, and demonstrations were all short and highly targeted. As a result, in the span of two days, I saw more than a dozen current and emerging use cases for AR, from both the academic and corporate worlds. There was much discussion about the potential ramifications of AR across numerous industries and there were many technology demonstrations. Finally, I had the opportunity to test out the segment’s hottest new hardware: the Meta 2 (it did not disappoint).

Frankly, the volume of information I absorbed will take weeks to process, but a few key takeaways follow.

Plenty of Companies are Already Testing AR
Last year I wrote about an IDC survey that showed US IT decision makers were already looking at AR for their business. One of the key platforms for commercial AR is Vuforia, which PTC acquired from Qualcomm in late 2015. During the conference, PTC’s CEO James Heppelmann talked about the intersection of the Internet of Things and AR and noted that PTC now has thousands of customers in active pilots of AR technology, primarily on smartphones and tablets. PTC also says more than 250,000 developers are using Vuforia. During day two, PTC’s Mike Campbell showed how to create a working piece of AR software—tied to a real-world object (in this case, a coffee maker)—in the span of about 15 minutes.
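For readers curious what it takes, at a conceptual level, to tie software to a real-world object, the sketch below shows the basic image-target idea that marker-based AR tools rely on: match features from a reference photo of the object against the live camera view and, if enough of them line up, anchor digital content there. This is a minimal illustration using OpenCV in Python, not the Vuforia tooling Campbell demonstrated; the filenames and thresholds are assumptions for the example.

```python
# Minimal sketch of image-target recognition (the core idea behind marker-based
# AR), NOT the Vuforia API. Filenames and thresholds are illustrative.
import cv2

reference = cv2.imread("coffee_maker_reference.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe distinctive keypoints in both images.
orb = cv2.ORB_create(nfeatures=1000)
ref_kp, ref_des = orb.detectAndCompute(reference, None)
frame_kp, frame_des = orb.detectAndCompute(frame, None)

# Match descriptors; Hamming distance suits ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(ref_des, frame_des), key=lambda m: m.distance)

# If enough strong matches line up, the target is "recognized" and digital
# content could be anchored at the matched location in the frame.
good = [m for m in matches if m.distance < 50]
print(f"target recognized: {len(good) >= 25} ({len(good)} good matches)")
```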

A Few Organizations are Moving Beyond Pilots
Patrick Ryan, the engineering manager at Newport News Shipbuilding, discussed the rollout of AR at his 130-year-old company. At present, the firm has completed over 60 industrial AR projects and it is currently working on the rollout of permanent shipbuilding process changes using AR. On stage, Ryan showed a video and talked about using AR to facilitate such seemingly mundane tasks as painting a ship. Mind you, we’re talking about painting aircraft carriers. NNS workers are using AR on connected tablets to visualize parts of the ship before they’re completely built to eliminate errors and decrease waste during the process.

During the discussion on AR in museums, the panel—Giovanni Landi, Rus Gant, and Toshi Hoo—noted museums have been using augmented reality, in the form of audio guided tours, for decades. Many museums have begun experimenting with head-mounted AR to bring staid museum exhibits to life for visitors. Panel members noted that, as the prevalence of mobile AR increases, with more visitors walking in the door with capable devices, the opportunities to utilize the technology will also increase. One of the key ways they expect to use AR in the future is through the digitization of rare objects, which will allow museums to “show” a far larger number of items than can be physically displayed to the public.

AR is More Than Just Visual
Numerous speakers talked about the fact there are more ways to augment reality than through visual systems, including auditory and touch. Intel’s Christopher Croteau, GM of Head Worn Products, talked about his company’s product collaboration with Oakley. The Radar Pace is a set of glasses but there is no screen—all interactions occur through voice commands and audio feedback. The glasses, introduced in October, provide real-time feedback for runners and cyclists without visually distracting them. In addition to a spirited talk on the potential of AR technologies, Croteau also presented Intel’s forecast of the market stretching all the way to 2031. Like most forecasters, Intel sees the near-term opportunity for AR in the enterprise. But, by 2027, it predicts consumer shipments will move ahead of commercial and, four years later, the former will outship the latter by a 4 to 1 margin.

The Right Interface is Crucial
There was a great deal of discussion about the challenges (and folly) of bringing legacy interaction models to AR but not a lot of consensus on what the right approach should be. One thing is clear: hand tracking and voice technology are both likely to play crucial roles but both have a long way to go before they’re ready for mainstream users. The panel on haptics was also enlightening, with executives from firms such as Ultrahaptics and Tactai discussing the critical role they expect touch to play as AR evolves.

More to Come, Exciting Times
The downside to an event like AR in Action is a person can only attend one track at a time (there were three running concurrently on both days). The upside is event organizers recorded everything, which means, hopefully in the near future, I will get a chance to watch all the tracks I couldn’t attend in person. Just as important, Werner made it clear this was just the first of what he expects to be many meetups of this kind, which I think is a good sign for this nascent but incredibly important market.

Unpacking This Week’s News – Friday, January 20th, 2017

Android One to Come to the US This Summer – by Carolina Milanesi

According to a report from The Information, Android One is supposedly coming to the US this summer. If you are not familiar with Android One, it was created by Google in 2014 as a software and hardware standard for Android in emerging markets. Initially focused on markets such as India and Pakistan, Android One was embraced by up and coming vendors such as Micromax, Spice and Karbonn. In 2015, Android One smartphones popped up in Africa and the Middle East as well. Google tests all Android One phones to ensure they deliver speed, long battery life, and settings that help with data management. In the US, rumors have it that Android One smartphones could cost between $200 and $350 and that Google would be partnering with LG to bring the devices to market.

This is an interesting move as it might signal a slight change in focus for Google. With the most recent Nexus products and Google’s Pixel, it was clear Google was interested in capturing the higher end of the market. Pixel, however, is a departure from the co-branded Nexus approach and puts Google in more direct competition with its partners – Samsung in particular – for the most valuable segment of the consumer market. Products such as the Nexus 5 did, in the past, target the mid-tier with prices around $450 but, sold through a limited channel, they did not have the impact on overall sales both Google and LG were hoping for.

Moving down in price and partnering with more manufacturers for the US, a market critical to the expansion or survival of many vendors, might allow Google to get a “purer” Google, not just Android, experience into the hands of consumers.

If the future battlefield shifts from mobile to AI, overall data acquisition becomes more important and, with that, so does higher market penetration.

This would be a much different game than Nexus — a limited channel and one device at a time in the market. In order for Android One to make a difference to Google in the US, it will have to be a program that brings devices sold through direct and indirect channels. With new names such as LeEco, Honor, and BLU, there are many possible partners that could be willing to deliver hardware according to specs in exchange for marketing dollars and support. What these vendors will have to consider is how much their own ecosystem matters to them. Huawei might be willing to “sacrifice” the Honor brand to Android One because it has Huawei as its tier one brand. A name like LeEco, which prides itself on the overall experience, including content and ecosystem, might be more reluctant.

For consumers, more affordable Android devices with a more certain software upgrade path and a degree of guaranteed quality and performance are certainly a good thing. That said, prices will have to be really aggressive to persuade first-time smartphone buyers, or deliver greater value at an aggressive price to attract current smartphone users who, in the past, have not embraced products such as the Moto G in great numbers. The relatively limited success of Android One in emerging markets cannot necessarily be used as a measure of its potential in the US market, as Google services in those markets are not as important to consumers and there are plenty of lower-end Android devices that offer what consumers want.

Tesla Cleared of Fault in NHTSA Crash Probe – By Bob O’Donnell

The National Highway Traffic Safety Administration (NHTSA) announced on Thursday that Tesla was cleared of wrongdoing that could have led to a recall of its Model S cars with Autopilot software. The six-month-long investigation was triggered by the well-publicized death of a Model S driver whose car slammed at full speed into a tractor trailer truck that was crossing the highway in front of it.

The somewhat surprising verdict relieves Tesla of a potential cloud hanging over it but additional commentary from NHTSA suggests the topic of autonomous and semi-autonomous driving will be closely watched for some time. In particular, there are still questions about the definitions of autopilot vs. semi-autonomous driving vs. fully autonomous driving. The lingering concern voiced by NHTSA officials, as well as other car makers and industry observers, is that most consumers don’t understand the differences or the implications of what each of those terms means, particularly with regard to the amount of attention required of the driver.

NHTSA acknowledged the Tesla owner’s manual does point out the Autopilot features on a Model S are semi-autonomous and require that the driver maintain complete control of the car even when Autopilot is engaged. However, the NHTSA report also pointed out many consumers don’t necessarily read those manuals, so the necessary information is “not as specific as it could be.” In fact, I’d be willing to bet that, if you polled Tesla owners—let alone the general driving public—about what autopilot features can do, very few would say they require the driver to maintain complete control.

The crux of the dilemma is that, once drivers do give up control of a vehicle, studies find it can take 15 seconds or more to really recognize a situation and re-take control. The NHTSA report suggested the driver in this incident had at least 7 seconds in which to respond because the Autopilot functioned in the manner it was supposed to (hence, the lack of a recall) — yet the driver didn’t respond. Unfortunately, we’ll never know exactly what happened in this particular case but legitimate concerns about semi-autonomous driving capabilities still exist in my mind, despite the NHTSA ruling.

On a big picture level, there are clearly potential safety benefits from some of the semi-autonomous driving features Tesla has implemented on the Model S. Indeed, the NHTSA report even cited a reduction in crashes because of those features. Nevertheless, because of the confusion around what Autopilot and semi-autonomous driving really means, I’m afraid this won’t be the last we’ll be reading about this issue.

Apple Updates Logic Pro X and GarageBand for iOS – by Jan Dawson

One of the criticisms of Apple that has become loudest lately is that it is increasingly ignoring the professional creatives who use Macs to do their work. This theme has two parts. The first is that Apple hasn’t done enough to update the Mac lineup to serve these professionals. The focus here is on the new MacBook Pro, which doesn’t have as much memory as some pro users would like (though plenty for most tasks), and on the fact Apple hasn’t updated some of its desktop Macs in several years. The other theme is software. Some have said Apple hasn’t updated its professional apps for creatives frequently enough. On both fronts, the allegation is Apple is neglecting pro users and perhaps will no longer attempt to serve them, choosing instead to focus on mainstream users.

This week’s announcement of an upgrade to Logic Pro X, and an accompanying upgrade to GarageBand for iOS, should help address that second concern around software. Apple had already provided a big update to Final Cut Pro and the associated apps last fall in connection with the new MacBook Pro with Touch Bar. Now it’s upgraded its big professional audio app in a similar way. That rounds out the major apps for professionals with big upgrades, which have generally been well received by the base, and should help neutralize some of the criticism of Apple in this area.

But this week’s announcements also suggest Apple no longer sees the Mac as its only platform for creatives – the GarageBand upgrade offers integration with Logic Pro X and will allow creatives to do work on their iPads which can then be imported back into a Mac workflow that leverages Logic Pro. A narrow focus on the Mac as the hardware and professional apps as the software for creative professionals misses the fact many professionals also use iPads and iPhones for work in at least certain circumstances and workflows need to incorporate these devices too. For workflows outside the remit of Apple’s own professional apps, it has partnered with IBM, Deloitte, and others to achieve similar objectives, combining Apple’s various devices and making clear that even those running iOS can be used for “serious work”.

None of this, of course, will do anything to assuage those hardware concerns about the Mac lineup. On the other hand, if Apple does update the Mac Pro in the first few months of this year, that will help address the biggest criticisms on the hardware side and perhaps help to tone down the overall rhetoric about Apple abandoning creative professionals in search of the mainstream user. In the meantime, the MacBook Pro, the iMac, and even older Macs will continue to support many professionals perfectly adequately, especially when paired with these upgraded pro apps.

Apple Looks to Make iPhones in India – by Ben Bajarin

Rumors had been swirling for some time that Apple would make a move, with its manufacturing partners, to start making portions of the iPhone in India. As part of an economic stimulus, India sets regulations on foreign companies that want to compete in the country; for hardware companies, the rules require roughly 30% of components to be sourced locally. Bloomberg reported yesterday that talks had advanced but Apple is looking to negotiate on elements of the tax and duty structure.

India is a key market for Apple from a growth standpoint, but it is also not China, as I’ve explained many times before. That being said, despite no official retail presence and price points that seem out of reach of most Indian consumers, the iPhone is the second most owned smartphone brand in India. Even though its market share is small, the market is immensely diverse.

From a study we did at Creative Strategies toward the end of 2016, we discovered that Apple has a strong brand in India, with consumers ranking the company as the global leader when it comes to smartphones. But we also discovered that Indian consumers were not driven by price alone, as many have come to believe. While 30% of Indian consumers did list the iPhone’s cost as a primary reason for not purchasing one, 25.2% listed the lack of customer care centers and 23.8% listed challenges around downloading (really sideloading) localized content and apps. When we looked at some potential catalysts that could spur more consumers to buy an iPhone, we were surprised again to find the dominant factor was not a lower price but tighter integration and bundling of local services and content. All in all, we feel there is ground for Apple to gain in India if it plays its cards right.

While we must be wise about the extreme differences between China and India, India is the seventh largest retail market in the world and, over the next ten years, could jump up to number 3 or 4. It is undoubtedly a key market for any global player, which is why it makes sense for Apple to jump through hoops with Indian regulators to compete there, while doing so in a way that protects its interests as well.

Two Possible Futures for Amazon’s Alexa

Amazon’s Alexa voice assistant was clearly the star of CES this year. No single consumer electronics device dominated coverage but lots of individual devices incorporated Alexa as their voice assistant of choice. The announcements ranged from Echo clones to home robots to cars and smartphones. It was clear Amazon had entirely captured the market for voice platforms. Only one or two integrations of the Google Assistant were announced and those are both future rather than present integrations.

It would be easy off the back of all this to say Amazon had won the voice assistant battle once and for all but I actually see two possible futures for Alexa, with very different outcomes for Amazon and its many partners.

Future 1: Amazon continues to dominate

The first possible future for Alexa is one where the current trends mostly continue and even accelerate. Amazon’s own Alexa-based products continue to sell well, with Dot probably taking a greater share of sales going forward relative to Echo (or Tap), selling into the tens of millions of installed units in the next couple of years. On top of that, the adoption by third parties that was so evident at CES continues, with even more devices offering integration. Importantly, Alexa starts to make an appearance in Android smartphones, making it as pervasive and ubiquitous as existing smartphone-based assistants, possibly even making an appearance in another round of Amazon smartphones.

What we end up with in this scenario is a massive ecosystem of devices which all offer users access to Alexa and its functionality. These devices perform their functions well, recognizing voice commands effectively, responding appropriately, and adding value to users’ lives. Because they’re all part of the same ecosystem, they work the same way — commands issued through one are reflected on the others. Amazon benefits from owning a massive new user interface and platform which can be used not just to push its e-commerce sales but to take an increasingly large share of media and content consumption across video, music, audiobooks, and more.

This scenario also assumes major competitors either don’t launch competing products or those products fail to take off. Google has, of course, already launched its Home device but, thus far, sales are far lower than Amazon’s, handicapped by a lack of awareness and the lack of a major e-commerce channel. The Google Assistant, meanwhile, should be the default option for Android OEMs in all their devices but the way Google has reserved it for its own hardware has held it back, perhaps fatally, as a third-party voice platform. If that doesn’t change, if Microsoft’s Windows-based Cortana strategy falls short, and Apple’s reticence to participate in this market continues, Amazon dominates with its devices and those third party products using its ecosystem.

Future 2: Cracks start to appear in the Alexa ecosystem

The secrets of Echo’s success

I want, though, to paint an alternative future for Alexa, one which is less rosy and more complex. Amazon’s genius in launching the Echo and Alexa was to pick a blank slate rather than an existing category for its experiments with voice. Instead of competing with another smartphone-based voice assistant, Amazon chose to compete in the home, with its relative quiet and better internet connectivity, and a device that was optimized for specific use cases: fantastic voice recognition and great audio output, even from across the room. That had two major advantages: first, it wasn’t going head to head with powerful entrenched competitors and second, it could deliver far better performance around voice recognition than smartphone-based systems.

The Echo performs fantastically well at what it does. Its voice recognition is indeed very good, inviting highly favorable comparisons to Siri and the like. It’s this success in providing great voice experiences that has propelled sales of its own devices and prompted other companies to build their own as well. The assumption on all sides is that it’s Alexa that powers these phenomenal experiences and that the Alexa Voice Service will power similar experiences on other makers’ devices.

Amazon’s limited control over Alexa devices

However, one look at Amazon’s guidelines for those wishing to incorporate Alexa Voice Service into their devices should prompt at least some skepticism. Echo and Echo Dot famously have a 7-mic array built into the top of the device, with beam forming, enhanced noise cancelation, and more helping to ensure the device does a phenomenal job of picking up your voice from up to 20 feet away. But look at the minimum specs for Alexa-powered devices and you quickly realize many of these devices won’t match up on hardware – the minimum standard for microphones is just one and additional technologies like noise reduction, acoustic echo cancellation, and beam forming are entirely optional.

Also optional is “wake word” support – in other words, the always-listening function that waits to hear Alexa (or another word of the user’s choice) and then springs into life. The Amazon Tap doesn’t offer this feature (and was hammered in reviews for it) because the “across the room” use case is a key part of Echo’s appeal. Even when a wake word is supported, Amazon only requires a minimum of one microphone for near-field recognition and just two for far-field (20 foot) recognition.
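To make concrete what optional wake-word support means for the user experience, here is a minimal control-flow sketch in Python. It is not the Alexa Voice Service SDK; detect_wake_word, capture_utterance, and send_to_assistant are hypothetical placeholders standing in for an on-device keyword spotter, audio capture, and the cloud round trip.

```python
# Hypothetical sketch contrasting always-listening (wake word) devices with
# push-to-talk devices such as the Tap. None of these functions are part of
# the real Alexa Voice Service SDK; they are placeholders for illustration.
import time

def detect_wake_word(audio_chunk: bytes) -> bool:
    """Placeholder for an on-device keyword spotter listening for 'Alexa'."""
    return False

def capture_utterance() -> bytes:
    """Placeholder that would record audio until the user stops speaking."""
    return b""

def send_to_assistant(audio: bytes) -> None:
    """Placeholder for streaming the utterance to the cloud assistant."""
    print(f"sent {len(audio)} bytes")

def listen_loop(wake_word_supported: bool, button_pressed=lambda: False):
    while True:
        if wake_word_supported:
            # Always-listening: hands-free, across-the-room interaction.
            if detect_wake_word(b""):
                send_to_assistant(capture_utterance())
        else:
            # Push-to-talk: every interaction needs a physical trigger.
            if button_pressed():
                send_to_assistant(capture_utterance())
        time.sleep(0.05)
```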

Where this second future scenario diverges from the first is that the sheer range of Alexa-enabled hardware starts to put many devices into the market that don’t have nearly the appeal of Amazon’s own. Lenovo’s Echo clones appear to be using an 8-mic array and may very well perform at exactly the same level or better, but the Huawei Mate 9 smartphone, which is due to incorporate Alexa later this year, has just 4 microphones and the device obviously wasn’t built with optimal voice recognition in mind. In a rush to get products to market, we’ll see many vendors putting out devices with the bare minimum specs and prominent Alexa-related branding.

All it will take at this point is a handful of terrible reviews for Alexa-powered speakers and other devices and the Alexa brand will quickly become tarnished. At that point, Amazon’s admirable openness with the Alexa tools may come to be seen as a huge mistake because it has set so few limits on what can be done with the service and its brand. Even if third parties are committed to providing the best possible experience, voice recognition on a smartphone or other smaller devices is likely never going to match up to the Echo’s quality, which means the true Alexa experience will likely remain elusive outside of the home. Any perceived quality advantage will, therefore, fade as well, making Alexa a lot less appealing.

More compelling competitors

Meanwhile, competitors will move past their slow start in responding to the Echo and Alexa and will begin producing more compelling alternatives. I see no reason why competitors shouldn’t be able to build devices which perform at least as well, in terms of voice recognition, as Echo given the same parameters (home use, large devices, mic arrays designed for voice recognition). Indeed, Google’s Home has already demonstrated there’s no special magic there. In addition, players like Google and Apple have one huge advantage – they already own massive installed bases of hundreds of millions of devices running their operating systems and integrated voice assistants.

Google’s early misstep in limiting the Google Assistant to its own devices will be overcome in the next few months as it makes it available to Android OEMs more broadly and, at that point, its Home device will become a lot more compelling. Apple, too, has the potential to do really interesting things in the home speaker space should it choose to do so, given the increasing scope and availability of Siri and its AirPlay audio and video casting technology. Again, the appeal of using the same assistant everywhere, tightly integrated into devices, will be a big advantage over Amazon’s looser Alexa ecosystem.

Which future plays out?

On balance, I’m inclined to think the future will look rather more like the second scenario I’ve painted than the first. That is to say, I think Amazon’s advantages in the field of voice assistants are mostly temporary and, to some extent, illusory. Competitors will catch up fast in the home and exceed its capabilities outside it. That doesn’t mean Amazon can’t build a decent business with a more limited scope of opportunity around its first party devices and a handful of really compelling third party devices in an ecosystem but I suspect its future will be a lot less bright than its present in this space.

What “Hidden Figures” Can Teach Us about AI

This weekend, I finally watched Hidden Figures. I took my 9-year-old daughter with me to witness how instrumental women of color were to the success of several NASA missions — something that historically has been associated with white male achievement. If you have not seen it yet I highly recommend it. The acting is superb and the story offers so much education, both on race relations and women in the workplace. What I want to focus on is possibly something the director and the cast never imagined could matter. I do, not because it is the most important aspect but simply because it is very relevant to the tech transition we are experiencing right now.

All the talk surrounding artificial intelligence is as much about the technology itself as it is about the impact its adoption will have on different aspects of our lives: business models in the automotive industry, the insurance business, public transportation, and search and advertising, as well as more personal consequences for human-to-human interaction, sources of knowledge, and education. Change will not come overnight but we had better be prepared because it will come.

New Tech Requires New Skills

Change came in 1962 for the segregated West Area Computer Division of Langley Research Center in Virginia, where the three women who are the main protagonists of the story worked. Mathematician Katherine Goble and de facto supervisor Dorothy Vaughan are both directly affected by new tech rolling into the facility in the form of the IBM 7090. If you are not familiar with the IBM 7090 (I was not before this weekend), it was the third member of the IBM 700/7000 series of computers designed for large-scale scientific and technological applications. In layman’s terms, the 7090 could perform in the blink of an eye all the calculations that took the computer division hours. Dorothy understood the threat and, armed with her wit and a book on programming languages, was able to help program the IBM 7090, taught her team to do the same, shifted their skills, and saved their jobs.

I realize part of this story might be for the benefit of the screenplay and the world is much more complicated. However, I do think that what is at the core is very relevant — the creation of new skill sets.

Although AI has the potential to affect not only manual jobs that can be automated but also, theoretically, jobs that require learning and decision making, the immediate threat is certainly to the former.

We focus a lot, and rightly so, on the job losses AI will cause but we have not yet started to focus on teaching new skills so such losses can be limited. As I said, AI will not magically appear overnight but we would be fools to think we have plenty of time to create the skills our “augmented” world will require, from new programming languages to new branches of law and insurance, QA testing, and more. Empowering people with new skills will be key not only to having a job but also to keeping our incomes at pace with the higher costs this new world will entail. Providing a framework for education is a political responsibility as well as a corporate one.

Who Will We Trust?

The IBM 7090 replaces Katherine when it comes to checking calculations but, just as Friendship 7 is ready to launch, some discrepancies arise in the electronic calculations for the capsule’s recovery coordinates. Astronaut John Glenn asks the director of the Space Task Group to have Katherine recheck the numbers. When Katherine confirms the coordinates, Glenn thanks the director saying: “You know, you cannot trust something you cannot look in the eyes.”

I don’t know if Glenn actually said that or if it is a screenplay liberty but, when I heard it, I immediately thought of AI. Who will consumers trust? Many think AI is not going to be any different than any prior technology but I believe such thinking undersells where AI could actually take us. Autonomous cars are the scenario we most often refer to. We might trust the car to park itself or to alert us if a car is in our blind spot. We might even try a semi-autonomous setting on an empty motorway. But are we ready to trust the car and take our eyes off the road and our hands off the wheel? How will brands earn our trust? Will it be the number of accidents they are involved in? The assurance that, in case of an accident, their computers are programmed to save whoever is in the car?

What if we changed scenarios and talked about a medical diagnosis? Today, we tend to pick our doctors and specialists based on our insurance’s recommendation, a friend’s recommendation, or even the comments on Yelp. Bedside manner, courteous receptionists, and short wait times all play a role. But, for anything more serious, what it all boils down to is the track record of correct diagnoses and lives saved. Will we trust a machine alone? Or will we still want a doctor, who we can look in the eyes, coupled with the machine? A recent White House report mentioned by Fortune talks about the idea of linking human and machine. While it does so as part of the discussion of job losses, I think the formula also applies to our human nature of building trust with another human being.

The same issue of trust will also apply to other scenarios where not our life but our privacy and security could potentially be in danger. Here too, trust will matter. Who do we trust with our digital assistant, with our home automation? When life is not at risk, at least not directly, I feel consumers will show more flexibility, especially when the full implications are not grasped and convenience and possibly price are what matters the most.

In both cases, though, I strongly believe AI will drive consumers to consider more than technology alone and look for traits in brands that have been more traditionally associated with humans: honesty, empathy, loyalty, and service.

Lenovo’s Yoga Book could be the Perfect Design for Future Tablets and Smartphones

Over the Christmas holidays, I began testing a new product from Lenovo called the Yoga Book. I tested both the Android and Windows versions and, while testing these two-sided tablets, I began to realize this design is probably one of the most innovative mobile products I have seen in the past 5 years.

Here are some pictures of the Lenovo Yoga Book:


As you can see, both sides of this tablet have a screen. One side is the video display while the other is a glass screen used as a virtual keyboard and writing surface. It is remarkably slim and uses “watch band” hinges to tie these two screens together. It is also very light and very versatile.

I am not sure if you know this but the original iPhone design came out of an actual tablet project Apple did about three years after the iPod was released. When the engineers showed Jobs this prototype tablet, he asked if it could be done with a smaller screen and in some type of smartphone form factor. Thus, the iPhone was born.

As I used the Lenovo Yoga Book, I began wondering the same thing Steve Jobs did when he was shown the original tablet design. The Yoga Book has a great display and the second screen adds a lot of extra user interface touches that make it more versatile. Although I am still not proficient using the virtual keyboard, I have become appreciative of the role of the second glass display. Now, imagine if that design was shrunk to the size of perhaps a 5.5-inch smartphone and it actually had three screens on it.

In folded mode, it would look and act like a normal smartphone. But, when you open it up, it has two other displays that, when laid out, roughly triple the surface area of the smartphone and turn it into an 8- or 9-inch tablet.
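A quick back-of-the-envelope check of that claim, assuming 16:9 panels the size of a 5.5-inch phone screen (the numbers are illustrative, not from any vendor): two panels side by side give roughly a 7-inch diagonal, and three give roughly 9 inches.

```python
# Back-of-the-envelope geometry for a folding phone, assuming 16:9 panels.
# All figures are illustrative assumptions, not vendor specifications.
import math

diag = 5.5                                   # diagonal of one panel, inches
h = diag * 16 / math.sqrt(16**2 + 9**2)      # panel height ~4.79 in
w = diag * 9 / math.sqrt(16**2 + 9**2)       # panel width  ~2.70 in

two_panels = math.hypot(h, 2 * w)            # ~7.2 in unfolded diagonal
three_panels = math.hypot(h, 3 * w)          # ~9.4 in unfolded diagonal
print(round(two_panels, 1), round(three_panels, 1))
```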

This would take serious engineering chops to create but it could be the kind of design that delivers the next big thing and drives mobility to a new level.

Apparently, I am not the only one who thinks this is an interesting idea. Microsoft recently filed a patent for a foldable smartphone; although the design appears to have only two screens, the company clearly has a folding smartphone in mind.

Samsung has a similar concept in a smartphone called the Galaxy X. It has what appears to be three screens and is foldable. Some reports say this model could be on the market by the end of 2017.

What appeals to me the most about this design concept is that a smartphone could actually double as a tablet. Today, I take with me both an iPhone and an iPad as each is used for different functions. The good news is the OS is the same across both and, with Apple’s Continuity in place, working with and between both form factors allows me to have more versatility in the way I work and play.

While this concept is highly speculative on my part, I am convinced that, if such a device could be thin enough to be a solid smartphone and, when unfolded, serve as a great tablet too, it could become the design that delivers the innovation the market needs to explode again and drives all types of new use cases.

Q4 2016 Earnings Season Preview

We’re about to head into earnings season once again, with Netflix reporting this Wednesday and most of the big tech companies reporting next week and the week after. With that in mind, here’s a preview of what I’m expecting and what I think we should be looking for as each of the big companies report.

Alphabet

With Alphabet, I’m looking for two big things: further signs of improving the financial performance in the Other Bets segment following the streamlining that’s been going on there, and any indicators of how Google’s new hardware has performed. Operating losses were down year on year in the Other Bets segment for the first time in Q3 2016 so we may see another quarter of narrowing losses. From a hardware perspective, any revenues will be reported under Google’s “Other” segment, which houses all non-advertising revenue from the Google business, including Google Play app and content sales, enterprise services sales, and various other revenue streams. That’s already a fairly big chunk of revenue in total – $2.4 billion in Q3 – but even a million or so Pixel sales would generate around $600-700m in new revenue for this segment so the growth here should be noticeable. Beyond these two items, I’m also curious to see whether all the growth in ad revenues at Google continues to come from its own sites rather than third party sites – that trend has been evident for a while now.

Amazon

Based on a couple of press releases of holiday sales Amazon put out earlier this month, it looks like it had a very healthy fourth quarter. I would guess e-commerce sales were up 20-30% while AWS likely also continued to grow revenue and profits strongly. Amazon is starting to look unstoppable on the e-commerce side, capturing almost all the growth in e-commerce while expanding into new verticals and even opening more physical retail stores. The hardest thing with Amazon is simply not knowing in which quarters it will choose to reinvest profits versus passing them on to shareholders – just as it had investors trained to expect higher margins following many years of breaking even, it produced lower profits again last quarter and its guidance for Q4 was ridiculously broad. But, on a long-term basis, Amazon looks to continue its stellar run.

Apple

Apple promised modest year on year growth this quarter in its guidance but it has an extra week of sales this quarter compared with the same quarter last year. That has made some analysts assume the growth is based on that extra week rather than underlying trends. Apple tried to push back against that idea on its earnings call and has all but said it expects the underlying business to grow in the coming months, so that’s the biggest single thing to look for, not just this quarter but for the coming year. iPhone sales are obviously by far the single biggest factor – if there was year on year growth in shipments, that should help drive overall growth. But if there wasn’t, that reinforces the narrative that has emerged over the past year that Apple will struggle to grow going forward as it has in the past. Recovering Mac sales, driven by the new MacBook Pro, as well as strong holiday sales of the various versions of the Apple Watch should help too, as will the record App Store revenue Apple touted in a press release earlier this month.

Facebook

Facebook is another of those companies, like Amazon, which seems to have been on a tear lately with no end in sight. But it’s also been providing stronger indications on its recent earnings calls that ad load is near saturation and will cease to be a major factor in driving revenue growth going forward. I’ve done some analysis on this point and concluded that, although this will lead to slower growth going forward, it will still probably see healthy growth relative to most of these other companies. It will shortly launch video ads in its Stories feature in Instagram, which offers a new venue for advertising and will help with overall ad load even as ad load is becoming maxed out elsewhere. But in addition, the growing user base, rising prices per ad, and other factors should still help drive high growth. I’m guessing management will also be asked about WhatsApp monetization on the call – Facebook suggested that was coming at its F8 conference last year (and there have been various hints of how this might happen) but we haven’t seen many details yet. In theory, this could be a useful additional source of ad revenue growth — but WhatsApp’s management has eschewed advertising as a business model.

Microsoft

Microsoft finally seems to be coming out of a long period of declining revenues and returning to growth, with essentially flat revenues year on year last quarter. It helps that the phone business is now so small that even big year on year percentage declines don’t put too big a dent in dollar revenues. It’s also passed the one-year anniversary of the Windows 10 launch, which introduced deferred revenue and artificially depressed reported revenue. However, Surface revenue is likely to have been flat or down slightly in the quarter, with no big portable launch and only the relatively niche Surface Studio launching in the quarter, compared with previous year-end launches. The other major growth drivers should remain healthy, however. Microsoft’s various cloud businesses have been growing strongly, though the exact components can be hard to pick apart in Microsoft’s complex reporting structure. Negative PC market growth continues to be one of the biggest headwinds and, although the overall market declined slightly more slowly in Q4, it was still down year on year, which will have a knock-on effect on Microsoft.

Netflix

Netflix is a business that has to be seen in two parts with very different dynamics. The domestic business is massive and highly profitable but has recently seen significantly slowing growth, driven in part by increased saturation and in part by the recent price increases. Netflix forecasted a pretty healthy quarter of growth for the US business, so we’ll have to see how accurate that forecast is – it has struggled recently to get these numbers right. The international business, on the other hand, is catching up to the domestic side in terms of its total size thanks to its rapid growth (on the basis of the current run-rate, it’ll likely become the larger of the two later this year), but has been unprofitable. The loss per subscriber has been narrowing even as the total number of subscribers grows, so there’s a path to profitability here, and a number of individual markets are already profitable. But with massive investments in original content – $6 billion in 2017 – Netflix has to ensure its growth continues to keep revenues ahead of costs and start to turn a profit in its international business.

Samsung

Samsung has already released a surprisingly upbeat preliminary set of quarterly results but we’ll have to wait a few weeks to see the details. Revenue looks very healthy overall, apparently driven by strong performance in smartphones and semiconductors. The former is a surprise, given the Note7 recall and its impact, at least some of which fell into Q4, and which has had at least some effect on people’s perceptions of the Samsung brand, but a strong quarter here will help reassure investors the damage is limited. If semiconductors did contribute to the strong overall results, that will also be a positive sign because that segment has performed unpredictably over recent quarters after being an important growth driver for several years prior. Semiconductors has the highest margin of any Samsung division, so its performance is particularly important to overall performance.

Twitter

Twitter badly needs to show its recent efforts to improve user growth are working. Judging by the number of nagging emails from Twitter I received at my various accounts in late December, I’m guessing it’s doing everything possible to goose monthly active user numbers (which are based on the last month in the quarter) but, of course, it’s this kind of thing that makes MAU such a poor measure of true user patterns. Twitter continues to refuse to provide daily active user numbers, which suggests these would reflect poorly on the service. I’m expecting some modest growth in MAUs to be reported but those should be taken with a pinch of salt for the reasons I’ve just outlined. The other big problem Twitter has faced recently is its average revenue per US user hasn’t been growing nearly as well as in the past, so this is worth watching this quarter too. Engagement numbers have also been all over the place and Twitter needs to start showing more consistent trends here as well. I would expect the various live video efforts to be a big focus on the earnings call – the big question here is whether any of this is generating either better user growth or meaningful new ad revenue.

Inside the Mind of a Hacker

Writing about security is kind of like writing about insurance. As a responsible adult, you know it’s something you should do every now and then but, deep down, you’re really worried that many readers won’t make it past the second sentence. (I hope you’re still here.)

Having recently had the privilege of moderating a panel entitled “Inside the Mind of a Hacker” at the CyberSecurity Forum event that occurred as part of CES, however, I’ve decided it’s time. The panel was loaded with four smart and opinionated security professionals who hotly debated a variety of topics related to security and hacking.

Speaking to the theme of the panel, it became immediately clear that the motivations for the “bad guy” hackers (there was, of course, a brief, but strong show of support for the white hat “good” hackers) are exactly what you’d expect them to be: money, politics, pride, power and revenge.

Beyond some of the basics, however, I was surprised to hear the amount of dissent on the topics discussed, even by those with some impressive credentials (including work at the NSA, managing cyber intelligence for Fortune 500 companies and government agencies, etc.). One particularly interesting point, for example, highlighted that hackers are people too—meaning, they make mistakes. In fact, thankfully, apparently quite a lot of them. While in retrospect that seems rather obvious, given the aura of invincibility commonly attributed to hackers through popular media, it wasn’t something I expected to hear.

Another key point was the methodology used by most hackers. Most agreed that the top threat is from phishing attacks, where employees at a company or individuals at home are lured into opening an attachment or clicking on a link that triggers a series of, well, unfortunate events. Even with up-to-date anti-malware software and security-enhanced browsers, virtually everyone (and every company) is vulnerable to these increasingly sophisticated and tricky attacks. However, several panelists pointed out that too much attention is spent trying to remedy the bad situations created by phishing attacks, instead of educating people about how to avoid them in the first place.

Looking forward, the rapid growth of ransomware, where companies or individuals are locked out of their systems and/or data until a ransom is paid to unlock them, was one of the panelists’ biggest concerns. Attacks of this sort are growing quickly and most believe the problem will get much worse in 2017. In many cases, organized crime is behind these types of incidents and, with the popularity of demanding payment in bitcoin or other methods that are nearly impossible to trace, the issue is very challenging.

Another concern the panel tackled was security issues for Internet of Things (IoT) devices. Many companies getting involved with IoT have little to no security experience or knowledge and that’s led to some gaping security holes that automated hacking tools are quick to find and exploit. Thankfully, the group agreed there is some progress happening here with newer IoT devices but, given the wide range of products already in market, this problem will be with us for some time. One potential solution that was discussed was the idea of an IoT security standard (along the lines of a UL approval), which is a topic I wrote about several months back. (See “It’s Time for an IoT Security Standard”)

Another potential benefit could come from improved implementations of biometric authentication, such as fingerprint and iris scans, as well as leveraging what are commonly called “hardware roots of trust.” Essentially, this provides a kind of digital ID that can be used to verify the authenticity of a device, just as biometrics can help verify the authenticity of an individual. Both of these concepts enable more active use of multi-factor authentication, which can greatly strengthen security efforts when combined with encryption, stronger security software perimeters, and other common sense guidelines.
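For the curious, the sketch below shows the challenge-response idea at the heart of a hardware root of trust: the device proves its identity by signing a random challenge with a private key that, on real hardware, would never leave a secure element or TPM. This is an illustrative example using the Python cryptography package, not any particular vendor's attestation protocol.

```python
# Illustrative challenge-response device attestation, assuming the device's
# private key lives in tamper-resistant hardware (here it is just in memory).
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Device side: key pair generated at manufacture; public key enrolled with the server.
device_key = ec.generate_private_key(ec.SECP256R1())
enrolled_public_key = device_key.public_key()

# Server side: issue a fresh random challenge for each attestation attempt.
challenge = os.urandom(32)

# Device side: sign the challenge with the hardware-protected key.
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Server side: verify the signature against the enrolled public key.
try:
    enrolled_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("device identity verified")
except InvalidSignature:
    print("attestation failed")
```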

As the panel was quick to point out, there are few if any things that can be completely blocked from hacking efforts. Nevertheless, huge progress could be made in cyber security if companies and people would just start actually using some of the tools already available. Instead of worrying about solving the toughest corner cases, good security needs to start with the basics and build from there.

Why Tech Leaders can’t Succumb to a Presidential Bully Pulpit

Merriam-Webster explains the origin of “bully pulpit” this way:

Bully pulpit comes from the 26th U.S. President, Theodore Roosevelt, who observed that the White House was a bully pulpit. For Roosevelt, bully was an adjective meaning “excellent” or “first-rate”—not the noun bully (“a blustering, browbeating person”) that’s so common today. Roosevelt understood the modern presidency’s power of persuasion and recognized that it gave the incumbent the opportunity to exhort, instruct, or inspire. He took full advantage of his bully pulpit, speaking out about the danger of monopolies, the nation’s growing role as a world power, and other issues important to him. Since the 1970s, bully pulpit has been used as a term for an office—especially a political office—that provides one with the opportunity to share one’s views.

Roosevelt’s use of the term as an adjective and not a noun made “bully pulpit” acceptable for the time and, if the person using that pulpit uses it for good, the term can be an endearing one. However, I am not sure we can see President-Elect Trump in that light yet, given his history of “blustering and browbeating” people to get his way.

I took a call from a reporter last week who was asking me about Apple’s decision to build its servers in a single location rather than at each of the major data centers it has around the US and the world. This will be done in Arizona and the reporter asked if Apple did this to gain a better position in Trump’s eyes by doing the manufacturing in the US. All told, it will only add 10-20 jobs and I told the reporter this was more strategic and had nothing to do with wanting to gain favor with Trump.

But other companies, such as Ford and Carrier, have made decisions to move jobs from planned facilities outside of the US back to America. On the surface, it does appear Trump “bullied” them into doing it. It seems very clear to me that Alibaba’s Jack Ma, who met with Trump at Trump Tower and pledged to bring one million jobs to the US, had being in Trump’s good graces in mind.

Last week, Amazon announced they would add 100,000 jobs in the US. When this was announced, and because of Trump’s bully pulpit, I was asked by reporters if this decision was because of pressure from Trump or something more related to strategic growth.

I would hope it was a strategic decision but I have a sneaking feeling Amazon and many others do not want to rile Trump. What he says and does from his “bully pulpit” could hurt them during his time in office. Let’s be clear: I am 100% behind creating more jobs in the US, but I believe this should come as a result of great business conditions, innovation, a true need at these companies, and because it is strategic to their business growth. They should not be doing it because they were bullied into it. I am of the school that believes bullying companies into creating jobs may be a temporary fix. Unless it’s done with the right motive, conditions, and strategy, it will not deliver the fundamental change needed for these jobs to be long lasting.

I believe strongly the tech industry and companies should not succumb to the bullying tactics of President-Elect Trump in any way when it comes to the issue of strategic planning, growth, innovation, and even jobs.

That does not mean they should not want to work with him and, when necessary, lobby to influence Mr. Trump’s policies so he and his administration do not stand in the way of growing our tech economy. But, if any of their moves are done just to placate Trump, then they are building foundations that will crumble under the weight of forced motivations. Unless strategic to their growth, it will set them back, not move them forward.

In a recent piece I did for Fast Company, I outlined my involvement with a council of independent tech influencers that helped shape President Bush’s tech agenda. In the article, I suggested some of the types of councils I believe President Trump needs to help him understand tech and, more importantly, use them to help develop a tech agenda of his own that would benefit his economic goals and get these companies to help support an agenda that moves our industry forward.

I believe working with President Trump in a civil, proactive manner should be the goal of every tech company but not kowtowing to him because he bullied them into some action. The tech industry needs the resolve to stand up against any bully pulpit and only do what is right for them to grow their market. Anything less than that won’t have a lasting impact on them or our industry.

Unpacking This Week’s News – Friday, January 13th, 2017

HTC Introduces new High-End Smartphone Family – By Carolina Milanesi
On Thursday, HTC announced two new smartphones: the HTC U Ultra and HTC U Play. Positioned as the new flagship, the U line adds to, rather than replaces, the One line. HTC is clearly using these two products to reposition the brand and is using every buzzword and feature of the moment to do so: all-glass design, a second screen for notifications a la LG V20, no audio jack a la Apple (and Moto and LeEco), HTC Sense Companion (which is, of course, labelled as AI), and, most importantly, being all about U, the user. The marketing team had some fun with the press release, although I am not sure it will help you find all the specs.

HTC has always had great products, both from a quality and a design perspective. Sadly, over the past few years, that has not been enough to save it from declining market share, especially in markets such as the US where it used to be a strong number three player. This is because good design and quality are no longer enough to get consumers’ attention and money. Brand matters, and HTC over the years lost its sense of self. When HTC transitioned from an ODM to an OEM, it was regarded as the trusted brand of the professional user. Rather than capitalizing on that at the moment when professional users were getting the smartphones they wanted in the enterprise, HTC decided it needed to be cool for a younger segment, which led it to look at other opportunities such as Facebook and what became the HTC First. As the brand failed to win any favor among millennials, marketing went full steam ahead with the big Robert Downey Jr. investment. Sadly, another failure. From marketing, the focus shifted to services, also not something consumers saw as a differentiator. Finally, HTC scaled back on everything but the hardware just over a year ago, delivering a much cleaner experience.

With the U, I feel they are bringing everything back in one line: the good, the bad, and the ugly. The good remains the hardware. The bad is Sense Companion, which sounds like an assistant not even worthy of the name, coupled with exactly the kind of AI-washing I mentioned in my article this week. And the ugly is certainly the marketing.

Catchy lines will not sell phones, especially phones priced at $750. Such an early announcement for products that will not be on sale until after Mobile World Congress leads me to believe there will be more to see from HTC before these devices even hit the stores. I was hoping HTC was taking time to regroup and come up with a sustainable, long-term plan, especially after delivering the Pixel, which is so highly regarded. I was mistaken.

PC Shipments Stumble but Turnaround is Closer – By Bob O’Donnell
The PC market faced another challenging year in 2016, as both IDC and Gartner reported the fifth annual decline for the category in separate news releases this week. The numbers vary between the two firms because of how they count Chromebooks and convertibles, but the tally is likely between 260 and 270 million—a 100 million unit drop from the peak of 365 million units in 2011.

On the surface, this looks quite bad and, for many smaller players in the industry, it is. But it’s a different story for the largest PC providers. Market leaders Lenovo, HP and Dell accounted for just under 60% of the total in Q4 of 2016, up from about 56% just a year ago. All three companies experienced year-over-year growth in the final quarter. HP and Dell also enjoyed year-over-year growth for the entire year. Apple and many smaller companies, on the other hand, including Acer, Asus, and the “whitebox” market, all faced declines, some of them rather large.

Another interesting part of the story is that average selling prices for PCs (at least in the US) have been rising. What this strongly suggests is that, while the total number of people who want and are buying PCs may be declining, those who remain are more interested in PCs and more willing to pay for them. Given that total Q4 numbers were down just 1.5% year-over-year, according to IDC, this could certainly suggest a flattening and restructuring of the market. Moving forward, we’re likely to see a more rationalized (i.e., smaller) PC market, but one that features nicer machines from a smaller set of suppliers.

This view certainly seems to fit all the renewed interest and energy the PC market has been experiencing lately. Between growing demand for higher-end gaming PCs equipped with high-quality GPUs from Nvidia and AMD, growing interest in convertibles, 2-in-1s and other new form factors, and a potentially compelling renewed CPU performance battle between Intel and a reinvigorated AMD (thanks to its forthcoming Ryzen CPU), there are actually quite a few interesting developments happening in the PC market in 2017. Throw in some attention-grabbing new PCs from numerous vendors at the recent CES show and a clearly established Microsoft Surface hardware business, and there’s quite a bit to be excited about in the PC market, even in spite of these tough sales numbers.

Amazon to Create More Than 100,000 New Jobs across the U.S. over the Next 18 Months – by Jan Dawson
This is just the latest in a series of announcements from major tech companies (not to mention car companies and others) about job creation in the US in the run-up to the inauguration of Donald Trump as US President in a week. The timing here is no coincidence – Amazon was a favorite target of Donald Trump during the election campaign and clearly feels the need to defend its US job-creating credentials more acutely at the moment.

What Amazon has promised is to create 100,000 new jobs in the US in the next 18 months, on top of the 180,000 employees it already has in the US. Amazon claims these jobs will run the gamut from entry level first jobs all the way up to highly skilled engineers working on cloud services, AI, and more.

It’s worth putting the numbers in context a bit. Amazon has created around 135,000 new jobs globally over the last 18 months and I estimate its US-based employees are currently around 57% of its total workforce, a number that’s held relatively constant over the last few years. Amazon has already been accelerating its hiring in the last couple of years, adding far more each year than the year before. So we can reasonably assume the 135,000 it added globally in the last 18 months would have grown substantially anyway, perhaps to 200,000 new jobs globally in the next 18 months.
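As a rough back-of-envelope check (using my own estimates above, not figures from Amazon), the promised US number falls out of that global projection almost automatically:

```python
# Back-of-envelope check using the estimates cited above (not Amazon's own data).
global_hires_last_18_months = 135_000
projected_global_hires_next_18_months = 200_000   # assumes hiring keeps accelerating
us_share_of_workforce = 0.57                      # rough, historically stable estimate

implied_us_hires = projected_global_hires_next_18_months * us_share_of_workforce
print(f"{implied_us_hires:,.0f}")  # ~114,000, so a 100,000 US pledge is in line with the trend
```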

In that context, 100,000 new jobs in the US seems very much in keeping with Amazon’s existing trajectory, rather than a dramatic increase or shift in focus from overseas back to the US. As such, the press release may well have been timed to coincide with the new administration but it doesn’t appear to reflect any change in existing policy at all. That’s not to say it’s not impressive that Amazon is employing this many new people in the US or that it isn’t already a massive employer here. But it should, in theory, take the wind out of the sails of anyone claiming this is a result of the recent election.

It’s also worth noting that, though Amazon touted the sheer range of these jobs, the vast majority will be low-wage fulfillment center jobs and not the high-paid engineers Amazon employs in much smaller numbers. These warehouse jobs are the key to Amazon’s continuing growth and they scale very much in line with Amazon’s overall global scale. Those jobs have also been criticized in the past for poor working conditions, so there’s a potential downside to go with the upside here as well.

Nintendo and the Switch – By Ben Bajarin
Nintendo may have found its groove again. As of late, the company seems to be staying true to its values of quality and premium gaming experiences, with an unwavering commitment to maintaining its brand’s value. Nintendo could have done many things pundits said it should have done, but many of those would have come at the cost of commoditizing the brand or devaluing the premium experience consumers expect from Nintendo products.

It is true the Wii U was a dud, but hardcore Nintendo fans, including me, were intrigued by the GamePad’s display and its use as an alternate screen. However, developers did not utilize its full potential, so the Wii U just ended up being a glorified Wii. The Switch looks to learn from those lessons: Nintendo has created what can best be called a hybrid gaming system, one that works as a mobile handheld but can also be docked to the TV. The Joy-Con controllers, which make up the side handles and buttons of the mobile screen, detach from the main unit and act as motion controllers when the device is plugged into a TV.

As mobile CPUs and GPUs have gotten better, the idea of having a full, console-quality gaming experience on a mobile device has become realistic. Nintendo is looking to capitalize on that vision with what is best described as a Game Boy plus a Wii in a single device. And at $300, I expect the Switch to be a compelling product.

Nintendo is also looking to add a paid subscription for online play, something similar to Xbox Live from what I can tell.

Game consoles are not going away yet, for a variety of reasons. They will remain a niche: for many years, roughly 23-25% of consumers worldwide have said they own a game console, and that number has neither shrunk nor grown much. That gives us the impression this is about the size of the global game console market, at least for the foreseeable future. Not huge, but not small, and Nintendo plays a key role in that market.

I’m excited to get the Switch and share my experience and observations.

Why Disney Should Buy Netflix

I’m in the smaller camp of folks who don’t think Apple should buy Netflix. I’m not going to dive into why (perhaps another time, if anyone really cares) but there is little doubt Netflix will need a cash infusion that won’t likely come from the public market or from subscriber revenues alone. In my mind, the logical acquirer is Disney.

Why Does Netflix Need to Be Bought?
One of the things that has become glaringly clear as we analyzed both Netflix’s and Amazon’s original content strategies is just how capital intensive content production is, particularly movies and TV (music much less so). We hear about the $200 million production costs of major motion pictures and remember Hollywood is a hit-driven business. The only reason the model works is because the studios are good at creating one big hit a year and, if they are lucky, several. This is one reason we see studios take less risk and focus more on sequels and tried-and-true brands. They are simply less risky and Hollywood does not like risk. It’s one of several reasons it will likely get disrupted. The main point is, original content is very expensive.

Out of that context come Netflix and Amazon, the two companies best positioned to give Hollywood a serious run for its money. It will not be too long before Netflix and Amazon start producing big-budget movie franchises exclusive to subscribers. As I articulated in my Netflix and Story as a Service article, the subscription model for storytelling is the one I think has the most upside. But it is very expensive.

Amazon has multiple businesses that will allow them to fund the costs of their media empire ventures. This is what makes them a very worrisome competitor for media and entertainment companies. Between the public market, massive margins in AWS, their e-commerce strategy, Prime subscriber growth, and more, they can afford to pour money into their media empire in ways many competitors cannot.

Netflix does not have this luxury. They are a one-trick pony. Their ability to re-invest profits will be more constrained than Amazon’s because they have to rely on the public market to invest in their potential. For now, that has been stable, but one or two missteps could spell disaster. Netflix needs deep pockets and, of all the companies I can think of, Disney is the right fit.

Disney Likes to Shake Things Up
Disney has already been pushing the limits on distribution windows and has, on average, the shortest window from theater to DVD of any studio. We used to do a lot of consulting in the media and entertainment industries with companies like Universal, Walden Ventures, Lions Gate, and more, and Disney’s aggressiveness in shortening the distribution window caused many of them to worry. I think Netflix fits beautifully into what we have observed Disney do with distribution.

Imagine being a Netflix subscriber and having Disney movies on-demand within 60-90 days of theatrical release. Given the assets Disney has, the full catalog of Disney movies combined with shortened distribution windows could drive significant subscriber growth and loyalty all by itself. Furthermore, and this is perhaps the most interesting strategic element, Disney would acquire consumer data on viewing behaviors and on what is hot and trending, data that would be tremendously useful to its production strategy. Amazon is so disruptive to retail because it has much better data on consumer shopping behavior and interests, so it knows what to promote, what to recommend, and what to slap its own brand on and white label. Similarly, Disney would get a leg up on its competition by seeing what kinds of content are popular and using that to its advantage as its storytellers look to create the next new franchise.

As I debated this in my head, there are certainly counterpoints to this proposal. The strongest is that, if Disney owned Netflix, it could impact Netflix’s ability to license content from other studios. Hollywood, like politics, is remarkably tribal, and I can see other studios pulling their content from Netflix, declining to license further content, and/or raising their prices for Disney. Netflix’s neutrality is a current advantage with studios, one that would be gone if it were owned by Disney.

My second startup was a failed attempt, during the dotcom era, to bring technology to artists and labels for the benefit of their fans. I learned very quickly that Hollywood is run on fear and greed. That is one of the things that makes the tech industry raging mad when trying to negotiate with it. Ultimately, I have a strong hunch Netflix is going to need some help to take on the big boys. Disney makes the most sense in my opinion, unless Amazon can step in and consolidate it. I expect much change and turmoil to come as consumer behavior changes and content demands increasingly challenge the establishment’s ability to adapt.

Why I’m Optimistic About the Future of Cars

I spent last week at CES and a couple of days this week at the North American International Auto Show in Detroit. In both cases, I spent a lot of time listening and talking to carmakers and others in the industry. What I’ve come away with from these two weeks is a lot of optimism about the future of cars for several different reasons.

Both the industry and outsiders are pushing change

The biggest reason I’m positive about the future is that both the legacy industry players and the newcomers and outsiders are pushing for change. There’s nothing more frustrating than seeing an industry where lots of good ideas are coming from outside and they’re all being squashed by the incumbents – we’ve seen this happen in the music industry, the PC industry, and we’re still seeing it in the TV industry. Though there has been some resistance in the past to the big shifts facing the automotive industry, almost all the major carmakers have accepted the new realities and, in many cases, are actively embracing the three big shifts: electrification, autonomous driving, and new ownership models.

The carmakers are actually engaging in their own efforts around autonomous driving and car and ride sharing. In the vast majority of cases, they’re also embracing electrification as one of several powertrain technologies. None of this is to say these companies will end up owning all of this themselves – at the very least, the disrupters from outside the industry and newcomers like Tesla have pushed the incumbents to innovate faster and they may well end up owning some of the end result too. But I heard from company after company about their investments and experiments in a variety of car and ride sharing models, even in urban mobility projects which don’t involve cars at all, such as bike and bus programs.

There is realism about challenges, at least behind closed doors

At both the shows I’ve attended in the last two weeks, there have been lots of high profile proclamations about the glorious future we’re all headed to, many of them with specific timelines attached. Looking at the headlines that result from these statements, it’s easy to despair at a lack of realism from many of the companies involved. Claims about fully autonomous vehicles rolling off production lines as soon as 2021 seem absurd on the face of them but, when you dig beneath the surface and talk to the actual engineers behind the technologies, you get a sense of nuance that’s often missing from those public proclamations.

What I found this week in particular was the carmakers are incredibly realistic about the very real challenges involved in bringing autonomous vehicles to market. There is definitely a headline-grabbing push to establish leadership in electrification and autonomous driving but those actually working on the technologies will tell you about all the complexities and challenges that exist. The real plans of the major carmakers reflect far more realistic timelines, which are much further out than the headlines would lead you to believe, at least for full-time Level 5 autonomous driving without geographic limits. When it comes to electrification, there are also far more sober views about the effect of current low gas prices on demand for EVs, the need for more charging infrastructure, and the limits of current battery technology. That realism is a good thing because it means that, even as these companies embrace change, they’re going to do it in a way that prioritizes safety and the customer experience.

The future looks exciting

At the end of the day, I’m most optimistic because the future of cars looks generally very positive. Tesla has already shown us both the enormous potential for high-performance electric cars and for limited autonomous driving. I used Uber and Lyft extensively over the last two weeks and those services, and many others around the world, demonstrate the potential for far lower car ownership and more flexible mobility models. What I saw at NAIAS this week also reassured me we’re going to get great technology from the incumbent carmakers when it comes to all three of the major shifts, including increasingly high-performance electric and hybrid vehicles and assisted driving technology that helps pave the way for future autonomous driving technologies. We, and especially our children, are going to be able to drive (or be driven by) cars which are much safer, more comfortable, more connected, and better for the environment than the ones we drive today. The competition between the legacy industry and a whole variety of new players is pushing both sides to move faster in delivering that reality. That’s going to be good for all of us.

A Potential Downside of Self-Driving Cars

One of the great things about self-driving automobiles is they should be much safer than ones driven by humans today.

Last August, Gary Shapiro, the CEO of the Consumer Technology Association, told the Wall Street Journal:

“Each year, more than 30,000 Americans die and many more are injured in car accidents, the vast majority of which are caused by human error. Driverless cars could eliminate 90% of these deaths and injuries.”

If you read about the safety goals of self-driving cars, you will know this is one of their greatest features. As the technology matures and more data is collected over the next 3-4 years from the millions of miles self-driving test vehicles will drive, I suspect that, by the time these cars hit the road officially in the 2020-2022 time frame, they will have the kind of sensors, cameras, and AI-based systems that will deliver on the promise of a higher level of driving safety.

As I have surveyed and studied the landscape for autonomous vehicles and marveled at its potential, I am concerned about one downside I see as these cars become safer and reduce traffic deaths significantly.

A member of our family recently received a double lung transplant from a person who died in a car crash. Although the rules behind this donation will never allow us to know who the donor was, we do know the lungs came from a victim of a fatal crash. As we waited at the hospital for the delivery of these new lungs so they could be transplanted, we were highly conflicted. We were very concerned for the family of the person who donated the lungs and the fact their loved one had just died. But we were also incredibly grateful they had an organ donation card so that, upon their death, the lungs could be used to save our family member’s life.

That lung transplant took place last June and I am glad to say the lungs have been accepted by the family member’s system and they are on the way to recovery. They still have a tough road ahead to get back to full health but, without the transplant, they would have died. Being close to this issue has made me realize how important the organ transplant program is. If you have not designated your organs for donation should something happen to you, I encourage you to do so; I understand personally how much it can impact the lives of the person, and the family, that receives them.

But if autonomous vehicles do reduce automobile deaths by 90%, this could have a dramatic and serious impact on the organ donor program. An article in Slate spells out the unintended consequences of fewer deaths due to safer driving with autonomous vehicles:

“Since the first successful recorded kidney transplant in 1954, organ transplant centers have been facing critical shortages. Roughly 6,500 Americans die waiting for an organ transplant each year, and another 4,000 are removed from the waiting list because they are deemed too sick for a transplant. Since 1999, the waiting list has nearly doubled from 65,313 to more than 123,000. Liver and kidney disease kill more people than breast cancer or prostate cancer, and the Centers for Disease Control and Prevention expects the incidence of these chronic diseases to rise along with the need for more organs.

It’s morbid, but the truth is that due to limitations on who can contribute transplants, among the most reliable sources for healthy organs and tissues are the more than 35,000 people killed each year on American roads (a number that, after years of falling mortality rates, has recently been trending upward). Currently, 1 in 5 organ donations comes from the victim of a vehicular accident.”

In the case of the person who donated their lungs to our family member, they also donated a heart, kidneys, liver and corneas so several people benefited from this tragic death. What they did is amazing and admirable even though it happened via their death.

As the Slate article states, only 1 in 5 donated organs comes from the victim of a vehicular accident, so there are other ways for organs to become available, but deaths from auto accidents are among the most reliable sources of these donations.

Given the number of people on the waiting lists for organ transplants and the safety features of autonomous vehicles, it appears this waiting list may get longer. The irony is this is one of those real-life good news/bad news issues related to the impact of technological advancements. Everyone wants to reduce traffic deaths and, as Mr. Shapiro of the CTA says, driverless cars could “eliminate 90% of these deaths and injuries.” On the other hand, those cars will also reduce the number of organs available for transplant recipients whose lives could be saved. Given my personal experience with organ transplants, I am afraid I remain conflicted even though I am a major proponent of technological advancements. But in our world, technology advancements are critical to economic growth and they impact people in so many ways, even if they sometimes have unintended consequences.

Move Over IoT, AI is the New Hot Acronym

I survived CES 2017 to tell this story. From the very first day of press conferences and CES Unveiled, it was obvious connected-anytime-anywhere was what we were going to see at the show. It soon became clear that, if we played the drinking game for every time AI was mentioned by a vendor, we would not be sober for very long. Press conference after press conference, pitch after pitch, Artificial Intelligence was mentioned as a key trend for 2017 and one vendors were working on, although, in most cases, there was nothing concrete to see linked to the product being announced.

Everything Is Connected, Even When It Should Not Be

A quick Google search pins the origins of the Internet of Things to Peter T. Lewis back in 1985, when he gave a speech at an FCC-supported conference and defined IoT as “The integration of people, processes and technology with connectable devices and sensors to enable remote monitoring, status, manipulation and evaluation of trends of such devices.”

Over the past two to three years, IoT has started to materialize as more and more devices were connected to the internet. How this is developing might be different than how vendors intended it a few years ago. If we rewind a little, after smartphones, many thought wearables were going to be the most pervasive devices, making humans the most connected ‘things’ of the Internet of Things. As it turned out, wearables penetration is ramping up very slowly and it seems the focus of the vendors and the attention of the consumers have both shifted to connecting our homes.

What gets connected, however, is questionable. Not everything that is connected should be. Sometimes, even if a device is connected with good reason, the value of that connection is not immediately clear to the customer. In 2013, the HAPIfork took CES by storm. The connected fork let users know when they ate too much or too fast and was one of the most talked-about devices of the show. In 2017, one of the most talked-about devices at CES was the L’Oréal Smart Brush. Developed with Withings, the Kérastase Hair Coach uses sensors and a microphone to count brush strokes and listen for hair breakage. Aside from the brush, I saw connected showers promising to keep your water warm (and save you water and money as well), windows that can sense when the air is getting too stuffy and open on their own, smart locks, voice-activated garbage cans and so much more. There was so much connected “stuff” that I started to mentally file gadgets into three categories: tech for the sake of tech, tech for the sake of lazy, and tech for the sake of humanity.

Sadly, I saw more gadgets and solutions falling into the first two categories than into the last. There were many products searching for a problem to solve and many that offered to replicate something we already do today with just a little less effort on our part. What was interesting, however, was that the common underlying selling point was the smartness, not the connectivity.

IoT Is No Longer Cool for Consumers

I came away from CES with the clear impression that, although we are still talking about the Internet of Things, vendors, and more importantly PR gurus, have moved on from the connectivity part to the brain part of the devices.

While the enterprise is still very much talking about IoT as it looks to empower, manage and, most of all, monetize all these devices, it was as if the term had grown tired when it came to consumers. It could also be that, for consumers, being connected is a given nowadays and it is the value of that connectivity that needs to be highlighted in order to command a premium. Unfortunately, not everything that is connected is necessarily smart and not everything that is smart is necessarily intelligent. As I talked to people, I noticed the line between these three concepts was very blurred and, in most cases, the blurring was quite intentional.

For me, a smart device is not only a device that is connected to the internet and/or other devices but one that can interact with other devices and exercise some degree of autonomy. In order to be smart, a device does not necessarily need Artificial Intelligence, at least if you think of AI as a device mimicking the “cognitive” functions we associate with the human brain, such as “learning” and “problem solving.” This does not make the device less smart or less useful. To give you an example, let’s look at the iPhone 7 Plus camera experience. The iPhone 7 Plus takes great pictures thanks to the two lenses, the software, the sensors, and the processing power. AI or, more specifically, machine learning only comes in to deliver the portrait effect by recognizing where the subject of the picture ends and where the background begins. Think about maps as another example. The Estimated Time of Arrival (ETA) we are given when we set a destination comes from a combination of data: average speeds, actual travel times, traffic predictions, speed limits, and historical averages. These are all combined to get a projection of your ETA between two points. AI is what makes it possible for my phone to analyze my commute pattern and, as I connect to the car at a given time of day, offer me the ETA to the most likely location: my daughter’s school in the morning, the office after that, or the karate dojo in the afternoon.
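To make the distinction a little more concrete, here is a purely illustrative sketch of the non-AI part of that ETA calculation: a simple weighted blend of historical and live speeds over route segments. The function name, weights, and numbers are all hypothetical; this is not how Apple Maps or Google Maps actually works, just the kind of straightforward data combination that requires no “intelligence” at all.

```python
# Purely illustrative ETA sketch; not Apple's or Google's actual algorithm.
# It blends historical segment speeds with live traffic speeds, weighted by
# how much we trust the live data (all names, weights, and numbers are hypothetical).
def estimate_eta_minutes(segments, live_weight=0.7):
    """segments: list of (length_km, historical_kmh, live_kmh_or_None) tuples."""
    total_hours = 0.0
    for length_km, historical_kmh, live_kmh in segments:
        if live_kmh is not None:
            speed = live_weight * live_kmh + (1 - live_weight) * historical_kmh
        else:
            speed = historical_kmh  # no live data, so fall back to the historical average
        total_hours += length_km / speed
    return total_hours * 60

route = [(2.0, 40, 25), (8.5, 90, 80), (1.2, 30, None)]  # a made-up morning school run
print(round(estimate_eta_minutes(route)))  # roughly 13 minutes for this example
```

The AI, in this framing, is not the arithmetic itself but the layer that has learned which destination I am probably heading to before I ever type it in.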

The Risk of AI-Washing

The Internet of Things, Artificial Intelligence, and Machine Learning are all trends that will develop over time, not overnight. While the temptation to keep things fresh might get vendors to chase the next buzzword, I think it is very risky to do so. Talking prematurely about features and capabilities might just bore consumers sooner rather than excite them. Talk about what your products can deliver. Better yet, show what they deliver; that is more effective than labelling features with sexy buzzwords. Don’t tell your users your device uses AI; show them what your device can actually do and, if it looks like a bit of magic, let them think that is how you do it. Sometimes, talking too much about what is under the hood might raise more questions than you have answers for, especially when it comes to security and privacy, something we should all concern ourselves with as everything around us gets connected and smart.

Takeaways from CES 2017

By now you’ve undoubtedly read or viewed several different CES stories across a wide range of publications and media sites. So, there’s no need to rehash the details about all the cool, crazy, or just plain interesting new products that were introduced at or around this year’s show.

But it usually takes a few days to think through the potential impact of what these announcements mean from a big picture perspective. Having spent time doing that, here are some thoughts.

The impact of technology on nearly all aspects of our lives continues to grow. Yes, I realize that seems somewhat obvious, but to actually see, or at least read about, the enormous range of products and services on display at this year’s show makes what is typically just a conceptual observation very real. From food to sleep to shelter to work to entertainment (of all kinds!) to health to transportation and beyond, it’s nearly impossible to imagine an activity humans engage in that wasn’t somehow addressed at this year’s show. Having attended approximately half of the 50 CES shows that have now occurred, I find the expanding breadth of the show never ceases to amaze me. In a related way, the range of companies now participating in some way, shape, or form is surprisingly diverse (and will only increase over time).

Software is essential, but hardware still matters. At the end of the day, it’s the experience with a product that ultimately determines its success or failure. However, when you’re surrounded by the products and services that will drive the tech industry’s agenda for the next 12 months, it’s immediately clear that hardware plays an enormously critical role. From subtle distinctions like the look and feel of materials, to the introduction of entirely new types of tech products, the importance of hardware devices and key hardware components continues to grow, not shrink (as some have suggested).

What’s old can be new again. Though TVs and PCs may sound like products from a different era to some, this year’s show once again proved that the right technological developments combined with human ingenuity can produce some very compelling new products. Even long-forgotten technologies like front projection can be transformed in ways that make them very intriguing once again. Plus, it’s becoming increasingly clear that, just like the fashion and music industries, the tech industry is developing a love affair with retro trends. From vinyl to Game Boys and beyond, it seems many types of older tech are going to be revisited and renewed.

We are on the cusp of some of the biggest changes in technology that we’ve seen in some time. The integration of “invisible” technologies that we can’t directly see but still interact with is going to drive some of the most profound developments, power shifts, and overall restructuring that’s ever occurred in the tech industry. Oh, and it’ll make for some incredibly useful and compelling new product experiences too.[pullquote]The integration of “invisible” technologies that we can’t directly see but still interact with is going to drive some of the most profound developments, power shifts, and overall restructuring that’s ever occurred in the tech industry.[/pullquote]

Voice control will certainly be part of this, but there will be much more. In fact, the range of new products and services, as well as enhancements and recreations of existing products and services, that AI, deep learning, and other advanced types of software technologies can enable in combination with sensors, connectivity, and powerful distributed computing is going to be transformational. Sure, there’s been talk of adding intelligence to everything for quite some time, but many of the announcements from this year’s CES demonstrate that this promise is now becoming real.

Finally, trade shows still matter, even in tech. Yes, virtual reality may one day provide us with the freedom to avoid the crowds, hassles, and frustrations of trekking to an alternative location, and seemingly everyone who goes likes to complain about attending CES, but there’s nothing quite like being there. From serendipitous run-ins with industry contacts, to seeing how others react to products and technologies you find interesting, there are lots of reasons why it’s going to be difficult to completely virtualize a trade show for some time to come.

Evaluating the iPhone’s Impact

This week marks the ten year anniversary of the announcement of the iPhone by Steve Jobs. There’s understandably been lots of reminiscing about the launch event itself, some of it rueful in relation to more recent Apple launches, but I think the most interesting thing about the iPhone launch is to think about the impact it’s had on the broader consumer technology industry.

The impact on smartphones

You’ve probably seen one of the many “before and after” pictures out there which show what smartphones looked like before the iPhone and what they came to look like afterward. The most obvious impact the iPhone had was on what smartphones in general were like after its launch. A few before-style smartphones still launched after (and BlackBerry arguably pursued mostly that model for years), but almost all smartphone makers suddenly realized the iPhone was the one people actually wanted. But it goes much further than that – the iPhone taught regular people why they’d even want a smartphone in the first place and, in the process, mainstreamed the smartphone.

Prior to the iPhone, smartphones were largely used by two classes of people. In North America, in particular, but also beyond, they were mostly used by business people and were email-centric. BlackBerry and various devices based on the Windows Mobile platform dominated this part of the market. In Europe and some other markets, however, a different type of smartphone dominated, much more consumer and media-centric, with Nokia one of the largest vendors. Though both were popular among certain segments, neither had mass-market appeal and both visions of the smartphone were limited relative to what the iPhone would become. Once launched, the iPhone arguably absorbed both those use cases in one device, though it took a year or two to get really good at business email.

The iPhone ultimately represented a different vision of what a smartphone should be – the internet in your pocket, though that was only one of the three main value propositions Steve Jobs outlined at the event. Beyond that, the iPhone would become a computer in your pocket, capable of many of the same things as your computer at your desk, but highly portable and much more fun to use. Almost every smartphone since has sought to emulate that fundamental value proposition, though it took years for competitors to match the execution. It’s impossible to know what smartphones would have been like in the absence of the iPhone – they surely would have evolved in some of the same directions over time – but it’s certain the iPhone dramatically changed the market. Whether you use an iPhone or another smartphone today, you can thank Steve Jobs and the iPhone for almost all of its functionality.

Creation of new markets

Even beyond the smartphone market, though, the iPhone has had far-reaching impact, especially following the release of the App Store a year after the initial launch. Though the value proposition of the essentially unchangeable iPhone of 2007 was already strong, what really transformed it was the App Store and all the additional value that came with it. Certainly, there were lots of existing websites which now had easy to use versions encapsulated in apps on the iPhone. But it went far further – thousands of completely new ideas found form as apps in the App Store and it’s arguable that the iPhone helped launch many companies that have become enormous in their own right. Neither Uber nor Tinder would exist today without the iPhone-driven modern conception of the smartphone. It’s also likely Facebook and Twitter would be a shadow of their current selves in the absence of the iPhone and the innovation in smartphones it drove.

Beyond apps, there are also huge new hardware categories that have emerged in recent years that were enabled by the iPhone and all that came after. Wearables are the single biggest of those, with Fitbit and other fitness device vendors benefiting enormously from easy Bluetooth LE-enabled syncing with iPhones and other smartphones. The Apple Watch also benefited from this tie-in to the iPhone. A huge variety of other smart devices, however, would also have been almost impossible without the iPhone and other iPhone-like smartphones: smart lighting, smart locks, smart scales, smart fridges, and all the other technologies we’ve seen over the last several years which rely on app controls.

And then there is the plethora of other devices that have borrowed smartphone technology and components, starting with Apple’s own iPad and an array of other touchscreen devices. Almost every small electronic device released in recent years also benefits from iPhone-driven innovation in screens, radios, chips, sensors, and miniaturization in general. Everything from drones to AR and VR technology to smart earbuds benefits from the massive scale and innovation driven by the iPhone and all the smartphones it spawned.

The most transformative product of the last 20 years

Ultimately, the iPhone is the most transformative product of at least the last 20 years, pioneering a whole new category of devices but then sowing the seeds of both new device categories and service and app businesses too. It didn’t do that all itself – many other device vendors have innovated in very meaningful ways in this market as well – but the presence of the iPhone is what prompted a massive paradigm shift by every other vendor and fomented a burgeoning of innovation across the smartphone market and many other categories. Almost none of the technology we use today is unaffected by the innovations introduced ten years ago this week.

The Unintended Consequences of a Single Design Decision

Being involved in the design and development of consumer products, I’ve seen how a single design decision can have huge unintended consequences and change an entire industry — for the better or the worse.

As a positive example, when Apple decided to design notebooks using aluminum housings and abandon the industry’s use of plastic with ugly vents and screws, it created a huge industry of automated machining of solid aluminum blocks. That industry has now made it possible for other notebook makers to use the same processes to create their own products.

Another example is when Apple decided that thinness was a major goal for its mobile products. The unintended consequences have had a huge impact, likely beyond the original intention, on performance, features, user satisfaction, and the entire industry.

Some of those consequences are:

Shorter battery life – Making phones and notebooks as thin as possible, and then even thinner in each subsequent generation, leaves less volume for batteries. Because the one dimension that reduces a battery’s capacity most is its thickness, the battery life of iPhones and MacBooks has suffered. Battery life of iPhones and the latest line of MacBook Pros is well below expectations and is one of the major user complaints, so much so that the battery indicator no longer displays time left. And, since a battery’s lifespan is based on the number of charging cycles, a smaller battery goes through more recharging cycles to deliver the same daily use, resulting in a shorter life (a rough back-of-envelope illustration follows this list).

Fragility – The thinness of iPhones resulted in the iPhone 6 and 6 Plus actually bending in normal use and in the need for protective cases. Samsung has shown, with its Galaxy S7 Active, that a phone can be made with a rugged, waterproof enclosure that’s only a few millimeters thicker. Speaking of Samsung, there’s even speculation that its problem with the Note7 catching fire was a result of trying to beat Apple in the thinness competition.

Reduced number of ports – With thinness comes the need to remove many of the legacy ports designed for thicker products. While leaving them out makes it possible to reduce thickness, it requires carrying more dongles to connect to our other devices.

Loss of features – iPhones still don’t have wireless charging (and NFC remains limited to Apple Pay), likely a result of insufficient space. MagSafe, one of the most innovative features ever created for notebook computers, has been eliminated to make the new MacBook notebooks thinner. With its removal comes the loss of the charging indicator light.

Typing errors – Thinness has led to notebook keyboards with reduced performance compared to the iconic keyboards used in products like the ThinkPad. Key travel has gone from 3 mm to under 1 mm, causing more errors.
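Here is the rough illustration promised in the battery-life item above. The numbers are entirely made up and serve only to show the shape of the argument: a device that draws the same energy per day from a smaller battery burns through its rated charge cycles sooner.

```python
# Hypothetical numbers, used only to illustrate the cycle-count argument above.
daily_energy_used_wh = 10.0    # energy the device actually consumes per day
rated_full_cycles = 1000       # full charge cycles before noticeable capacity loss

for battery_capacity_wh in (12.0, 8.0):   # a thicker battery vs. a thinner one
    cycles_per_day = daily_energy_used_wh / battery_capacity_wh
    years_until_worn = rated_full_cycles / cycles_per_day / 365
    print(f"{battery_capacity_wh} Wh battery: ~{years_until_worn:.1f} years to {rated_full_cycles} cycles")
```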

What’s ironic is these consequences might have resulted simply from an Apple executive saying, “I want our products to be as thin as they can be,” walking away, and then everyone taking the person literally. How likely is it that, when that request was made, anyone was thinking of the adverse outcomes? Well, perhaps a few engineers who were told not to be negative and to be team players.

The lesson is that an arbitrary goal in a product’s requirements can have far-reaching effects on a company’s products, as well as on an entire industry, and few may have been aware of that when it all began.

This is What You’re Missing about Vocal Computing

On Christmas morning, as my mom and I hurriedly rushed around my kitchen making final preparations, a third voice would occasionally interject into our conversations. Sitting at my counter was Alexa, helping me through the process by answering questions, setting timers and even flipping on holiday music at our request.

I’ve been living with Alexa for roughly two years now and have grown accustomed to our constant banter. But, for my mom, it’s still a very new and novel experience. When my mom speaks to Alexa, she might recognize she’s speaking to a computer, but she probably doesn’t consider that the computer is actually thousands of miles away and she probably doesn’t realize the way we talked to that computer on Christmas morning is the new face of computing.

The graphical user interface (GUI) wasn’t a brand-new idea when Xerox brought it to market in 1981, and it was popularized to the masses in 1984 by Apple’s Macintosh computer. A GUI didn’t represent a new technical way of computing but it was a crucial evolution in how we interact with computers. Think of the impact the GUI had on how we used computers and what we used computers for. Think of how it changed our conception of computing.

The smartphone was created in the 1990s but it wasn’t until 2007, with the advent of Apple’s iPhone, that smartphones reached an important inflection point in consumer adoption. Today, 75 percent of U.S. households own a smartphone, according to research from the Consumer Technology Association (CTA).

The touchscreen interface represented the next paradigm shift in computing, ushering in a new way of thinking about computing and bringing into existence new applications.

Smartphone computing shares an important heritage and legacy with the GUI introduced in the early 1980s. If you’re old enough to remember computing before GUIs, can you imagine computing on a smartphone using command prompts? GUIs in the era of desktops improved computing. It was the transformation to a graphic interface that ultimately launched the smartphone era of apps.

Vocal computing will do the same thing for the future of computing. Vocal computing isn’t perfect. Alexa isn’t always certain what I’m asking. Google Home doesn’t always provide an answer. Siri can’t always help my sons when they ask complex questions. Like two people on a first date, we are still getting to know each other.

Software layers and form factors change our computing experience. We’ve seen this throughout the history of computing – from the earliest mainframes to the computers we call phones and carry in our pockets. In all the same ways, vocal computing is just an extension of what we already know – it’s a more natural and intuitive interface.

Let’s not overlook just how transformative this new interface can be. Imagine someday computing on our bikes, in our cars, while we are walking or lying in bed. With voice, every environment can be touched by computing.

“Works with Alexa” – Amazon’s Trojan Horse

There have been few products in my life that, upon their arrival, have fundamentally altered my work flow. I use the term work flow broadly because it isn’t necessarily focused on work, but more about my process of using technology in my life. The Amazon Echo has been one of those products.

I’ve been quite vocal about the idea of hands-free computing. I first started thinking about this with the Apple Watch. The concept is fundamentally about how we can engage and interact with computers while keeping our hands free to do whatever they are doing. Reflecting on the last few years of consumer behavior, we have witnessed a period of humans staring at screens for many hours every day. I think the future will deliver the same value, or more, that we get out of our computers today but without the need to have our faces buried in screens all the time.

What made the smartphone profound, from a usage standpoint, was how it enabled us to move freely in the world and still have a computer at our fingertips. I think the future will follow this path and allow us to go out into the world without having our faces buried in our screens. To do this, however, will require a step-function change in user interface. This is where I think voice comes into play.

The building blocks are being laid today to get to that vision. But, as I mentioned in my article about AirPods, companies that start to focus on the voice-only interface with computers will get a head start on the market, since it requires a fundamental rethink of how we interact with computers.

Let me be clear: voice-only and even voice-first paradigms are not going to be the primary computer interfaces for some time, if ever. Even if we look at science fiction, or something like Jarvis in the Iron Man movies, we see a mix of voice, visual displays, and hand gestures all coming together as computer interaction models. In many of those science fiction examples, the user seems to have the choice of one, or all, of these ways to interact. Voice, screen, or gesture can work independently or in concert as an interface. This seems like the ultimate reality.

Echo as a Hub

This brings us to Amazon’s Alexa, a product I have a hunch is taking baby steps toward a Trojan horse play by Amazon to become a more central personal assistant in the lives of many consumers. Just seeing the vast amount of support for Amazon’s Alexa in announcements coming from CES is quite telling. Vendors are racing to integrate Alexa or support it with a “works with Alexa” tagline in their specs. Regardless of the challenges that face Amazon, momentum is momentum and this momentum is hard to ignore.

I firmly believe Amazon is a more likely partner than Google for the vast majority of hardware companies when it comes to a smart assistant. Thousands of hardware companies are not going to make their own software/OS/personal assistant and would rather use whatever the dominant OS/assistant is for the vertical they are competing in. Right now, all signs point to Amazon (in the markets where it competes) having fundamental advantages over Google when it comes to getting Alexa integrated into these products.

First, Amazon offers AWS, with all the backend tools needed for AI, natural language processing, computer vision, machine learning, etc. Most of these Amazon partners will host their services on AWS, which means Amazon can give favorable pricing for hardware integration. Amazon also controls the fastest-growing e-commerce marketplace in many markets, making it a much better distribution channel for all of these hardware products. Google, meanwhile, has burned many bridges with its Android tactics and, as a result, has left many of its current partners in search of alternate solutions. Ironically, this is why so many hardware companies left Microsoft and embraced Google. It seems you will never please pure-play hardware companies when you are the provider of a third-party OS/software solution.

The logical progression here is the buildup of a third-party ecosystem around the Echo/Alexa. Consumers will see the common thread of these gadgets supporting the Echo hub and Alexa, which could prime them to believe this is the ecosystem to jump on. An interesting trend we see with the Echo is how, once a person gets one in their home and connects a product like lights, a thermostat, or a door lock, they quickly start to connect others. They will want as much choice and variety as possible to give them the most options that fit their life. So, when it comes to a smart control/appliance ecosystem, the bigger the ecosystem, the better.

Another way to think about this: it is like the early days of the App Store. You would go into a Walgreens, Safeway, or Macy’s and see a sign saying “Download our app on the iOS App Store or Google Play” (it wasn’t called the Play Store back then). Consumers would see that a brand they know and trust had apps on these platforms, and that broad support helped them lean one way or another toward an iPhone or an Android phone. What they did not see was a sign for the Windows Phone app store, and thus Windows Phone’s fate was sealed. Similarly, a consumer walks into Best Buy to buy a refrigerator or washing machine, or into Home Depot or Lowe’s, and sees smart sockets, light switches, and loads of other things from brands they trust labelled “works with Alexa.” They will recognize all these product options work with Alexa/Echo, so why not just get an Echo as the hub? My fear, for Apple’s sake, is that what they don’t see is HomeKit everywhere.

That is just the first baby step in Amazon’s potential as a Trojan horse. The key for Amazon is to start to create a new dependency with its customers. If the Echo and the Alexa ecosystem begin to garner consumers’ trust and dependency for certain computing tasks, they could chip away at other dependencies Google and Apple have created with their customers.

As farfetched as this may seem, consider what would happen if, in a few years, smartphones are still a thing (as they very likely will be) and I have come to depend on Alexa not just for my home but for search, music, weather, news, TV shows, entertainment, and more. Who’s to say Amazon could not make a run at smartphones again and use that Alexa dependency to acquire a greater portion of its customers’ time via the pocket screen? It sounds farfetched but the opportunists know that, when a new computing paradigm comes along, it is the best time to steal customers from those who dominated the previous paradigm.

But what interests me the most about this play by Amazon, more so than Google, is the elimination of the screen from the interface. I have argued that, if a screen or touch-based input is available, consumers will default to that input. Rather than build important new behaviors and work flows around voice-only/AI solutions, they will go back to their screen and just type the text, set the timer, etc. Apple needs consumers to start depending on Siri and voice as a primary interface mechanism for AI but, as long as there is a screen involved, that won’t happen. Amazon, however, is building those new work flows and dependencies around Alexa and the voice-only UI and gathering steam with an ecosystem around it. This is where I think a Trojan horse strategy is at play.

Amazon still has massive challenges in front of them to pull this off. In fact, every company going after consumers’ personal assistant faces huge challenges. But Amazon already has a great deal of trust and could become the primary interface for the smart home in Western markets. They have the business model in their favor to support making a run here. They do need to focus more on the security element, which will be critical. But, like I said, momentum is momentum and momentum can’t be ignored.

Here is a running list CNET is tracking of all the products announced at CES that integrate or work with Alexa.

With TV, Tech isn’t the Problem

Along with much of the Tech.pinions team, I’m at CES this week and, as usual, TVs and TV-connected devices are a big theme. Streaming video providers are also present and making announcements of various kinds. Yet I’m struck again this week, as I have been before, by the fact that technology isn’t really the biggest challenge in disrupting the traditional TV and video industry. Yes, there are advances being made in technology which are improving the user experience of watching video, but it’s content rights that are still the biggest barrier to really giving consumers what they want.

Working with – and around – the current system

A lot of the TV-related technology on display at CES either works with or around the current system. An increasing number of connected TV devices are incorporating some kind of over-the-air element. I saw boxes and other hardware from Mohu, Sling, and others designed to capture OTA broadcast signals and incorporate them into a next-generation user interface. This is hardly dramatic new technology – broadcast has been around for decades – but it’s often still the easiest way for cord cutters to access sports and local content.

It’s ironic we’re falling back on older technology to supplant newer coax, fiber, and satellite-based delivery, but this is the state of the TV industry today. Some of the best options simply have to work with what they’ve got – an understandable reality, but it often means disjointed experiences which combine OTA signals with internet-delivered streams, multiple user interfaces, and local or cloud-based storage. The new devices on offer at CES attempt to bring some harmony to all this, including the Mohu and Sling hardware I mentioned. But, in many cases, these solutions merely cobble back together bundles that end up looking very similar to what they’re replacing. And of course, OTA solutions don’t work for some people at all (I have a big mountain sitting between my house and the local broadcasters, meaning I get no signal at all).

Rights remain the biggest barrier

I’ve been using AT&T’s DirecTV Now since it launched late last year and I’ve largely been enjoying it, though I’ve seen a few technical hiccups here and there. But there are several non-technical things that detract from the experience – TV Everywhere authentication as offered by traditional pay TV services is a bit lacking and commercial breaks often display a “commercial break in progress” placeholder rather than actual commercials. The latter doesn’t bother me overly much, but both of these are entirely down to rights issues. Contracts signed years ago haven’t yet been renewed, so AT&T doesn’t have the rights in some cases to do its own ad insertion or to authenticate users on this service for TV Everywhere apps.

Hulu made some news at CES because it has apparently signed CBS as part of its pay TV replacement service. The fact that a single broadcaster signing on is news is more evidence of how fragmented this whole space is and how important rights negotiations are. The reality is that, even if people balk at the high prices of traditional pay TV services, they still want a lot of the content and that means paying directly or indirectly to access it. CBS has its own digital streaming service – CBS All Access – and has been a holdout from several of the other streaming services including DirecTV Now. Hulu getting CBS on board is therefore something of a coup but we’ve yet to see what other content it has secured and what its reported $40 per month price will include.

The reality is that some of this is merely a matter of contract renegotiations and will get worked out in the coming years, while other elements are down to content owners deliberately resisting or blocking some of the changes to the traditional business models. The major traditional pay TV providers are part of this picture too, though of course Sling and DirecTV Now come from two of the biggest. Cable operators have been the slowest to embrace this change, largely because they dominate the historical market.

User interfaces and video quality can still help

Having said all that, we’re still seeing innovation around user interfaces and video quality, and they are making a difference even as the rights issues get worked out. Some of the new streaming pay TV services have much better UIs than the services they’re replacing. Interactive programming guides still often make an appearance, but search, recommendations, and on-demand options make these interfaces more compelling. In addition, we’re seeing innovation around content formats like 4K and HDR, from both TV manufacturers and content providers. Here, too, the newer over-the-top services are taking the lead, with Netflix and Amazon offering some of the first mass market 4K content. But some of the pay TV providers are dabbling with 4K too, and even Samsung is now going to be selling 4K content through its smart TVs.

A tipping point is coming

At some point in the next year or two, I predict we’ll see a tipping point when it will become apparent to everyone, including the current holdouts, that digital delivery is the future and that it’s coming far faster than many of them thought. We’re already seeing the mainstreaming of streamed pay TV services, with the DirecTV Now launch just the first of a new raft of services, to be joined by Hulu, Amazon, and YouTube in the near future. But we’re also seeing accelerated cord cutting (with the pay TV industry losing well over a million subscribers per year at this point) and many individual cable networks losing subscribers at a much faster rate due to skinnier bundles and rising rights costs. All of this, taken together, will cause a crisis in the TV industry which will finally drive it to embrace new business models and broader distribution. And then the rights side of the equation will finally catch up with the advances in TV technology.

The Real Timeline for Autonomous Self-Driving Vehicles

As I head to Las Vegas for CES, I am assured by pre-CES meetings that self-driving cars will be a hot topic at the show. Faraday Future is set to announce its new car to compete with Tesla, which is to have a self-driving feature as part of the new vehicle. Chrysler, BMW, Honda, Hyundai, and Nissan will also be announcing their self-driving cars.

But one of the big questions we are all asking is when will truly self-driving vehicles be in use? This is the question I asked Ford Executives when I spoke to them before CES.

Ford has become one of the most aggressive mainstream auto manufacturers in this space and is very active in developing its own home-grown self-driving car. I believe they also have what is probably the most realistic timeline for delivering their own autonomous vehicle and, more important to the overall discussion, for the actual rollout of these “robot cars” over the next 5-10 years.

Although we’ve seen Tesla add a self-driving feature to its cars, and Google, Uber, and others have mocked-up self-driving cars being tested on roads already, we are not close to seeing truly self-driving automobiles on the road anytime soon. I am told that, from a technology standpoint, a fully automated vehicle is possible as early as 2018. But, as one can guess, the real roadblocks to getting self-driving cars on the road in this decade will come from federal, state, and local regulatory agencies.

Ford told me they (and just about every other automaker, along with players like Tesla, Uber, and Lyft) have been lobbying all of these agencies to help create the types of laws that would allow self-driving vehicles to be on the road safely. But this is a time-intensive process and, given some of these regulations will also have to go through Congress, it will take a while. Ford’s best guess is we could have the kind of rules in place for them and others to launch a set of fleet vehicles by 2021 or 2022.

Interestingly, Ford believes the first wave of self-driving cars will be fleets of these vehicles, dispatched on demand, before they and others actually start selling autonomous vehicles directly to their customers. They see this as strategically important for the rollout of these cars, as this period will give them a great deal of the data they need before they actually sell cars to the public. That is why they made the big investment in Lyft, although they could run their own fleet as well.

It appears this is the fundamental strategy of Uber and Lyft as well. They want to be the leaders in managing autonomous vehicle fleets and, while they would like to have their self-driving cars on the road sooner than 2020, the regulatory issues will most likely keep that from happening until at least 2020-2021.

This rollout strategy is both interesting and important since it starts to lay the groundwork for a completely new approach to our driving options and needs. The idea of having a fleet of vehicles at anyone’s disposal makes owning a vehicle a real question. In my case, it would come down to the cost of ownership vs. the combined cost of using an autonomous vehicle for all of my personal transportation needs.

I lease a car that costs me around $4,500 a year plus gas, and I drive about 13,000 miles a year. Once I know the cost per mile of using a fleet vehicle, I would be able to compare these costs and see if using a self-driving fleet vehicle makes sense. There is also response time to consider. For this to work for me, the car would have to get to me quickly.
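
As a rough illustration of that comparison, here is a minimal sketch in Python using the lease and mileage figures above; the fuel economy, gas price, and fleet price-per-mile are assumptions added purely for illustration, not figures from Ford or anyone else.

```python
# Back-of-the-envelope comparison: leasing a car vs. using an autonomous fleet.
# The lease cost and annual mileage come from the text above; the fuel and
# fleet-pricing figures are assumptions for illustration only.

ANNUAL_LEASE = 4500          # $ per year (from the text)
ANNUAL_MILES = 13_000        # miles per year (from the text)
ASSUMED_MPG = 30             # assumed fuel economy
ASSUMED_GAS_PRICE = 3.00     # assumed $ per gallon

fuel_cost = ANNUAL_MILES / ASSUMED_MPG * ASSUMED_GAS_PRICE
ownership_cost_per_mile = (ANNUAL_LEASE + fuel_cost) / ANNUAL_MILES

def fleet_is_cheaper(fleet_price_per_mile: float) -> bool:
    """True if a fleet's per-mile price beats the per-mile cost of ownership."""
    return fleet_price_per_mile < ownership_cost_per_mile

print(f"Ownership cost: ~${ownership_cost_per_mile:.2f} per mile")
print(fleet_is_cheaper(0.40))   # hypothetical fleet price of $0.40 per mile
```

Under these assumed numbers, ownership works out to roughly $0.45 per mile, so any fleet price below that (and a fast enough response time) would make the fleet option the rational choice.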

Ford, Uber, Lyft, and others who will manage these autonomous vehicle fleets plan to have large fleet lots in many areas of urban centers to make the response time as fast as possible.

While you may hear more aggressive timetables predicted than what Ford is suggesting, I suspect their forecast is pretty accurate. While I would really like to either have a fleet of autonomous vehicles at my disposal or own one myself, I just don’t see that happening until the beginning of the next decade.

What Apple’s Acquisitions in 2016 Tell Us about 2017 and Beyond

There is a lot of speculation about the “iPhone 8” and what Apple should focus on in 2017 in order to stay ahead of the game or, for some, barely keep up with the competition. Despite some safe bets on new iPhone features that can be extrapolated from supply-chain clues, guessing, even correctly, what Apple will do is almost as unlikely as winning the lottery. I thought, however, that looking at the 2016 acquisitions would give us more than a clue as to where Apple will focus in the future, so I also share my wish list of what I would like to see come out of Cupertino.

What Apple Acquired in 2016 (that we know of)

Emotient is a startup that uses artificial-intelligence technology to decipher people’s emotions by analyzing their facial expressions. The technology can be used for a number of things including detecting pain, reading reactions to content or situations we are exposed to – think advertising and retail. Emotient had been granted a patent for a method of collecting and labeling as many as 100,000 facial images a day that can be used to teach computers to better recognize facial expressions.

LearnSprout is a San Francisco-based startup focusing on tools that help teachers monitor students’ attendance, grades, and other school activities through easier access to school information systems. One of the purposes of collecting such information and making it available to teachers was to help identify at-risk students.

Flyby Media is a company that worked with Google on Project Tango. Flyby Media developed technology that allows mobile phones to see and scan, through the camera, the world around them. The company’s website also said they were developing the next generation of consumer mobile-social applications that connect the physical and the digital worlds.

LegbaCore is a firmware security company that specializes in “digital voodoo” or security at the deepest and darkest levels of computer systems. Apple was first exposed to them as they were battling Thunderstrike 2, the first super-worm to successfully attack Macs.

Carpool Karaoke is a popular segment Apple licensed for a 16-episode series, to be produced (but not hosted) by James Corden along with Ben Winston, the “Late Late Show” executive producer. Tim Cook and Corden kicked off Apple’s September event with a special edition of Carpool Karaoke.

Turi is a machine learning and artificial intelligence startup focused on tools that help enterprises make better sense of data. Turi also enables developers to build apps with machine learning and artificial intelligence capabilities that automatically scale and tune.

Gliimpse is a Silicon Valley-based company that built a personal health data platform that enables any American to collect, personalize, and share a picture of their health data. The focus was particularly around cancer and diabetes patients.

Tuplejump is an Indian-based machine learning company specializing in software that processes and analyzes big sets of data quickly.

Indoor.io is a Finnish company focusing on indoor location and mapping.

Acquisitions Show Clear Areas of Focus but How It Will Materialize is Still Unclear

If you look at the list above, aside from the clear outlier of Carpool Karaoke, the focus for Apple seems centered around artificial intelligence, augmented reality, enterprise and education.

Artificial intelligence is probably the best example of how different expectations and what Apple actually delivers might be. For many, artificial intelligence simply boils down to how smart Siri is. However, intelligence in devices is expressed in many different ways. Learning which color emoji is your preference, learning your most likely route at a given time of the day, or understanding a reference to a time and a place in an email and setting up an appointment for you are all examples of how “intelligence” can be used to make our experiences better.

Machine learning and fast data processing are key to feeding the brain of any artificial intelligence. Analyzing millions of data points to discover patterns that improve predictability is very important in lowering response times and increasing accuracy in our exchanges with an assistant like Siri. Being able to detect users’ emotions might also play a role in that interaction. For the assistant to know if we are getting frustrated or anxious might help the exchange in the same way it would between two humans.

Augmented reality is an area in which Tim Cook has expressed interest and excitement. Aside from gaming which, of course, is a big part of what iPhones are used for, there are commercial experiences that could benefit from augmented reality, mixed reality, merged reality, or whatever else you want to call this blend of the real and digital worlds.

Enterprise is becoming more and more important for Apple, and security plays a big role in selling devices to enterprises. As iPhones and iPads continue to penetrate organizations, they become more of a target for hackers, and Apple needs to stay ahead of the game. While consumers might not always recognize how important security is, Apple has been very passionate about it for quite some time. As we use our devices to store not just pictures and contact info but payment information, health information, and smart home connections, we want our devices, as well as our data, not to be accessible to people with bad intentions.

In education, the battle to displace Chromebooks in K-12 will intensify in 2017, with Microsoft eyeing that segment as a growth opportunity for Windows. It is important for Apple that iPads not be forced to compete on price alone but instead add value to the offering beyond devices. Looking at applications and tools to educate as well as manage students is certainly a way to do that.

My Wish List for 2017

Considering the areas I have discussed above, there are a few things I would like to see Apple focusing on in 2017.

A More Conversational Siri – I have mentioned before how my relationship with Siri has been improving over time. This is good and bad at the same time. Good because I appreciate it. Bad because I want more. As my dependence on Siri grows, first in my car and then everywhere through my AirPods, I want Siri to rely less on my iPhone screen and become more conversational. Apple understands that less time looking at my screen does not mean I will think any less of my iPhone but I realize that, for conversational AI, the progress will be slow.

More Tools for Education – Swift Playgrounds was a great example of how Apple could do more to future proof our kids with the kind of skills they will need when they grow up. AI is here to stay and, instead of worrying about the threat of job losses, we should be investing in preparing the next generation with the set of skills required to get a job. While this is a much bigger issue than any single company could solve, I think Apple is in a good place to get kids engaged at an early age, not just with coding and problem-solving skills, but also with fostering creativity, imagination, and innovation.

Better Collaboration Tools – Collaboration is broader than just working with someone else. While I would like to see Apple focus on better collaboration tools for work, it is at home that I more urgently need help. If you have kids, you know running a home is as complex as running a company. School, after-school activities, and work all blend together to create a scheduling nightmare only resolved with great collaboration skills.

More than any other company, Apple owns households and I would like to see more apps and tools to help households come together; not just for scheduling but also for monitoring and sharing. Without wanting Apple to give me whatever the digital equivalent of my daughter’s journal key is, I want to make sure my daughter is safe when she is online. Of course, teaching her how to do so is the first thing but there are more steps Apple could take to provide increased safety without hindering the experience. I am hoping machine learning will help with creating a more proactive approach to online safety as whitelisting websites, which is currently what most solutions boil down to, does not make for a rich experience. Sharing not just content but access to our smart homes across devices and family members could also be improved. Helping make our home life easier will pay dividends, especially at a time when the fight to own our home is intensifying among digital assistants. While having an assistant that connects with many smart home devices is valuable, having one that does not let me forget to pick up my kid from karate is priceless.

The Devil Is in The Detail

As you can see, my list is not about iPhone features and sexy new technologies. It is about practical experiences that improve my everyday life, something Apple has done for a long time. It is also something that is hard to see when you first buy a product and hard to market at the point of sale. The challenge for Apple will be to continue to stay focused on delivering better experiences, rather than getting distracted by proving they can innovate by delivering sexy gadgets.

Top 10 Tech Predictions for 2017

Predicting the future is more art than science, yet it’s always an interesting exercise to engage in as a new year comes upon us. So with the close of what was a difficult, though interesting year in the technology business, here’s a look at my predictions for the top 10 tech developments of 2017.

Prediction 1: Device Categories Start to Disappear

One of the key metrics for the relative health of the tech industry has always been the measurement of unit shipments and/or revenues for various categories of hardware-based tech devices. From PCs, tablets and smartphones, through smartwatches, smart TVs and head-mounted displays, there’s been a decades-long obsession with counting the numbers and drawing conclusions from how the results end up. The problem is, the lines between these categories have been getting murkier and more difficult to distinguish for years, making what once seemed like well-defined groupings become increasingly arbitrary.

In 2017, I expect the lines between product categories to become even blurrier. If, for example, vendors build hand-held devices running desktop operating systems that can also snap into or serve as the primary interface for a connected car and/or a smart home system, what would you call that and how would you count it? With increasing options for high-speed wireless connectivity to accessories and other computing devices, combined with OS-independent tech services, bots, and other new types of software interaction models, everything is changing.

Even what first appear as fairly traditional devices are going to start being used and thought of in very different ways. The net result is that the possibility for completely blowing up traditional categorizations will become real in the new year. Because of that, it’s going to be time to start having conversations on redefining how the industry thinks about measuring, sizing, and assessing its health moving forward.

Prediction 2: VR/AR Hardware Surpasses Wearables

Though it’s still early days for head-mounted virtual reality (VR) and augmented reality (AR) products, the interest and excitement about these types of devices is palpable. Yes, the technologies need to improve, prices need to decrease, and the range of software options needs to widen, but people who have had the opportunity to spend some time with a quality system from the likes of HTC, Oculus, or Sony are nearly universally convinced that they’ve witnessed and partaken in the future. From kids playing games to older adults exploring the globe, the range of experiences is growing, and the level of interest is starting to bubble up past enthusiasts into the mainstream.

Wearables, on the other hand, continue to face lackluster demand from most consumers, even after years of mainstream exposure. Sure, there are some bright spots and 2017 is bound to bring some interesting new wearable options, particularly around smart, connected earbuds (or “hearables” as some have dubbed them). Overall, though, the universal appeal for wearables just isn’t there. In fact, it increasingly looks like smartwatches and other widely hyped wearables are already on the decline.

As a result, I expect revenues for virtual reality and augmented reality-based hardware devices (and accessories) will surpass revenues for the wearables market in 2017. While a clear accounting is certainly challenging (see Prediction 1), we can expect about $4 billion worldwide for AR/VR hardware versus $3 billion for wearables. Because of lower prices per unit for fitness-focused wearables, the unit shipments for wearables will still be higher, but from a business perspective, it’s clear that AR/VR will steal the spotlight from wearables in 2017.
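
To see how lower revenue can still mean higher unit shipments, here is a trivial back-of-the-envelope sketch; the average selling prices are assumptions for illustration only, not figures from the prediction.

```python
# Revenue figures come from the prediction above; the average selling prices
# (ASPs) are hypothetical, chosen only to show that revenue and unit shipments
# can rank in opposite orders.
arvr_revenue, wearables_revenue = 4_000_000_000, 3_000_000_000  # from the prediction
assumed_arvr_asp, assumed_wearable_asp = 500, 120               # assumptions

arvr_units = arvr_revenue / assumed_arvr_asp            # ~8 million units
wearable_units = wearables_revenue / assumed_wearable_asp  # ~25 million units

print(f"AR/VR: ~{arvr_units / 1e6:.0f}M units, wearables: ~{wearable_units / 1e6:.0f}M units")
# Higher revenue for AR/VR, yet far fewer units shipped than wearables.
```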

Prediction 3: Mobile App Installs Will Decline as Tech Services Grow

The incredible growth enabler and platform driver that mobile applications have proven to be over most of the last decade makes it hard to imagine a time when they won’t be that relevant, but I believe 2017 will mark the beginning of that unfathomable era. The reasons are many: worldwide smartphone growth has stalled, app stores have become bloated and difficult to navigate, and, most importantly, the general excitement level about mobile applications has dropped to nearly zero. Study after study has shown that the vast majority of apps that get downloaded rarely, if ever, get used, and most people consistently rely on a tiny handful of apps.

Against that depressing backdrop, let’s also not forget that the platform wars are over and lots of people won, which means, really, that nobody won. It’s much more important for companies who previously focused on applications to offer a service that can be used across multiple platforms and multiple devices. Sure, they may still make applications, but those applications are just front-ends and entry points for the real focus of their business: a cloud-based service.

Popular subscription-based tech services such as Netflix and Spotify are certainly both great examples and beneficiaries of this kind of move, but I expect to see many different flavors of services grow stronger in 2017. From new types of bot-based software to “invisible” voice-driven interaction models, the types of services we spend a lot of our 2017 computing time on will be much different than in the mobile apps era.

Prediction 4: Autonomous Drive Slows, But Assisted Driving Soars

There’s no question that autonomous driving is going to be a critical trend for tech industry and automotive players in 2017, but as the reality of the technical, regulatory, and standards-based challenges of creating truly autonomous cars becomes more obvious in the new year, there’s also no question that timelines for these kinds of automobiles will be extended in 2017. Already, some of the early predictions for the end of the decade or 2020 have been moved into 2021, and I predict we’ll see several more of these delays in the new year.

This doesn’t mean a lot of companies—both mainstream and startup—won’t be working on getting these cars out sooner. They certainly will, and we should hear an avalanche of new announcements in the autonomous driving field throughout the year from component makers, Tier 1 suppliers, traditional tech companies, auto makers and more. Still, this is very hard stuff (both technically and legally) and technology that potentially places people’s lives at stake is a lot different than what’s required to generate a new gadget. It cannot, nor should it be, released at the same pace that we’ve come to expect from other consumer devices. If, God forbid, we see some additional fatalities in the new year that stem from faulty autonomous driving features, the delays in deployment could get much worse, especially if they happen via a ridesharing service or other situation where ultimate liability isn’t very clear.

In spite of these concerns, however, I am convinced we will see some critical new advancements in the slightly less sexy, but still incredibly important, field of assisted driving technologies. Automatic braking, car-assisted crash avoidance, and other practical assisted driving benefits that can leverage the same kind of hardware and artificial intelligence (AI)-based software being touted for fully autonomous driving will likely have a much more realistic impact in 2017. Truth be told, findings from a TECHnalysis Research study show that most consumers are more interested in these incremental enhancements anyway, so this could (and should) be a case where the current technologies actually match the market’s real needs.

Prediction 5: Smart Home Products Consolidate

Most of the early discussion around the smart home market has been about standalone products, designed to do a specific function and meant to be installed by the homeowner or tenant. The Nest thermostat, August smart lock, and various security camera systems are classic examples of this. Individually, many of these products work just fine but, as interested consumers start to piece together different elements into a more complete smart home system, problems quickly become apparent. The bewildering array of different technical standards, platforms, connectivity requirements, and more often turns what should be a fun, productive experience into a nightmare. Unfortunately, the issue shows few signs of getting better for most people (though Prediction 6 offers one potential solution).

Despite these concerns, there is growing interest in several areas related to smart homes including distributed audio systems (a la Sonos), WiFi extenders and other mesh networking products, and smart speakers, such as Amazon’s Echo. Again, connecting all these products can be an issue, but so are more basic concerns such as physical space, additional power adapters/outlets, and all the other aspects of owning lots of individual devices.

Because of these issues, I predict we’ll start to see new “converged” versions of these products that combine a lot of functionality in 2017. Imagine a device, for example, that is a high-quality connected audio speaker, WiFi extender and smart speaker all in one. Not only will these ease the setup and reduce the physical requirements of multiple smart home products, they should provide the kind of additional capabilities that the smart home category needs to start appealing to a wider audience.

Another possibility (and something that’s likely to occur simultaneously anyway), is that the DIY market for smart home products stalls out and any potential growth gets shifted over to service providers like AT&T, Comcast, Vivint and others who offer completely integrated smart home systems. Not only do these services now incorporate several of the most popular individual smart home items, they’ve been tested to work together and give consumers a single place to go for support.

Prediction 6: Amazon Echo Becomes De Facto Gateway for Smart Homes

As mentioned in Prediction 5, one of the biggest challenges facing the smart home market is the incredibly confusing set of different standards, platforms, and protocols that need to be dealt with in order to make multiple smart home products work together. Since it’s extremely unlikely that any of these battles will be resolved by companies giving up on their own efforts and working with others (as logical and user-friendly as that would be), the only realistic scenario is if one device becomes a de facto standard.

As luck would have it, the Amazon Echo seems to have earned itself that de facto linchpin role in the modern smart home. Though the Echo and its siblings are expected to see a great deal of competition in 2017, the device’s overall capabilities, in conjunction with the open-ended Skills platform that Amazon created for it, are proving a winning combination. Most importantly, the Echo’s Smart Home Skill API is becoming the center point through which many other smart home devices can work together. In essence, this is turning the Echo into the key gateway device in the home, allowing it to essentially “translate” between devices that might not otherwise be able to easily work together.
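
For context on what working through the Echo looks like in practice, here is a heavily simplified sketch of a cloud function handling a single Smart Home Skill directive. The field names follow the general shape of Amazon’s Smart Home Skill API, but the payload is trimmed down for illustration, and a real skill would also need to handle discovery, account linking, state reporting, and error responses.

```python
# Illustrative sketch only: a minimal AWS Lambda handler for an Alexa Smart Home
# Skill directive. The directive/response shape is simplified for illustration.
import uuid

def lambda_handler(event, context):
    """Handle a single Smart Home directive (simplified sketch)."""
    directive = event["directive"]
    namespace = directive["header"]["namespace"]
    name = directive["header"]["name"]

    if namespace == "Alexa.PowerController" and name in ("TurnOn", "TurnOff"):
        desired_state = "ON" if name == "TurnOn" else "OFF"
        # In a real skill, this is where you would call the device maker's own
        # cloud API to actually switch the light, plug, or lock.
        print(f"Setting endpoint power state to {desired_state}")

        return {
            "event": {
                "header": {
                    "namespace": "Alexa",
                    "name": "Response",
                    "payloadVersion": "3",
                    "messageId": str(uuid.uuid4()),
                    "correlationToken": directive["header"].get("correlationToken"),
                },
                "endpoint": directive["endpoint"],
                "payload": {},
            }
        }

    # Discovery, other capabilities, and error handling are omitted from this sketch.
    raise NotImplementedError(f"Unhandled directive: {namespace}.{name}")
```

The point of the sketch is the gateway role described above: every vendor plugs its devices into the same directive-and-response pattern, which is what lets the Echo “translate” between products that would not otherwise work together.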

While other devices and dedicated gateways have tried to offer these capabilities, the ongoing success and interest in the Echo (and any ensuing variants) will likely make it the critical component in smart homes for 2017.

Prediction 7: Large Scale IoT Projects Slow, But Small Projects Explode

The Internet of Things (IoT) is all the buzz in large businesses today, with lots of companies spending a great deal of time and money to try to cash in on the hot new trend. As a number of companies have started to discover, however, the reality of IoT isn’t nearly as glamorous as the hype. Not only do many IoT projects require bringing together disparate parts of an organization that don’t always like, or trust, each other (notably, IT and operations), but measuring the “success” of these projects can be even harder than the project itself.

On top of that, many IoT projects are seen as a critical part of larger business transformations, a designation that nearly guarantees their failure. Even if they aren’t part of a major transformation, they still face the difficulty of making sense of the enormous amount of data that instrumenting the physical world (a fancy way of saying collecting lots of sensor data) entails. They may generate big data, but that certainly doesn’t always translate to big value. Even though analytics tools are improving, sometimes it’s just the simple findings that make the biggest difference.

For this reason, the potential for IoT amongst small or even tiny businesses is even larger. While data scientists may be required for big projects at big companies, just a little common sense in conjunction with only a few of the right data points can make an enormous difference for these small companies. Given this opportunity, I expect a wide range of simple IoT solutions focused on traditional businesses like agriculture and small-scale manufacturing to make a big impact in 2017.

Prediction 8: AI-Based Bots Move to the Mainstream

It’s certainly easy to predict that Artificial Intelligence (AI) and Deep Learning will have a major impact on the tech market in 2017, but it’s not necessarily easy to know exactly where the biggest benefits from these technologies will occur. The clear early leaders are applications involving image recognition and processing (often called machine vision), which includes everything from populating names onto photos posted to social media, to assisted and autonomous driving features in connected cars.

Another area of major development is with natural language processing, which is used to analyze audio and recognize and respond to spoken words. Exciting, practical applications of deep learning applied to audio and language include automated, real-time translation services which can allow people who speak different languages to communicate with each other using their own, familiar native tongue.

Natural language processing algorithms are also essential elements for chatbots and other types of automated assistance systems that are bound to get significantly more popular in 2017, particularly in the US (which is a bit behind China in this area). From customer assistance and technical support agents, through more intelligent personal assistants that move with you from device to device, expect to have a lot more interactions with AI-driven bots in 2017.

Prediction 9: Non-Gaming Applications for AR and VR Grow Faster than Gaming

Though much of the early attention in the AR/VR market has rightfully been focused on gaming, one of the main reasons I expect to see a healthy AR/VR hardware environment in the new year is because of the non-gaming applications I believe will be released in 2017. The Google Earth experience for the HTC Vive gave us an early inkling of the possibilities, but it’s clear that educational, training, travel and experiential applications for these devices offer potential for widespread appeal beyond the strong, but still limited, hard-core gaming market.

Development tools for non-gaming AR and VR applications are still in their infancy, so this prediction might take two years to completely play itself out. However, I’m convinced that just as gaming plays a critical but not overwhelming role in the usage of smartphones, PCs and other computing devices, so too will it play an important but not primary role for AR and VR devices. Also, in the near term, the non-gaming portion of AR and VR applications is quite small, so from a growth perspective, it should be relatively easy for these types of both consumer and business-focused applications to grow at a faster pace than gaming apps this year.

Prediction 10: Tech Firms Place More Emphasis on Non-Tech Fields

While many in the tech industry have great trepidation about working under a Trump administration for the next several years, the incoming president’s impact could lead to some surprisingly different ways of thinking and focus in the tech industry. Most importantly, if the early chatter about improvements to infrastructure and enhancements to average citizens’ day-to-day lives comes to pass, I predict we will see more tech companies making focused efforts to apply their technologies to non-tech fields, including agriculture, fishing, construction, manufacturing, and many more.

While these projects may not be as big, as sexy, or as exciting as building the coolest new gadgets, the collective potential benefits could prove much greater over time. Whether it’s through simple IoT-based initiatives or other kinds of clever applications of existing or new technologies, the opportunity for the tech industry to help drive the greater good is very real. It’s also something I hope they take seriously. Practical technologies that could improve crop yields by only a few percent, not just for a few of the richest farms but for all the smallest farms in the US, for example, could have an enormously positive impact on the US economy, as well as on the general population’s view of the tech industry.

Some of these types of efforts are already underway with smaller agro tech firms, but I expect more partnerships or endeavors from bigger firms in 2017.

2017 Predictions

A couple of weeks ago, I reviewed my 2016 predictions with a view to seeing what I got right and wrong, and why. This week, it’s time for a new set of predictions for 2017. I’m going to do it mostly on a per-company basis and I’m also going to include one big question for each of these companies.

Alphabet

At Alphabet this year, I see the belt-tightening trend of the past year continuing with more fallout for the Other Bets. Specifically, I think we might see Nest integrated more tightly into Google’s new own-brand hardware initiative under Rick Osterloh. To the extent Google is pushing its own brand deeper into hardware, it makes sense to have Nest be a part of that, so I wouldn’t be surprised if we see some slightly different branding at the very least. I would also expect Google to announce a Pixel 2 in the fall and probably also new entries in other hardware categories like smartwatches (where Android Wear appears to be flailing), tablets, and laptops. As a result of all of this, I suspect Google’s relationship with all its OEMs, especially Samsung, will worsen in 2017. I would also expect Google to pay a big fine to the EU at some point to settle and move on from its competition case there. We’ll likely see YouTube launch its TV service this year into an increasingly crowded market that already features Sling, Sony, AT&T, and soon Amazon and Hulu.

Big question: will the Google Fiber division be shut down or spun off entirely?

Amazon

I would absolutely expect Amazon to keep growing like gangbusters on the e-commerce side – its ability to not just maintain but accelerate growth has been the big upside surprise of the last couple of years, driven in part by the increasing contribution of third-party sellers who now make up over half of its unit sales. I’d also expect the AWS business to continue to generate growing revenues and profits, though it will likely come under increasing pressure from Microsoft’s cloud efforts. The Echo devices will continue to sell well, but Amazon’s biggest challenge around Alexa is to get it into more devices that leave the home because a virtual assistant can only be really useful if it’s always with you. We may see an Alexa app for iPhone or Android (it would do better on the latter, where it could be made the default) but, ultimately, Amazon probably needs to launch its own phones (again) to really make this work. An Alexa-centric phone would be a lot more successful than Amazon’s last shopping-centric effort.

Big question: can Amazon replicate its success in a handful of major markets in others?

Apple

We’re already seeing the usual reports of supply chain cuts with regard to the new iPhones but I suspect we’ll see year on year growth over at least the first couple of quarters, if not the full year. In the latter part of the year, much depends on what Apple actually launches – if it’s the big bang, tenth anniversary release we’ve been led to expect, I suspect we could see another super-cycle of sales (though, of course, the downside may well be another year of declines from late 2018 onwards). If it’s more of an incremental release, then I suspect we’ll see a dip again.

Regardless, Apple now has a massive installed base of devices which will upgrade with some frequency and will, therefore, drive a large number of sales and revenues from Services, notably the App Store, and increasingly from subscription content like Music. I think Apple finally needs to put out a subscription video service in 2017 but it seems to have backed off from that goal recently. I suspect iPad shipments will start to flatten out, while Mac sales should bounce back a little from a down year. Apple Watch sales should be healthy but not much above this year’s.

Big question: can Apple drive overall revenue growth in 2017?

Facebook

Facebook has already said it expects revenue growth to slow a little in 2017 as ad load saturates and increasing ad load stops being a driver of revenue. But given the combined effects of a growing user base, rising ad prices, the growth of Instagram, and other drivers, I suspect we’ll still see very healthy ad revenue growth from Facebook. It’s less clear we’ll see growth in the rest of the business, which continues to be a tiny fraction of the total. Though Facebook has made some progress in new flavors of payments as well as e-commerce and Oculus hardware, the company’s own guidance suggests these won’t drive meaningful revenue in the short term. Increasing monetization of WhatsApp and Messenger will provide another boost to ad revenue in 2017, though likely modest as Facebook protects the core user experience in these much more intimate settings. I suspect Facebook will continue to face challenges with its internet access efforts, from Free Basics to drones, and may well start to back off on some of these in 2017.

Big question: will we see any products from Facebook’s new hardware group?

Microsoft

In hindsight, 2016 was something of a comeback year for Microsoft, with revenue growth starting to turn around, several well-reviewed consumer products across hardware and software, and accelerating momentum in cloud. But there are still big challenges – it’s increasingly clear Windows 10 adoption is going more slowly than the company forecast and it’s had to abandon smartphones entirely. Gaming is the one bright spot in terms of generating meaningful consumer revenue, while search is a useful secondary consumer revenue source. But beyond that Microsoft’s consumer strategy is still patchy and mostly revolves around free apps and services. Its strategy for competing with the Amazon Echo and Google Home seems to be focused on enabling OEMs. That could lead to some interesting new devices and probably plays to Microsoft’s strengths but we could see a situation in which Microsoft still has to make its own hardware to show OEMs the way.

Big question: do we see a Surface-branded smartphone reboot in 2017?

Samsung

Samsung’s 2016 was really two very different stories: up until about September, it looked like a great year, with the new Galaxy S products and the Note7 favorably reviewed and selling well. Then the Note7 fires began and things went downhill pretty rapidly. As we head into CES this week, we still have no official explanation from Samsung for the fires, and US wireless carriers are pushing updates to brick the remaining Note7 devices in the wild. As such, this cloud will hang over any announcements Samsung makes at CES and throughout the first part of 2017. Whether it continues through the rest of 2017 depends on two things: a clear statement from Samsung of the cause of the fires and of what it will do to prevent similar problems in future devices, and a big launch in the first half of the year that wows consumers and doesn’t suffer from any battery issues. Beyond that, Samsung will complete its Harman acquisition, which looks really smart (though we likely won’t see many synergies until next year), and the semiconductor business should continue to be a useful secondary driver of revenue and profit.

Big question: do we see a big shift in smartphone strategy from Samsung in 2017 in response to increasingly challenging market conditions?

Snap

It seems increasingly likely we’ll see an IPO from Snap (formerly Snapchat) in 2017. So much of what the company will do in the coming months will be geared towards driving its valuation up. That means more emphasis on becoming a serious alternative to Google and Facebook for advertisers and moving beyond its current novelty/experimental status. We’ll probably see more big content deals designed to increase video engagement and, therefore, ad revenues and more efforts to track ad effectiveness, providing analytics tools similar to what other platforms offer. We’ll also likely continue to see the same breakneck pace of feature rollouts which has characterized Snapchat from the beginning. And I wouldn’t be surprised if we see version two of Spectacles, perhaps with some rudimentary AR capabilities.

Big question: is Snap able to convince investors it’s another Facebook, not another Twitter?

Twitter

Speaking of Twitter, this is a make-or-break year for both the company and CEO Jack Dorsey. 2016 was the year in which the company tried and largely failed to grow its monthly active user base, and I suspect we’ll see very modest growth in 2017, too. The fact the company still hasn’t begun providing daily active user numbers suggests they aren’t pretty, and engaging a large audience remains a key challenge. Jack Dorsey seems to simultaneously want to over-control the product (driving out several executives in the process) while not really having enough time to do it properly, given he’s also CEO of Square. All this will come to a head in 2017, and either we’ll see an unexpected turnaround or Dorsey will be forced out.

Big question: does Twitter finally address its abuse problem in a satisfactory way in 2017?

Rapidly Diffusing Technology

With the mass proliferation of smartphones and a rapidly maturing consumer base making smartphones their primary computing devices, a fascinating new dynamic is emerging. Our company, Creative Strategies, spent a lot of time in the late 80s modeling adoption cycles of new technologies. Using the market data in hand, along with discussions with academics and people like Geoffrey Moore, who published a seminal book on the subject called Crossing the Chasm, models were built to understand the conditions that drove technologies into the mainstream. In those days, models measured adoption in decades, predicting how technologies went from early innovators into the hands of average consumers.

Then, much of the effort was built around adoption cycles for personal computers that fit on desks and, eventually, ones that fit on your lap. It took decades just to get to the point where it was common for a household to have a personal computer. Now, around 3 billion humans across the globe have a computer that fits in their hands and goes with them everywhere. We are in uncharted territory and at a unique point in technology history, with this many people owning such a powerful pocket computer, continually connected to the internet and other people, with instant access to buying and selling, and a wealth of information and data accessible at all times. Put all this together and we are watching technology diffuse and become adopted into the mass mainstream at unprecedented rates. This is happening with both hardware and software.

I first started thinking about this trend several years after Apple released the iPad. As that product launched, we were still using traditional models to predict and anticipate adoption cycles of new technology. We shortened the time span somewhat, due to more mature market dynamics, but we did not expect the iPad to become the fastest-adopted new technology product of all time, and forecasts were well under actual iPad sales rates. So, we focused our research and analysis on why this happened and what we could learn. This was the first time we needed to step back, honestly look at how much had changed in the market, and rethink how technology will diffuse in the modern age.

The iPad was a useful case study in adoption cycles, not just for how quickly it went mainstream but also for how quickly it seemed to hit its addressable market. Once the iPad’s S-curve was on a steep incline, forecasters began to modify their underestimated forecasts but then began to overestimate. Some people were predicting a potential market size north of one billion tablets. Sales started to slow and it quickly became a replacement market with a total active installed base of around 350-400 million units, well short of the billion-plus forecasts and well short of the PC installed base of ~1.5 billion and the smartphone installed base of ~3 billion. While the iPad went mainstream faster than any tech hardware product in history, it also hit its maximum total addressable market extremely quickly. This is the new dynamic I think we should expect as we move into a cycle of rapidly diffusing technology.
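
To make the shape of that dynamic concrete, here is a minimal sketch of the logistic S-curve behind these adoption models. Every parameter here is hypothetical, chosen only to illustrate the "fast diffusion, low ceiling" pattern described above; none of the numbers are actual forecasts.

```python
# A logistic (S-curve) adoption model with made-up parameters, illustrating how
# a product can race up its curve yet saturate at a far smaller installed base
# than a slower-growing product with a higher ceiling.
import math

def logistic_adoption(t, ceiling, growth_rate, midpoint):
    """Cumulative installed base (millions) at time t (years) on a logistic curve."""
    return ceiling / (1 + math.exp(-growth_rate * (t - midpoint)))

# Hypothetical scenarios, loosely in the spirit of the tablet example:
# a ~400M-unit ceiling reached quickly vs. a ~1B-unit ceiling reached slowly.
for year in range(0, 9):
    fast_low_ceiling = logistic_adoption(year, ceiling=400, growth_rate=1.5, midpoint=3)
    slow_high_ceiling = logistic_adoption(year, ceiling=1000, growth_rate=0.5, midpoint=6)
    print(f"Year {year}: fast/low ~{fast_low_ceiling:.0f}M vs. slow/high ~{slow_high_ceiling:.0f}M")
```

The early years of the fast curve look like they point to a much bigger market than they do, which is exactly why forecasts built on the steep part of the curve overshoot.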

We can make some of the same observations about the wearables market, specifically things like smartwatches and fitness wearables. This market shot off like a rocket but then slowed very quickly, a dynamic that made predicting its exact market size difficult. The slowing year-over-year growth of the category came quickly and, while the market size for wearables may still be larger than it appears today, the category also diffused quickly in certain developed markets.

We also see this trend in software. Perhaps the most recent example was the fascinating phenomenon of Pokemon Go. There has never been software that went from zero to 500 million as fast as Pokemon Go. While not all of those 500 million people are still using the app, we saw a fascinating phenomenon of software diffusion in the form of hordes of people walking around public spaces hunting for digital creatures hiding in the physical world.

The groundwork has been laid for technology, both hardware and software, to diffuse rapidly in short periods of time, which, again, makes it very difficult to predict market size. A product, app, or service may appear to be addressing a larger market than it actually is in its early stages. This means metrics around hardware sales, app downloads, service subscriptions, and the like may be extremely misleading. The problem is, we have no idea to what degree they are misleading.

Twitter, Fitbit, and GoPro are just recent examples of things that grew quickly but also hit their maximum market opportunity just as quickly. When these companies went public, it was on the basis of a much larger market opportunity than now appears to be the case. It’s possible Snapchat may fall into this category, but we don’t know, which is the new challenge of our modern era.

One last point. There are clearly things which will not diffuse as quickly because they are truly new and groundbreaking types of technologies (AR and VR, for example) and consumers have less familiarity or context with them. In those cases, I believe we can still assume longer-than-usual adoption timelines. I’m also not saying the companies or categories mentioned cannot still grow their market opportunity with innovations, only that the “easy” growth is over, and it ended quicker than anticipated.

All in all, I’m convinced those of us who study these markets are in for new challenges in our approach as we try to size market potential for consumer technology. It means we need to address research with new methodologies, ask different sets of questions, understand deeper nuances of each consumer segment and, overall, be willing to abandon old practices and assumptions to create new ones for the modern era.