Are Glasses Needed for AR and Mixed Reality to Take Off?

Three very exciting new technologies on the horizon have much of the tech world buzzing, and billions of dollars are being invested in this new area of tech.

The first technology to hit the scene was VR, or virtual reality, when Oculus introduced its Rift headset at CES. The Rift was the big hit of that show, and in 2014 Facebook bought Oculus for $2 billion.

Shortly after that, Microsoft introduced its HoloLens project. It was entirely different from Oculus' VR approach, in which you are enclosed in a virtual world; Microsoft's glasses, or goggles, let you see through the lenses and superimpose virtual images and objects onto the real scene in front of you. Microsoft called this augmented reality, or AR.

Since then, a third term and technology has entered the tech lexicon: mixed reality, which tries to bridge the gap between VR and AR and tie the two worlds together.

Most people got their first glimpse of AR, with its virtual objects displayed on a smartphone, when Niantic introduced its Pokemon Go game to the world. The game let players place virtual objects onto whatever scene they were viewing and made AR a household name.

Since then, Apple has created a robust AR platform with ARKit, and hundreds of AR apps are already available on iPhones and iPads. Google has also jumped into the AR game with ARCore, its AR developer toolkit that lets Android developers create AR apps for the Android smartphone platform. In both cases, though, these AR apps are delivered on a smartphone or tablet. The big question in Silicon Valley these days is whether AR will ever gain a broad audience if it is only used on a smartphone, or whether AR or mixed-reality glasses will be a more natural way for people to view and interact with these applications in the future.

At the recent Wall Street Journal conference, John Hanke, chief executive of Niantic Inc., discussed the success of the Pokemon Go AR app and made a crucial prediction.

Speaking about glasses, the Journal reported:

“He said he thought it would take “probably in the order of five years” before the technology is mainstream. Augmented reality technology debuted on the smartphone, Mr. Hanke said, “because you build it for the platform that exists.” AR will reach “full fruition when we get to the glasses,” Mr. Hanke said. With glasses, the potential for AR “is immense because it can be woven into your daily life.”

Over my 35 years in Silicon Valley, I have learned that when pioneers of a technology weigh in on a subject they are involved with, it is best to listen to what they say. Mr. Hanke is a pioneer in AR, and since millions of people have played Pokemon Go, he has the kind of knowledge and experience to predict where AR is headed. As he states in the WSJ article, he created Pokemon Go for the platform that was already there, in this case the smartphone. But he does not believe AR or mixed reality will reach its real potential without some form of AR or mixed-reality glasses or goggles.

On the other hand, Apple's Tim Cook is over the moon with AR for the iPhone. In multiple interviews, he has stated his excitement for AR, called it a game changer for the iPhone, and committed to working closely with developers to create the most innovative AR apps possible using ARKit for iOS.

Google seems to be equally excited about AR on Android smartphones, although it has not been as vocal as Tim Cook has been about AR on the iPhone. The good news is that AR on a smartphone or tablet will be an essential step in getting people familiar with the concept of AR and mixed reality, and I believe it will play a prominent role in making AR glasses or goggles more acceptable once they do hit the market.

If Apple or Google had tried to push AR or mixed reality into the mainstream via glasses today, it would have been a flop. Just look at the disaster Google had with its Google Glass project a few years back, and you can see why glasses would be a hard sell even today. Getting people used to AR apps on smartphones and tablets will start the ball rolling. Once the technology is ready to deliver AR glasses that work well, are stylish, and integrate easily into our daily lifestyles, in four or five years as Mr. Hanke predicts, glasses will become the preferred way to work with and interact with AR and mixed-reality apps.

For their part, Apple, Google, Microsoft, and many others are doing much R&D around AR and mixed-reality glasses that would be acceptable to mainstream users, and all have filed multiple patents on various glasses designs already. But as Mr. Hanke of Niantic says, it could be at least another five years before the technology is here to make the kind of glasses that will bring AR to the masses in a more personal and interactive way.

I am excited about AR on smartphones but agree wholeheartedly with Mr. Hanke of Niantic that it will take some form of AR glasses or goggles to fulfill the promise of AR and mixed reality for the mass market. In the meantime, we should get some stunning AR apps for use on smartphones and tablets, but keep in mind that these are essential stepping stones, and that AR and mixed reality will eventually need AR glasses to reach their full potential.

Solving Multi-Device Dilemmas

Ever since individual computing devices began to proliferate, people have faced a frustrating dilemma: how do you get your devices to work better together?

Yes, it’s great that we all now have a range of impressively powerful and capable devices that let us do almost anything, anywhere. The fact that we have computers in our pockets that are now more capable than room-sized supercomputers of a few decades ago is clearly a wonderful thing. And today’s super-slim, lightweight notebooks are a godsend for those who suffered through generations of “luggables.”

Ironically, though, the more capable the individual devices become, the more frustrating are the challenges that come with not having them work together more effectively. In the past, all the serious work only happened on PCs, so that was the only logical choice for many tasks. Similarly, large capacity storage was also only available on PCs, meaning they were the only place you needed to go to look for whatever files you desired.

Now, of course, high-capacity storage exists on everything from smartphones, to tablets, to PCs, to fingernail-sized storage cards, and the unlimited capacity of cloud-based storage services means it’s getting harder and harder to find the files, images or other data that we need. Plus, the amazing compute resources and connectivity options available on everything from the smallest wearables on up means it’s possible to do complex tasks across a huge range of computing devices.

The net result is a confusing mix of devices, platforms, services, and communications options that makes it increasingly difficult to maintain an organized digital life.

Several companies have made efforts to overcome these challenges, but most are intentionally limited to their own operating systems or other environments. Apple, for example, has had the ability to display certain types of notifications that originate on iPhones on Mac screens since the introduction of Continuity features in OS X Yosemite, back in 2014.

Even having simple connections between multiple devices doesn't always help, though. In fact, sometimes it gets downright annoying. Yes, I appreciate that a phone call to my iPhone will also appear on the screen of a Mac that I may be simultaneously using, but more often than not, I'm still going to answer on the phone. Plus, I don't really appreciate every iOS device in sight starting to ring. Now, responding to a text or instant message is certainly easier with the full-sized keyboard of a connected Mac than by tapping on an iPhone screen, but the fact that I (like the majority of other iPhone owners) am usually using a Windows PC along with an iPhone means this trick doesn't do much good.

Microsoft is also attempting to address these multi-device issues. In the new Windows 10 Fall Creators Update, the company has introduced a feature called Continue on PC that lets you move your browsing sessions from your smartphone to your PC. The setup process is a bit lengthy, and it does require you to install an app on your iOS or Android phone, but it's a step in the right direction. A number of third-party vendors are also working on similar solutions, but the seamlessness of the experience and their overall effectiveness are still unknown.

With the increasing number of smart connected devices in our homes, the longer-term vision for these multi-device scenarios needs to expand as well. Samsung presented an intriguing vision of this concept at its recent developer conference in San Francisco, describing the ability to move certain tasks across devices, such as automatically transferring your exact position in a Facebook timeline from a Galaxy smartphone to a Samsung smart TV. The devil is in the details for these kinds of applications, however, and while the concept sounds great, the execution remains to be seen. Plus, there is the concern that, like Apple, Samsung will limit these multi-device scenarios to its own branded products, something that would dramatically reduce their potential impact.

The process of moving from an individual device-focused world to one where all of our devices, regardless of brand or platform, can function together seamlessly is bound to be a long one. Overcoming the challenges necessary to make these multi-platform jumps isn't easy, and brand-centric thinking doesn't help. Plus, doing these types of handoffs effectively is going to require a lot more intelligence about how, where, and for what applications we use our various devices. Most people do things a bit differently, so automatically adapting to individual habits is going to be essential for long-term success.

Despite these challenges, however, there’s no question that we need to evolve our view and usage of multiple device scenarios into easy-to-use, easy-to-traverse everyday experiences.

Analyzing Apple’s Retail Growth

This past week saw the opening of Apple’s newest flagship retail store, this one on Chicago’s Michigan Avenue. The new store is enormous and strikingly designed, and is part of the company’s new model for roughly one-fifth of its nearly 500 stores. It also reflects the shift Apple is trying to make in how people perceive its stores, from which it’s now dropped the “Store” moniker and which it would like customers to think of as “town squares” or gathering places as much as retail experiences. Today, though, I’d like to take a step back from Apple’s reinvention of its stores and look instead at the growth of its store footprint over the last few years, and what it tells us about the role of retail and Apple’s regional focus.

Approaching 500

Any day now, Apple should open its 500th store, a milestone it’s been approaching for some time now. The chart below shows how that number has grown over the past eight years, from 273 stores in September 2009 to 499 at the end of September 2017:

As you can see, the growth rate has sped up and slowed at various points during that period, and in the last year in particular seems to have slowed overall, as the company focused instead on what the new experience should look like and revamped a number of existing stores.

Regional Distribution Favors the US and Europe

Those stores, though, are far from evenly distributed across the globe, with the US still very over-represented and other regions under-represented. The pair of charts below shows the mismatch between the contribution of each region to Apple’s revenues and the mix of stores in those markets. The first chart shows the percentage share of revenues and stores, while the second shows the ratio between the two:

As you can see, the Americas as a region has a share of retail stores which vastly outweighs its share of revenues – 61% of stores, but only 42% of revenues. At the opposite end of the scale is Japan, whose 8 stores (2% of the total) belie its 8% contribution to revenues. Europe is very close to parity between the two measures, while Greater China still has a 2x mismatch between its revenue share and its share of stores, and store presence in the rest of Asia-Pacific outside of Greater China also lags its revenue share.
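The store-to-revenue mismatch described above is just the ratio of the two shares. As a quick back-of-the-envelope check (using only the percentages quoted in this piece, not live Apple data):

```python
# Share of Apple's retail stores vs. share of Apple's revenues,
# using the figures quoted in the text above.
regions = {
    # region: (store share, revenue share)
    "Americas": (0.61, 0.42),
    "Japan":    (0.02, 0.08),
}

for region, (stores, revenue) in regions.items():
    ratio = stores / revenue  # >1 means over-stored relative to revenue
    print(f"{region}: {ratio:.2f}x store share vs. revenue share")
```

The Americas come out at roughly 1.45x (over-represented in stores), while Japan comes out at 0.25x (under-represented), matching the mismatch the charts illustrate.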

Growth Over the Last Three Years Favors China, the US, and AP

It’s interesting, then, to look at where Apple has opened new stores over the last few years, with the country breakdown shown in the chart below:

As you can see, a single country stands out starkly here: China. It has seen 31 new stores since May 2014, over 40% of the 75 new stores opened during that period. Together with the US, where 16 new stores were opened during that time, it accounts for 63% of total stores opened. Many countries saw either a single net new store or none at all during that period, including Canada. Of the other countries where more than one new store was opened, three are in Europe (France, Italy, and Germany) and two are in AP (Hong Kong and Australia), with one in the Middle East, which Apple also reports as part of its Europe region.

Many New Countries in the Last Few Years

Apple currently has retail stores in 23 countries, with ten of those added since 2011. Of those, three have been in Asia Pac, and all but three have been in regions where Apple has had little penetration previously, including Latin America and the Middle East.

So the story of expansion over the last few years has three parts:

  • Continuing to expand in markets where Apple’s retail presence has been strong, notably the US and to a lesser extent Europe
  • Expanding massively in China, a region that’s suddenly become very important to Apple, and which now has more Apple Stores than any country after the US, passing the UK in the past year
  • Continuing to add a presence in new countries, by way of testing the water – Apple has ten countries with three or fewer stores, and five with just a single store.

Overall, that expansion has slowed a little over the past year as Apple has launched its new strategy and store concepts, but I would guess that as the company returns to growth and the strategy is implemented in a core set of stores, we’ll see it speed up that expansion and continue to focus on those three main sets of markets in much the same way.

Facing Up to Face ID

I want to get my two cents in on the iPhone X's Face ID feature so I can jump up and down for the rest of eternity shouting: "I told you so, I told you so!"

UNSEEMLY ANGST

The angst being spewed over the Face ID feature is as stupid as stupid gets. Why? Not so much because it’s stupid — which it is — but because no matter how many times pundits and observers make this same mistake, they proudly make it all over again every time Apple introduces a new, revolutionary feature to one of their products.

It is not a good idea to have a strong opinion on a new tech experience that you have not experienced. ~ Benedict Evans, @BenedictEvans 10/18/2017

Rather than growing ever more humble with each repeated failure, pundits simply double-down, growing ever more strident with each iteration of the same oft-told argument.

There have been a seemingly infinite number of stupid articles written about Face ID, but for the sake of simplicity (and my sanity) I'll focus mostly on one article written by Ewan Spence for Forbes entitled: New iPhone X Surprises Reveal Apple's Flawed Vision. (All quotes are from Spence unless otherwise indicated.) Now, I don't always agree with Spence, but I respect his opinion. So it's all the more unsettling that he, like so many others, has decided to pre-judge Face ID, and for all the wrong reasons.

DESIGN

(H)as Apple placed too much focus on design and not enough on the consumer?

No.

Oh! I’m sorry. Was that a rhetorical question? Then, by all means, let’s proceed.

PREJUDGING IS GOOD FOR ME, BUT NOT FOR THEE

Until independent testing in the real world can be performed, the benefits of Face ID over Touch ID remain to be seen.

Well, that’s kinda true. But Spence’s statement would sound far more convincing if he didn’t then spend the bulk of his article telling us that he was going to prejudge Face ID as a failure right now BEFORE independent testing in the real world could be performed. Here’s an example of how that works:

Environmental factors are going to come into play with lighting, clothing, eyewear and more all impacting on the efficiency.

You don’t know that.

YOU.

DON’T.

KNOW.

THAT.

So stop saying it. Can’t you wait? Can’t you just wait until independent testing occurs before you make that kind of judgment?

Remember when the AirPods debuted and the pundits had a field day telling us how AirPods were going to fly out of our ears and we’d end up having to buy replacements over and over again?

Yeah, good times, good times. And today what are pundits saying about losing AirPods?

(Crickets)

BEING UNFAIR

To be fair, being unfair is what we humans do. Criticizing technology before we’ve seen or touched it — as stupid as that seems — is the norm, not the exception. The less we know about a product, the more certain we are that we can judge its value. But just because we regularly and routinely do this, that doesn’t make it right. Especially for tech writers, who really should know better.

It’s a two-step process:

STEP 1: I don’t understand (whatever).

STEP 2: I must learn more about (whatever).

Ha, ha, ha! Just kidding! No, no. No one does that! Here’s how it actually works:

STEP 1: I don’t understand (whatever).

STEP 2: I’m not stupid.

STEP 3: (Whatever) must be stupid.

Behavioral psychologists call that cognitive dissonance. Aesop called it sour grapes. I call it clickbait.

SECURITY

Spence goes completely off the rails by maintaining that Touch ID is already doing everything Face ID is promising to do, so why bother doing it? He makes this same ridiculous argument not once, but three separate times.

What problem does (Face ID) solve that wasn’t already solved by Touch ID?

Security, Dude. Security.

Does Face ID offer a better solution to Touch ID?

Yes.

While Face ID solves the problem of recognizing a user, don't forget this problem was already solved with Touch ID.

No, it wasn’t.

For a tech writer to say the above is simply dumbfounding. Apple stated that Face ID is 20 times more secure than Touch ID. (A ratio of 1 error in 50,000 for Touch ID versus 1 error in 1,000,000 for Face ID.)

That’s TWENTY TIMES more secure.

And Spence thinks more and better security isn’t solving a problem? Has Spence even met technology? Of COURSE, better security is solving a problem. Saying otherwise isn’t disingenuous — it’s demented.
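The "twenty times" figure is simple arithmetic on Apple's quoted error rates, and it checks out:

```python
# Apple's quoted odds of a false match (a random stranger unlocking your phone).
touch_id_odds = 50_000      # Touch ID: 1 error in 50,000
face_id_odds = 1_000_000    # Face ID: 1 error in 1,000,000

improvement = face_id_odds / touch_id_odds
print(improvement)  # → 20.0
```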

INSIPID IMAGINARY PROBLEMS

Remember when people were upset because they couldn’t open their phones while wearing gloves? Now they’re upset because they can’t unlock their phones while wearing ski masks.

But that’s nothing. I’m old enough to remember when people were warning us that bad guys would be able to activate Touch ID by cutting off our thumbs! Oh no!

Now, this is just speculation but — follow me here — I'm guessing that if bad guys simultaneously had access to both our phones and our thumbs, they could just place our thumb on the home button without having to cut it off first. Or they could just politely ask us to reveal our four- or six-digit PINs by threatening to cut off our thumbs. I'm thinking either of those options might be more realistic, right?

But that was then, and this is now. We’re not going to go down that blind alley again with Face ID, are we? The heck we’re not.

IN YOUR FACE

Today's new "They're going to cut off your thumb" is "Your girlfriend is going to unlock your phone while you're asleep." (Or, more likely, when you're passed out.) Will that work? No. You have to have your eyes open to unlock the phone using Face ID. More importantly, and I don't mean to go all "Dear Abby" on you, if your significant other is going to try to unlock your phone while you're sleeping, you're in the wrong relationship.

(Note: Spence didn’t make this argument. But others have.)

CREEPY

Facial recognition is just plain creepy, and Apple is going to have an uphill battle convincing consumers that they want to store a complex 3D map of their faces in their phones. ~ Gizmodo

Well, that’s all perfectly true if by “going to have an uphill battle convincing consumers” Gizmodo means “going to make a boatload of money selling virtually every iPhone X Apple makes”.

SELLING YOUR DATA

Here’s another argument that Spence didn’t make, but since it comes up all the time, it’s best to address it.

Apple itself could use the data to benefit other sectors of its business, sell it to third parties for surveillance purposes, or receive law enforcement requests to access its facial recognition system — eventual uses that may not be contemplated by Apple customers. ~ Al Franken

Well, that might all be true…

…unless you watch the presentation or read Apple’s White paper on Face ID.

Your biometric data never leaves your device. Instead, it's stored in encrypted form in your phone's Secure Enclave, where it can't be accessed by the operating system or by any of the apps running on your phone. And what's stored in the Secure Enclave isn't actually your fingerprint or an image of your face. Face ID uses the camera image and dot pattern to create a mathematical model of your face, and that mathematical model can't be reverse-engineered; Touch ID similarly stores a mathematical representation of your fingerprint rather than the fingerprint itself. The fact is, all of this was known before Senator Franken wrote his letter, and Apple said as much:

In its response letter, Apple first points the Senator to existing public info — noting it has published a Face ID security white paper and a Knowledge Base article to "explain how we protect our customers' privacy and keep their data secure". It adds that this "detailed information" provides answers to "all of the questions you raise". ~ Techcrunch

So all the questions were answered before they were asked. But let’s not let a little thing like facts stand in the way of speculation, grandstanding and fearmongering.

IN YOUR POCKET

Certainly the lack of Touch ID means unlocking a phone in your pocket, subtly under a table, or while being jostled in a busy commuting environment is now going to be a lot harder.

That’s it? That’s the best you’ve got? Face ID sucks because I can’t unlock my phone in my pocket?

Yeah, sure, because the last time I unlocked my phone in my pocket was…NEVER.

I could be wrong (I'm not), but most of mankind (womankind too) generally, you know, actually looks at their phones when using them. If your argument is that you can't unlock your phone while it's in your pocket, that should be a not-so-subtle clue that you're bringing nothing to the table.

RE-EDUCATED

The user base will have to be re-educated and something that is ubiquitous in the smartphone community – fingerprint recognition – has been removed from the iPhone X.

Hmm. What a difficult problem this must be. Let’s see, let’s see. How could Apple possibly perform the difficult task of re-educating their users on how to use Face ID?

Oh, I know! They could ask them to, you know, look at their phones! Like the way they already do, like 20,000 times a day?

Wowza. Spence is seriously arguing that making people look at their phones requires re-education? C’mon, Dude. This smacks of desperation.

FUTURE POSSIBILITIES

The Touch ID sensor on the iPhone’s home button was great at what it did, but I think it had limited application elsewhere. It’s possible, for example, that it could have been used for biometrics, but I think that will be the province of the Apple Watch and maybe even the AirPods — two devices that are constantly in touch with one’s body.

The cameras and sensors in Face ID open up far more possibilities. Apple tends to introduce features that work when introduced and then those features become even more valuable when combined with features that are introduced later. This is the benefit of producing the whole widget. Apple can plan long term.

Right now, I can’t envision how Face ID will be used other than for security and AR. But that just reveals my lack of imagination and foresight. I may not be able to see exactly how the cameras and the sensors in Face ID will be used in the future, but I don’t have to be Nostradamus to see that they have the potential to do all sorts of new and interesting things. Apple isn’t just introducing a new feature that duplicates an old feature. With Face ID, they’re introducing the future.

APPLE IS DOOMED

While Apple focuses on design that benefits itself, Android’s adoption is increasing while the overpriced and over gimmicked iPhone X is going to be late and have a detrimental effect on Apple’s overall performance.

Say what now? Let’s just take a gander at some recent tech headlines.

CNBC Poll: 64% of U.S. households have at least one Apple product

RBC: Apple headed into a multiyear supercycle

“Apple Is Sex” | Scott Galloway | Cannes Lions 2017 – YouTube

Survey: A record 78% of U.S. teens own iPhones

There isn’t a tech company in the world that doesn’t want to be “challenged” by competitors the way Apple is.

DEJA VU ALL OVER AGAIN

You know what’s going to actually happen? The same thing that always happens. Apple is going to remove the audio jack, be criticized and then be copied. Oh wait, that’s already happened. What I mean is, Apple is going to introduce Face ID, and everyone and their brother is going to first criticize it and then scramble to emulate it. It’s as predictable as the rising and the setting of the sun.

A STUDY IN STUPIDITY

When I look at the iPhone X I see a design that works for Apple’s benefit first, with the end-user in second. I see technical solutions that translate to buzz-words that challenge logic. I see new hardware that addresses an old problem but offers fewer benefits with its newer decision. I see design choices that are in place to emphasize Apple’s branding while weakening the consumer experience.

Well, that’s the kind of stuff you’re going to see when you have your head placed firmly up your derrière.

“The highest form of ignorance is when you reject something you don’t know anything about.” ~ Wayne W. Dyer

THE FUTURE

There’s an old saying that history repeats itself, but that’s not quite accurate. Rather, the lessons of history repeat themselves until they’ve been learned. I think we’re going to see this behavior again the next time Apple introduces something new. Some lessons, it seems, are never learned.

Podcast: Samsung Developer Conference, Alphabet-Lyft, Microsoft Windows 10 Fall Creators Update

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing the recent Samsung Developer Conference and Bixby 2.0, the Alphabet investment in Lyft and its implications for the ride-hailing market, and the formal release of Microsoft’s Windows 10 Fall Creators Update, Windows Mixed Reality headsets, and the Surface Book 2.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

News You Might Have Missed: Friday, October 20th, 2017

Windows 10 Fall Creators Update

Windows 10 Fall Creators Update includes a number of new features, including a replacement for OneDrive Placeholders, support for Windows Mixed Reality, the ability to have a better workflow between Windows 10 PCs and iPhones and Android phones, and an improved Photos app experience. For enterprise users, the added security features in Windows Defender are probably the most compelling updates. Mixed Reality support is probably the feature that will be the focus of holiday advertising.

Via Microsoft  

  • I have been using the Windows 10 Fall Creators Update since it became available to Insiders back in April, and I have to say I have been enjoying most of all the UI improvements across the board. It really feels like a much more modern operating system, although the things that never change make you realize that it is Windows after all, mostly the file structure and the language.
  • If I had to pick my top three features, I would say improved inking, My People, and Photo Remix. Especially on Surface devices, inking is a game changer for things like document reviews. My People brings the people who matter to you to the front of your workflow in an intelligent way, so that each person's preferred communication channel is accounted for. And while Photo Remix is not something I use every day, for me it is the best example of how Windows 10 wants to move from productivity to creation.
  • While there are not many people using Edge on iPhone and Android who can really benefit from the wider set of functionalities, just the acknowledgment that there must be a focus on linking phone and PC is a huge deal. This is how people build their workflow today, and Apple has been first in line in exploiting these changes to create a strong tie between Mac and iPhone and between Mac and iPad. Yet there are far more iPhone owners with a PC than with a Mac, and this should be a priority for Microsoft, especially when it comes to making Surface appealing.
  • I also think that the Windows 10 Fall Creators Update on Surface really shows the best of Microsoft, which of course made the launch of the Surface Book 2 perfectly timed.

Surface Book 2

With the latest 8th Gen Intel Core processors and NVIDIA GeForce GTX 1050 and 1060 discrete graphics options, Surface Book 2 delivers all the power you want or need in the design we have been accustomed to since the first Surface Book. The 13” Surface Book 2 weighs 3.38 lbs, and the 15” model weighs 4.2 lbs. Surface Book 2 13” will be available for pre-order beginning November 9th in the United States and additional markets around the world, along with Surface Book 2 15” in the United States, at any Microsoft Store and on Microsoft.com. Delivery will begin on November 16th.

Via Microsoft

  • This is the first true update to the Surface Book line. The original Book was refreshed about one year after launch, but more to address some of the issues of the first generation than to really deliver an update.
  • I see the Book 2 as Microsoft delivering on its promise of serving a wide audience, especially demanding users in the top segment of the market, by giving options not only on form factor but also on power and capabilities.
  • It is certainly not a device for everybody, not just because of the price, but because not everybody needs that much horsepower.
  • The target audience I see, aside from the obvious higher-end customers, is users who are interested in the Surface Studio but are too mobile to justify that purchase. In tablet mode, with the support of the Surface Dial, Surface Book 2 will deliver the same Studio experience in a more compact and mobile form factor.
  • With more PC vendors wanting to focus mostly on the high end, I am sure the Book 2 will raise the bar on what we see coming from Lenovo, Dell, and HP in particular.

Samsung Creates Single IoT Platform: SmartThings Cloud

This week at its annual developer conference in San Francisco, Samsung took the first step toward becoming a company that can better connect the dots across different areas to deliver a superior experience. That step was the unification of three pre-existing platforms, SmartThings, Samsung Connect, and ARTIK, under the new SmartThings Cloud platform.

Via Samsung  

  • While Samsung’s executives did not share many details on tools and SDKs, a single “go to” platform should certainly make things easier for developers and partners.
  • What Samsung’s mobile lead DJ Koh outlined was a vision of a consistent experience across Samsung’s devices, something many have been expecting from Samsung for a while but that, because of the way the company is structured, had yet to happen. This is now finally changing.
  • SmartThings Cloud will not be limited to Samsung devices. It is an open platform with open tools. I am sure that over time we will get a better understanding of which functionalities will be limited to Samsung devices, which theoretically should offer a superior experience since they can be fully controlled.
  • A critical part of the platform in my view is the security layer that ARTIK brings to developers and partners. This will appeal to both consumers and enterprises and could serve as a strong differentiator for Samsung. While security is rarely a selling proposition in the consumer segment, we see a much higher priority given to security when it comes to the home.

Samsung launches Bixby 2.0 and Project Ambience

Bixby was launched with the Galaxy S8 but suffered a slow rollout and heavy criticism for not being “smart.” Bixby 2.0 promises to be ubiquitous and personal. Bixby everywhere will be achieved in two ways. First, more Samsung devices will ship with Bixby inside – the first TV with Bixby will ship in 2018. Second, Project Ambience will allow adding Bixby to non-Samsung devices either through a dongle or through a chip embedded in a product. Samsung did not disclose when either the dongle or the chip will be commercially available.

Via Samsung  

  • The big challenge for Samsung in my view is to transition consumers’ thinking from Bixby as a user interface on one device to an interface across devices and then to a full-blown assistant.
  • One of Viv’s founders was on stage at the keynote talking about how all assistants today fail because they are not personal and not particularly smart. A statement that most would agree with. However, we have not had a chance thus far to actually see Viv in action and Samsung will still have to prove Bixby can deliver with the added Viv intelligence.
  • Ultimately, I still think Bixby’s strength will be in making our home experience less painful and more frictionless. This can be a huge value add in its own right as we move from the connected home to the smart home. Setting the right expectations in the messaging will help consumers understand and appreciate Bixby for what it is. This seems to be something that even internally at Samsung has to be clarified, as we heard different messages on stage.
  • The idea of being able to make devices you already have in the home smart by adding a dongle is quite smart in itself, especially when you talk about white goods that have a long replacement cycle. How this will work in reality beyond a speaker scenario like the one demoed on stage is unclear, however. The job of a speaker is quite easy, but understanding, for instance, what adding the dongle to a fridge will help me do is less obvious.
  • For other manufacturers, I think the appeal of Project Ambience will be the ability to add smartness at a lower cost than developing their own solution. Yet why they would add Bixby over an Alexa or Google Assistant is not clear to me, especially given they are more likely to directly compete with Samsung than with Amazon or Google.
  • For now, of course, what Project Ambience shows is the power of having a semiconductor business that can deliver an end-to-end solution from chip to cloud platform.

An Interesting Battle is Shaping Up on 5G

In July 2016, the FCC released its Spectrum Frontiers plan, which allocated up to four large swaths of spectrum in the millimeter wave (mmWave) bands, above 20 GHz, for 5G. This spawned a bit of a land grab, with Verizon snapping up mmWave spectrum through the acquisitions of XO and Straight Path, and AT&T acquiring FiberTower’s assets. It was a windfall for these companies, sort of the tech equivalent of having held onto a house in a lousy neighborhood that suddenly gets hot. With these acquisitions, AT&T and Verizon now own close to 60% of the licensed mmWave spectrum. The FCC still retains about 1/3 of it, and plans 5G auctions at some point. Verizon and AT&T have been marching down the 5G road, testing fixed wireless access in several cities in the mmWave bands as one of the initial use cases.

But even though 5G had been heading in a mmWave, circa-2020 direction, mid-band spectrum, characterized as that below 6 GHz, is proving to be an important contender for 5G as well. T-Mobile was among the big winners in the 600 MHz auctions completed earlier this year, acquiring 31 MHz of nationwide spectrum. In August, the operator announced that it would deploy a ‘5G Ready’ network at 600 MHz, meaning that new equipment from Ericsson would be used that supports both LTE and 5G in that band. Also in August, the FCC opened an inquiry into new opportunities in the 3.7-4.2 GHz band, to be used for the “next generation of wireless services”. This effort is backed by Google and several wireless ISPs, which want to use this spectrum for fixed wireless services. At the same time, T-Mobile and the CTIA are leading an effort to make the 3.5 GHz (CBRS) band more ‘5G friendly’ by lengthening the terms of the licenses and expanding the geographic service areas.

The upshot of this is that mid-band spectrum is emerging as a viable alternative for 5G. One can see the battle shaping up, especially if Sprint and T-Mobile merge, which is looking increasingly likely. Sprint/TMO’s main 5G play would be in their 600 MHz and 2.5 GHz spectrum, plus leveraging their holdings in other bands as well (it should be noted that TMO owns mmWave spectrum serving about 1/3 of the country, through MetroPCS). It’s not clear how active they would be in a future FCC mmWave spectrum auction.

This is setting up a pretty interesting marketing battle and debate over 5G. The mmWave bands offer a huge amount of spectrum, which would deliver orders-of-magnitude improvements in network speed, capacity, and latency. The tradeoff is that mmWave spectrum generally requires line-of-sight, can be affected by weather, and offers a small coverage radius. Providing service in these high spectrum bands will also require the deployment of large numbers of small cells, and we haven’t yet found the formula to do this at scale. There is also still quite a bit of work to be done to develop the beam-forming antennas and other technology required to deliver wireless services in the mmWave bands.

So, what does this mean for the 5G rollout? We will see services marketed as 5G, even using sub-6 GHz versions of 5G New Radio, starting in 2018. AT&T has already prepped us by launching ‘5G Evolution’ in a handful of markets. In reality, these 4.5G, or Gigabit LTE, services will offer considerable improvements in download speeds and latency, which are certainly in the neighborhood of what has been envisioned for the early stages of 5G.

It’s also becoming clear that there will be different flavors of 5G. Gigabit LTE, and other services offered in the mid-bands, will look more like today’s cellular services, supporting broad coverage and mobility. Think of it as a base layer. Then, mmWave band networks will be built in denser urban areas and other targeted coverage deployments, where it makes the best economic sense and where the most subscribers can be reached. The map will look like ‘islands’ of 5G in a sea of LTE and LTE Advanced.

It will be well into the next decade before there is broad coverage of mmWave-based 5G, and there is still some question regarding the extent to which mobility can be supported in these bands or how good the coverage will be in buildings. But thinking about 5G in this way, and with this timeframe, provides a good runway for the technology to evolve. Consider that the average LTE speed is 4-5x what it was only five years ago. Apply that multiplier to 5G, as a base case, and things start getting interesting.

In the meantime, fasten your seatbelts for the upcoming marketing war between T-Mobile/Sprint and Verizon/AT&T, over 5G. With no official body really calling the shots over the definition of 5G, it will be up to the market to decide.

Samsung Aims for Connected Thinking at Developer Conference

Samsung is holding its annual Developer Conference this week in San Francisco. At the day one keynote on Wednesday, it pushed a vision centered on “Connected Thinking” as its major theme for not only the conference but its strategy in relation to its software and services in the coming year. That was reflected in a range of moves designed to bring what have been disparate parts of Samsung together, but it’s apparent that this will be a tall order.

A Single Cloud Platform and Bixby as Connective Tissue

Samsung’s major announcements focused on three key topics:

  • Consolidating Samsung’s disparate Internet of Things cloud offerings
  • Iterating on Bixby, by improving the technology, extending it to new devices, and opening it up more
  • Going all-in with Google on AR through ARCore support on all of this year’s flagship phones.

The Internet of Things moves are focused mostly on using the SmartThings brand (now without Samsung as an umbrella brand) as the consumer lead, while consolidating three separate cloud IoT platforms into one, also now tagged with the SmartThings brand. ARTIK survives as a separate IoT brand, but now focused mainly on modules, while its cloud platform along with the Samsung Connect Cloud announced earlier this year will be folded into the SmartThings Cloud.

The way I see this is that the SmartThings Cloud will be the invisible connective tissue on the back end, while Bixby 2.0 eventually becomes the visible connective tissue in the front end as part of a much more coherent and connected vision for Samsung’s range of devices. Samsung executives pointed out during the keynote that it has arguably the largest number and range of devices in use of any company in the world, but the reality is that it’s always been a pretty disparate range of devices, with only fairly superficial integration between them. A big reason for that is Samsung’s operational structure, which has separate CEOs for each product-centric business unit.

The vision Samsung is pushing now is one where a variety of services on these devices will all be powered by the same cloud back-end, and Bixby will become a cloud-based voice interface which works on more and more of them over time. Bixby 2.0 will shift its personalization and training from the device to the cloud, and will therefore start to build profiles of individual users which can be exposed on a variety of devices, including shared devices like TVs and fridges. In addition to its own devices, it’s going to try to extend Bixby support to a variety of third party devices through modules and dongles as part of what it called Project Ambience, which will Bixby-enable existing home devices, both smart and dumb ones, and connect them to each other.

Significant Challenges Lie Ahead

What’s interesting here is that, even though Samsung controls the operating systems on several of its devices, because it doesn’t control by far the biggest – Android on its smartphones – it is instead building the connective layer between its various devices at the interface level. That means pushing Bixby to become far more than it’s been so far, acting not only as a way to perform tasks previously done through touch on a phone, but increasingly allowing for integration with other Samsung devices like TVs and control of smart home gear through SmartThings integrations.

In reality, though, voice can’t be the only interface and therefore can’t be the only connective layer between these various devices – in time, the integration therefore either needs to grow beyond Bixby, or Bixby itself needs to evolve to the point where it’s more than just a voice interface. In the meantime, the SmartThings brand, now decoupled from the Samsung brand to foster a sense of openness, will nonetheless become the brand for Samsung’s own connected home ecosystem too (replacing Samsung Connect), which may cause some customer confusion.

But those aren’t the only barriers to making this vision work: Samsung needs to overcome both internal and external hurdles if it’s to be successful in creating a truly connected ecosystem. The biggest internal barriers continue to be structural – hearing Samsung executives talk about this week’s announcements both on stage and one-on-one, the language is still far more that of separate companies “partnering” than of a single team working together. The integration announced this week represents progress, but there’s a long way still to go and huge cultural barriers to overcome.

Externally, Samsung needs to convince developers and hardware partners that Bixby is ready for use as a voice platform beyond its smartphones, at a time when it’s got big shortcomings even there. Deeper integration of the Viv technology will certainly help to improve its functionality, as will opening up version two earlier to developers so that the integration can be deeper when it launches to consumers. But the leap Samsung is contemplating here is a huge one, one which other platforms have approached much more gradually and incrementally than Samsung is proposing to do. Samsung would arguably be better served by tackling either third party integration or cross-device support first and then pursuing the other second, rather than trying to do it all at once. The current approach risks over-promising and under-delivering.

The last big challenge is one of adoption – unlike earlier voice assistants, Samsung can’t simply add Bixby to existing hardware, because little of it was designed with voice interfaces in mind. What that means is that it can only grow the Bixby base to the extent that it can grow the base of devices which offer it. In categories like TVs and fridges, that means waiting until next year to even start selling them, and with long refresh cycles, it’ll take many years before penetration is meaningful. Even in smartphones, where Samsung has an installed base of hundreds of millions, it has just 10 million users of Bixby, and we don’t even know how many of those use it daily or weekly. Even if the new SmartThings and Bixby ecosystems work exactly as intended, it will be quite some time before any significant number of consumers actually get to benefit from them.

Amazon’s Alexa Land Grab

Without a doubt, in my mind, Amazon’s Alexa has been the star of the tech industry this year, starting with CES, where the “works with Alexa” banner was first raised with support from nearly all major appliance and smart home brands. The presence of an ambient, always-on smart speaker with a digital assistant has been the single greatest catalyst for the smart home I have seen since I began studying the category. From a consumer perspective, the voice interface eliminated a great deal of friction from how we interact with smart home objects. Our continued research on the category keeps confirming that households with an Amazon Echo have more connected smart home products than those without, and that those customers rapidly add to their smart home gear after buying and integrating an Amazon Echo into their home.

As I wrote last week, the home has become the latest battleground for every company making consumer electronics, and honestly, at the moment, Amazon is the clear leader. The industry, and many consumer electronics makers, seem to be acknowledging Amazon’s lead, as integrating Alexa seems to be foremost in the strategy of many companies. Garmin just introduced an interesting product called the Speak, which puts the Alexa assistant in your car. Sonos joins the list as well, adding to the now more than 500 products with support for Alexa skills and the more than a few dozen with Alexa directly integrated into them.

Amazon is doing a great job partnering here and will leverage this ecosystem of support to go beyond its own first-party hardware and integrate Alexa everywhere. While many may not believe Amazon’s early lead here will be sustainable, I believe it will, on the basis of their business model being cloud first and hardware second. Amazon’s cloud strategy means Alexa gets better over time. So just as your smartphone or PC/Mac gets new capabilities through software updates, so do your smart products connected to Amazon’s cloud. Amazon can rapidly improve the features and intelligence of these products and simultaneously make them all work better together, all through their cloud engine. This is a strong and compelling value proposition. Amazon’s early lead and industry support are going to be a challenge for Google and Apple.

For Google, the real challenge is that they have lost trust and favor with many in the industry, who realized very quickly that Google was a bad partner when it came to Android. It is fascinating to watch history repeat itself from the Microsoft era: for decades, computer companies had only Microsoft as a platform option, and they got so fed up with Microsoft that they desperately wanted an alternative. In walked Google with Android, and hardware OEMs now had another option, and they took it en masse. Now all hardware OEMs are fed up with Google and want another option for the next platform (which is voice and machine learning/AI), and in walks Amazon with another option, and the industry is taking it en masse.

This is another key reason Amazon’s lead is sustainable. The industry is clearly standardizing on Amazon’s cloud for machine learning/AI smarts and on Alexa as the front-end interface to that machine learning/AI platform, and Google will just be relegated to search from an assistant standpoint, not to managing your home or your life. While Google remains in the battle, I’m fairly convinced Amazon will win this next platform, the one hardware companies will use in the wave beyond smartphones. What happened to Microsoft in mobile will happen to Google in the next wave.

So what about Apple? I have been worried for Apple in the home for some time because, if Amazon wins this with a vast hardware-supported ecosystem and more consumers come to depend on Alexa as their assistant, then Siri could be in trouble. My biggest worry for Apple is that Siri does not get the reach that will be necessary in this next wave, which will have a heavy element of machine learning, which leads to AI, and voice assistants as a primary interface. This is not to say the iPhone will become irrelevant, or that much of Apple’s on-device machine learning and eventual AI technology won’t be useful, but I worry Siri will not become the primary agent consumers interact with as their life/home assistant.

The battle for the home will be one where companies have to partner well, and Apple has struggled at this given the vertical nature of their business. One could make a strong argument that they need to allow Siri to be integrated into third-party smart home hardware in order to get the reach Siri needs. You could use CarPlay as an example, but Apple’s challenge with partners is evident in many CarPlay solutions, as the experience with CarPlay is inconsistent and we constantly hear from consumers that it doesn’t always work. Apple does not control all the variables the way they are used to, and the same problems that arise with auto manufacturers will arise when trying to accomplish reach in the smart home.

I certainly am not counting Apple out the way I am Google at this point, but I do worry that the iPhone, iOS, Siri, etc., become secondary agents and not primary ones in this next wave of computing. Apple still has a lot of work to do here, particularly around cloud and first-party cloud services. There is still time, but they are not making the same kind of headway Amazon is at the moment. Amazon’s lead is not too great at this point, but it could get there quickly. Next year’s CES will be very telling about how great Amazon’s lead has become and how much farther it can get in 2018.

Now Everyone Wants to Design Silicon

The move by major technology companies to start designing some custom silicon components, rather than buying off-the-shelf parts from suppliers, has been a long time coming. One of the biggest challenges in the competitive field of consumer electronics arises when companies all use the same components and software platforms as their competitors. Companies competing for the consumer live and die by their ability to be different and stand out from the pack. When you use the same software platforms and components as your competition, you simply swim in a sea of sameness and have a hard time standing out. This is why Apple has developed a fully mature and foundational strategy to design all of the most critical and differentiating components that give its products an edge in the market. So it comes as no surprise that Google has developed a custom SoC for the image processing in its new Pixel 2 smartphones.

While Apple has acquired the rare capability to design not just its image processor but also its CPU, GPU, and a host of other critical silicon components, I don’t expect Google to go that far. I do expect other smartphone OEMs to follow suit and develop any number of custom parts, from a dedicated machine learning/AI chip to an image sensor, an FPGA, or any number of custom ASICs that suit their needs and help them create a differentiated experience. Thinking about who may jump into the waters of designing silicon, I can see Amazon and Microsoft as potentials.

This trend was foreseeable because ARM has wanted to enable it for some time. Many of the companies designing silicon are not truly doing anything incredibly exclusive; they are tweaking generic ARM IP and customizing it for their needs. ARM will let anyone acquire a standard license and use its IP like Lego blocks to start putting together a unique solution. This is exactly what most companies are doing and what will enable many more to do the same. But this trend underlines an important shift in how we think about which underlying components make consumer hardware stand out and become differentiated.

From Chips to Solutions
Back around 2012, I wrote client notes to all the major hardware brands in which I outlined the need for component strategies to move from a few key chip decisions to a broader chip solution as a whole. The industry at the time used the term heterogeneous computing which, simply put, meant building a computing solution from several pieces of silicon, not just one system-on-a-chip that held all the most important parts. The idea behind heterogeneous computing was the unbundling of the SoC, moving certain functions of the SoC to dedicated chips. As Moore’s Law enabled silicon to get smaller and smaller yet remain powerful, chip companies like Intel and Qualcomm found themselves having to make trade-offs in what they designed onto the SoC and what was left off. This opened the opportunity for the heterogeneous solutions that are now commonplace. The big difference between now and a few years ago is that companies are looking to design the co-processors or other compute engines no longer on the SoC themselves, to deliver a specific experience they have in mind.

This move means the value is no longer so much in the SoC but in the total sum of all the components working together to deliver a specific, differentiated experience. So Google can put together a device with the core SoC, which has the CPU, GPU, modem, I/O, etc., and then design or buy the co-processors it wants to deliver something unique. It is the decisions that go beyond the SoC that will now differentiate competitors.

There was a time in my career as an analyst when, analyzing a design, I focused solely on the architecture of a single chip. Over the last few years, the focus has shifted from the chip itself to the entire solution (all the chips and sensors) used in a computer. Moving from single chip design to designing how all the chips work together in a heterogeneous computing environment is where the exciting, and challenging, work is being done for hardware manufacturers.

One could argue that this trend will require even better software engineering than at any point in computing history. While many companies will try to make specific components, the true payoff for innovation will come from those who can best make all the pieces work together as a complete whole. Only a few tech companies do this well, and others are trying to learn. Those who do it well will fend off disruption more easily than those who don’t.

Many years ago, it would have been impossible to predict that the future of consumer hardware brands depends on their ability to design silicon, but that is now the undeniable reality.

The Good, the Bad and the Ugly of Twitter

Last week, actress Rose McGowan’s Twitter account was suspended for 12 hours after she wrote a series of tweets accusing Hollywood producer Harvey Weinstein of raping her. When she said she was being silenced, Twitter responded that her account was suspended because one of her tweets included a private phone number, which violates the code of conduct. This explanation did not convince many, however; at a minimum, it raised questions about the timing of it all. CEO Jack Dorsey took to Twitter to admit that his platform “needs to do a better job at showing that we are not selectively applying the rules.”

It is not the first time Twitter has come under fire, not so much for lack of clarity on what makes up a violation of the code of conduct but for lack of consistency in how those violations are dealt with when reported. Over the past year, as harassment increased, Twitter deployed a series of measures, like the ability to mute a conversation or a user, that seemed aimed at hiding the issue rather than addressing it. Just because I no longer see the abuse and harassment does not mean it has gone away. More importantly, the users who are harassing and abusing others feel their behavior is condoned.

Fresh off the press, there is an internal Twitter email obtained by Wired that outlines new rules Dorsey is readying to release, but I will wait for an official communication before commenting.

Social Media Engagement

Social media drivers differ from person to person and from network to network. I was a reluctant Twitter user. I started using the platform for work in 2009 but did not do so consistently until 2013, when I changed jobs. Twitter quickly became a useful tool for keeping on top of the news. My initial passive networking experience turned into an engaged one as I came to appreciate being able to share my thoughts on the tech world and actively engage with fellow tech watchers. As my engagement grew, I set some rules for how I wanted to use the platform:

– Never say anything I could not stand behind in case it was published as a quote in the press

– Keep it clean-ish

– No Religion

– No Politics

Pretty simple stuff, right? Eight years on, I am proud to say that, except for the last rule, I have been quite diligent in following them. I am sure that, given the current state of affairs in all the countries I have lived in over the years, staying silent rather than breaking my own politics rule would have been the real crime!

In a recent report published by GWI, I discovered that I am not alone in my reliance on Twitter for news. Twitter users are most engaged in reading news stories (57%), followed by liking a tweet (40%) and watching a video (34%). Direct actions such as tweeting a photo or a comment about daily life make up only 23% and 22% of activities, respectively.

Overall engagement on Twitter has been declining since 2013 (-5%), a problem the company has been trying to address without much success. That said, engagement on Facebook over the same period has been declining even more rapidly (-16%), as consumers seem to lean more towards video- and picture-focused platforms such as YouTube and Instagram, up 2% and 14% respectively.

It would be too easy to blame the lower engagement on harassment alone, but I am sure nobody would argue with the fact that harassment is making Twitter less appealing as a platform. Quite a few celebrities have found Twitter too ugly and either left, like Kanye West, Lindsay Lohan, Emma Stone, and Louis C.K., or took breaks and returned, like Leslie Jones, Justin Bieber, and Sam Smith. For now, the return on investment the platform provides me is still positive. The question is, for how long?

Disasters, Emergencies, and Hashtags See the Best of Twitter

Over the past few months, we have had our share of disasters and emergencies to deal with both in the US and internationally. It is at those difficult times that I tend to see the best of Twitter. From breaking news that allows people to keep up to date with a fast-evolving situation to people coming together to help by sharing stories or ways to donate.

But even in those good moments, troll accounts creep into the conversation to dismiss, offend, or sabotage the effort.

On the back of the Rose McGowan incident, two hashtags emerged, bringing attention to harassment on the platform and to sexual harassment across the board. On Friday the 13th, #WomenBoycottTwitter started, calling on women to walk away from the social media platform for a day. Many users, including celebrities, joined in. Not everybody, though, agreed that silence was the best tactic to make a point in this particular situation. I, for one, decided not to be silent and went on Twitter to condemn abuse and do what I do every day: talk tech. I thought that, at a time when many women are being brave in speaking up against abuse, remaining silent was not something I was comfortable with. Also, when it comes to Twitter, it only matters who is on it, not who is not; in other words, you do not notice who does not tweet. Some were also uncomfortable with the fact that the uproar against abuse was somehow limited to white women, when minorities and the LGBT community have been victims of abuse on the platform for a long time.

The original intent was clear and deserves the utmost respect, but the execution was possibly not the best. So, by Sunday night, Alyssa Milano encouraged people to reply “me, too” to her tweet about being a victim of sexual harassment or assault, as a way to show how pervasive the problem is. A new meme was born: #MeToo. Voices were heard from women and men, straight and gay, across countries like the US, UK, Italy, and even more conservative France. The conversation was not limited to Twitter; it took over Facebook as well, engaging more than 4.7 million people.

Burst Your Bubble…Read Some Comments

Twitter succeeded in giving a voice to so many people, making it clear that sexual harassment is not just a Hollywood or tech industry issue but impacts individuals across the world. But even in that strong testimony, the ugliness of Twitter came through. Just take a look at some of the replies posted to comments by more famous women, like Italian actress Asia Argento, and you quickly get a feel for how ugly people can be when they can hide behind a Twitter handle.

Very often, we live in a cocoon of lists of people we follow because we respect them, share their views, or are interested in what they do or say. Without knowing it, we shelter ourselves from all those individuals who more likely than not do not share our views, our beliefs, our values. And I am not talking here about which smartphone ecosystem you prefer but about big stuff like politics, religion, and sexual orientation.

Sometimes that bubble bursts as we get trolled or outright attacked for our views. Other times, we are lucky and just never see the ugly side of Twitter. That does not mean it does not exist. As we have seen since Sunday, just because you do not have a story to share under the #MeToo meme does not mean millions of people in the world don’t have one.

The Dawn of Folding Laptops and Smartphones

In a column I wrote recently on the 25th anniversary of the IBM and Lenovo ThinkPad, I mentioned that I had a chance to interview Mr. Arimasa Naitoh, who is considered the father of the ThinkPad. He created the special Yamato, Japan lab that designed this laptop and has shepherded its design and growth from the beginning.

At the end of the interview, I asked Naitoh-San what technology he sees on the horizon that could impact future designs of the ThinkPad. He said he believed that someday the technology would be available to create a foldable laptop that perhaps could even fit in your pocket.

When he made this comment, I have to admit my first reaction was that if this technology ever became available, it would probably be at least another ten years into the future.

However, the idea of a foldable laptop or even a foldable smartphone intrigued me, so I got on the phone last week with key suppliers I know in Asia and asked them about this idea and what their thoughts were on the concept. To my surprise, they told me that they are working on foldable OLED screens now and that they could have them ready for the market by early 2019.

This idea is not new and has been a vision for mobile manufacturers for some time. Indeed, Samsung has hinted that it could have a foldable smartphone on the market by late 2018. But when I heard these comments or rumors about foldable products, I pretty much saw them as something much farther out in the future and not actually on the horizon.

It appears from talking to a couple of suppliers that both smartphone and laptop makers are now seriously focused on finding ways to design new types of products that could take advantage of a foldable OLED screen. They even have a term of endearment for this kind of screen: FOLED.

Most smartphone and laptop vendors are feverishly trying to come up with multiple product ideas or concepts that could take advantage of this new component. Now I admit that I am still a bit skeptical that they can create a foldable OLED screen in this rumored time frame, but I no longer think that this is a pipe dream or that it is ten years out in the future.

At the laptop level, I think Naitoh-San's idea of a laptop that fits in your pocket may be highly futuristic, but I can imagine a laptop with today's 13-inch screen being folded in half in the not-too-distant future, making it much smaller to cart around and even lighter and thinner. And the idea of a foldable smartphone with as much as a 7-inch screen that, folded, still fits in a pocket now seems feasible in the near future.

What is most intriguing to me about the idea of a FOLED screen is that it gives laptop and smartphone makers an utterly new component that they can let their imaginations run wild with, starting a new round of innovation in mobile computing designs.
While a 7-8 inch smartphone in its current format sounds ridiculous today, if it can be folded in half to fit comfortably in a pocket or purse and then unfolded into more of a tablet, it could move the smartphone into the realm of a serious productivity tool.

And if new laptops with FOLED screens become smaller to carry around and even more portable, especially in a 2-in-1 detachable format, they could change the way people work with their laptops in the future, making them more versatile while still delivering the kind of power needed for high-level productivity tasks.

Now I admit that while I do study design concepts for mobile as part of my research, I am pretty lousy at actually forecasting innovative designs. But I can imagine that if you give Jony Ive at Apple or the design gurus at most smartphone and PC makers a new palette of components to work with, such as a folding OLED screen, they could turn out some pretty amazing new mobile products in the near future.

Given the possible evolution FOLED screens could deliver to mobile designs, I suspect that the current notion that tech companies are no longer innovating may fall by the wayside fast. And knowing that this type of component is just around the corner makes me even more excited about our mobile future and how these new products could impact our increasingly mobile lifestyle.

Tech Inevitability Isn’t Guaranteed

It’s a story that would have been hard to believe a few years back.

And yet, there it was. eBook sales in the US declined 17% last year, and printed book sales were up 4.5%. What happened to the previous forecasts for electronic publishing and the inevitable decline of print? Wasn’t that widely accepted as a foregone conclusion when Amazon’s first Kindle was released about 10 years back?

Of course, there are plenty of other similar examples. Remember when iPad sales were accelerating like a rocket, and PC sales were declining? Clearly, the death of the PC was close at hand.

And yet, as the world stands five years later, iPad sales have been in continuous decline for years, and PC sales, while they did suffer some decline, have now stabilized, particularly in notebooks, which were seen as the most vulnerable category.

Then there’s the concept of virtually all computing moving to the cloud. That’s still happening, right?

Not exactly. In fact, the biggest industry buzz lately is about moving some of the cloud-based workloads out of the cloud and back down to “the edge,” where end devices and other types of computing elements live.

I could go on, but the point is clear. Many of the clearly inevitable, foregone conclusions of the past about where the tech industry would be today are either completely or mostly wrong.

Beyond the historical interest, this issue is critical to understand when we look at many of the “inevitable” trends that are currently being predicted for our future.

A world populated by nothing but completely electric, autonomous cars anyone? Sure, we’ll see an enormous impact from these vehicles, but their exact form and the timeline for their appearance are almost certainly going to be radically different than what many in the industry are touting.

The irreproachable, long-term value of social media? Don't even get me started. Yes, the rise of social media platforms like Facebook, Twitter, Snapchat, LinkedIn and others has had a profound impact on our society, but there are already signs of cracks in that foundation, with more likely to come.

To be clear, I’m not naïvely suggesting that many of the key trends that are driving the tech industry forward today—from autonomy to AI, AR, IoT, and more—won’t come to pass. Nor am I suggesting that the influence of these trends won’t be widespread, because they surely will be.

I am saying, however, that the tech industry as a whole seems to fall prey to “guaranteed outcomes” on a surprisingly regular basis. While there’s nothing wrong with speculating on where things could head and making forceful claims for those predictions—after all, that’s essentially what I and other industry analysts do for a living—there is something fundamentally flawed with the presumption that all those speculations will come true.

When worthwhile conversations about potential scenarios that may not match the "inevitable direction" are shut down with groupthink (sometimes by those with a vested interest), there's clearly a problem.

The truth is, predicting the future is extraordinarily difficult and arguably even impossible to really do. The people who have successfully done so in the past were likely more lucky than smart. That doesn't mean, however, that the exercise isn't worthwhile. It clearly is, particularly in developing forward-looking strategies and plans. Driving a conversation down only one path when there may be many different paths available, however, is not a useful effort, as it potentially cuts off what could be even better solutions or strategies.

Tech futurist Alan Kay famously and accurately said that “the best way to predict the future is to invent it.” We live and work in an incredibly exciting and fast-moving industry where that prediction comes true every single day. But it takes a lot of hard work and focus to innovate, and there are always choices made along the way. In fact, many times, it isn’t the “tech” part of an innovation that’s in question, but, rather, the impact it may have on the people who use it and/or society as a whole. Understanding those types of implications is significantly harder to do, and the challenge is only growing as more technology is integrated into our daily lives.

So, the next time you hear discussions about the “inevitable” paths the tech industry is headed down, remember that they’re hardly guaranteed.

Q3 2017 Earnings Preview

We’re about to kick off earnings season for Q3 2017, and so I’m doing my usual quarterly preview. My focus here isn’t so much predicting what we’ll see as suggesting the things to look for when these companies report. As usual, I’ll tackle the main companies I track in alphabetical order.

Alphabet

Alphabet’s last set of earnings was pretty impressive, with strong performance pretty much across the board and no obvious areas of weakness. Key trends continued, with ad revenue from Google’s own sites continuing to grow much faster than ad revenue from third party sites, though growth in the latter has accelerated recently. In some ways, the most interesting revenue line to look at is the “Other” bucket within the Google segment, because that’s where a lot of Google’s new focus areas sit, including its first party hardware push and enterprise cloud services. Both have been major focus areas for Google in the past year, but Q3 should have pretty low hardware revenue given it will have been the lull before new hardware was launched earlier this month, so if there’s strong growth here, that’ll be a good sign cloud services are finally starting to grow commensurate with the investment Google is now making in this area. As I noted last week, one other thing to look out for is traffic acquisition costs in the Google sites business, and how these are tracking relative to revenue.

 

Amazon

Amazon demonstrated conclusively last quarter that it’s in the midst of another period of higher investment and therefore lower margins – its revenues grew strongly, but its profits were way down. Investments in AWS capacity, fulfillment infrastructure and employees, and even heavier hiring in higher-paid headquarters roles like software engineers and sales people for AWS and advertising all drove up costs and drove margins down to levels we haven’t seen in a couple of years. It’s likely we’ll see some of those same trends this quarter for the same reasons, with Amazon likely hiring significantly ahead of the holiday season. One of the biggest things to watch for this quarter is how Amazon will report the financials of the Whole Foods business – my guess is that it will simply be an additional reporting line along the lines of AWS, but we’ll have to wait and see. It has historically been more profitable than Amazon’s core business, so it should provide something of a boost to overall margins, again like AWS.

Apple

To my mind, by far the most interesting thing to look at in Apple’s earnings will again be its guidance. Its overall revenues and profits for this past quarter should be fairly predictable and in line with the outlook it provided a quarter ago, but there’s a small possibility it will be at the low end of its guidance if iPhone 8 sales in the first few days on sale were less than expected. However, the December quarter is entirely unpredictable at this point, with the iPhone X going on pre-sale right before the earnings call and on retail sale right after. Apple will certainly know what kind of supply it’s likely to have in the remainder of the quarter for that device, and that in turn will to a large extent determine how Apple’s overall December quarter goes. Weak supply could depress overall iPhone sales as many would-be buyers wait out the supply constraints, while strong supply would give Apple a massive quarter off the back of both strong sales and much higher ASPs (I wrote about all this in detail in an earlier piece). Early indications of Watch Series 3 sales and an ongoing reduction in the rate of revenue decline from China are also worth watching for.

Facebook

Coverage of Facebook in the news recently has been dominated by things that have nothing to do with its financials, at least from a direct perspective, and I’d expect its earnings call to feature a few questions about Russian influence and whether the measures Facebook is taking to mitigate that will have any longer-term impact on its ad business. But ad load saturation and its predicted effect on ad revenue growth is the thing many investors will be watching for, and I’d also expect lots of questions about Facebook’s big video push and the effect that will have on margins as the company invests heavily in content and projects lower margins due to revenue sharing. We got some hints about that on last quarter’s call but I’d expect more detail this quarter as Facebook moves this project along. It’ll also be interesting to see how many new hires it had in Q3, given that it promised faster hiring in the second half of the year.

Microsoft

Microsoft continues its transition from a product-driven to a services-driven company, but the headline on all of its earnings releases for the last two years has been all about the cloud. Microsoft’s growth rate in cloud services ticked up significantly last quarter, and one of my big questions then was whether that was a blip or a sign of a change in trajectory – my guess is the former. Meanwhile, the phone business is finally far enough in the rearview mirror that it should no longer be a drag on the business, while Surface revenue growth following recent product launches should turn positive again this quarter after some declines driven by shifting release cycles. The PC business overall, which of course is a major driver of Windows revenue, continues to be a somewhat unpredictable animal from quarter to quarter, and foreign currency has been an ongoing drag on Microsoft’s results overall too.

Netflix

Netflix will kick off earnings season on Monday afternoon, and its guidance was for just under 4.5 million new subscribers, the vast majority overseas. Netflix’s guidance, though, has been somewhat poor lately, missing on both the high and low side, and it’s always possible that it could see significantly fewer or more net adds. My guess is that it might overshoot its guidance slightly in both geographies, but there’s no reason to expect a significant departure. The bigger question is what its guidance for Q4 looks like, given that it’s just announced price increases which will come into effect in the quarter. I wrote about those price increases here last week, and overall I’d expect them to dent net adds in the US (the only region where the price increases are happening) in the quarter, but still to generate positive net adds there given that it’s usually a healthy quarter for growth. However, the impact this time around will likely come all in the fourth quarter, unlike Netflix’s last increase, so it’s possible that we’ll see a bigger and more concentrated impact this time around. I’d also expect management to be asked about the mix of customers between Netflix’s three service tiers – SD, HD, and 4K – given that the price increases affect these three bands differently.

Samsung

Samsung has already pre-announced very strong earnings, as well as the impending departure of its CEO. But as usual we’ll have to wait for the full results before we know how the different business units fared. Based on recent results and overall market trends, it’s very likely that both strong demand for and increasingly high prices for memory were major drivers, with the smartphone business likely also having a good quarter off the back of the Galaxy S8 launch earlier this year. I’d expect there to be some questions about how Samsung will replace its CEO, who suggested in his resignation letter to employees that he felt it was time for some younger blood at the helm.

Snap

Snap has been struggling ever since it went public to generate rapid user growth, while some market observers have recently reduced their forecast for its ad revenue growth rate as well. There’s been nothing to suggest any of the underlying trends will have changed much in Q3, with no new obvious revenue generating or growth-inducing features released in the quarter. Ongoing rollout of Snapchat’s self-service and automated tools for ad buying should continue to help drive revenue per user, and I would hope that the company will also be more transparent about some of its engagement metrics, which it hasn’t updated consistently as a public company. CEO Evan Spiegel has recently acknowledged that he needs to be more communicative now that Snap is public, and I hope we’ll see evidence of this during earnings – Snap’s earnings releases so far have been utterly spartan affairs, and management has been cagey and standoffish during the calls.

Twitter

Twitter continues to be more or less stuck in the same difficult spot as last quarter, with user growth likely not much better, and revenue per user likely continuing to be fairly stagnant. The company says it’s in the midst of yet another strategy shift and revamp of its ad tools, but has shown little concrete evidence that the shift will generate better results going forward. Meanwhile, it’s tinkered at the edges of its product, making changes which aren’t likely to change engagement or user growth meaningfully while leaving larger issues such as abuse and the complexity of on-boarding as a new customer unfixed. Live video has been a big focus, and Twitter will likely update the metrics it has shared previously on this topic, including the number of unique viewers. But we still need to see more information from Twitter about how much time those viewers spend watching video, and whether those views are generating meaningful revenue. I also live in eternal hope that Twitter might at some point finally provide daily active user numbers!

Virtual Reality’s Desktop Dalliance

The hardware landscape for virtual reality evolved dramatically in just the last few weeks, with new product announcements from Samsung, Google, and Facebook that span all the primary VR platforms. While the new hardware, and some lower pricing, should help drive consumer awareness around the technology, perhaps the most interesting development was both Microsoft and Facebook demonstrating desktop modes within their VR environments. These demonstrations showed both the promise of productivity in VR and the challenges it faces on the road to broader adoption.

Microsoft’s Mixed Reality Desktop Environment
For the last few months, Microsoft and its hardware partners have been slowly revealing more details about both the Mixed Reality headsets set to ship later this month and the upcoming Windows 10 Fall Creators Update that will roll out to users to enable support of the new hardware. At a recent event in San Francisco, Microsoft announced a new headset from Samsung that will ship in November, which joins products from HP, Lenovo, Dell, and Acer that will ship in October. During that event, Microsoft Fellow Alex Kipman gave attendees a tour of the Cliff House, the VR construct inside Windows 10 where users interact with the OS and their applications.

At the time, it seemed clear to me that one of the obvious advantages Microsoft brought to the table was the ownership of the OS. By having users move within the OS virtually, you decrease the number of times the user must jump between the 3D world of VR-based apps and the 2D world of today’s PC desktop environment. More importantly, the Cliff House also offered a productivity-focused room where you could go and set up a virtual desktop, utilizing your real-world keyboard and mouse to use traditional desktop apps. Essentially, it is a desktop space where your monitor is as wide and as tall as you desire to make it, providing the virtual real estate for a multitasking dream (or nightmare, depending on your perspective). Microsoft noted at the time that the number of apps running in such a scenario is limited primarily by the PC graphics card’s ability to support them. I couldn’t test the actual desktop environment at that event, but it certainly looked promising.

Facebook Announces Oculus Dash
At this week’s Oculus Connect conference Facebook offered its market response to Microsoft, announcing a permanent price cut to its existing Oculus Rift product ($399), a new standalone VR product called Oculus Go ($199), and additional details about its future Rift-caliber wireless headset code-named Santa Cruz. Just as important, though, was the company’s announcements about updates to its platform (Oculus Core 2) and its primary interface mechanism (Dash) that includes a desktop environment. With these announcements, Facebook rather effectively addressed Microsoft’s perceived advantage by introducing a VR environment that appears, at least from the on-stage demos, to bring many of the same interactive features as Microsoft’s to the Oculus Rift. I wasn’t at the Facebook event and haven’t tested its desktop environment yet, either, but it also looked promising. Whether the company will be able to drive the same level of desktop performance as Microsoft, which obviously has the advantage of controlling the underlying OS, remains to be seen.

The 2D VR Conundrum
One issue that both Microsoft and Facebook face as they push forward with their desktop environment plans is the simple-to-note but hard-to-address fact that pretty much 100% of today’s productivity apps are two dimensional. The result is that when you drop into these fancy virtual reality desktops, you’re still going to be looking at a two-dimensional windowed application. And you’re going to enter and manipulate data and objects using your real-world keyboard and mouse. What we’re facing here is the mother of all chicken-and-egg problems: Today there are very few virtual-reality productivity apps because nobody is working in VR, but because nobody is working in VR, few app developers will focus on creating such apps.

One of the primary reasons I’ve been bullish on the long-term prospects of virtual reality (and augmented reality) is that I envision a future where these technologies enable new ways of working. Up until now, humans have largely adapted to the digital tools on offer, from learning to use a qwerty keyboard and mouse to tapping on a smartphone screen filled with icons. VR and AR offer the industry the opportunity to rethink this, to define new interface modes, to create an environment where we do a better job of adapting the tool to the human, acknowledging that one size doesn’t fit all.
Facebook and Microsoft’s continued reliance on the desktop metaphor at this early stage is both completely understandable and a little frustrating. These are the first stops on what will be a long journey. Ultimately, it will be up to us as end users to help guide the platform owners and app developers toward the future we desire. I expect it to be a very interesting ride.

News You might have missed: Week of October 13, 2017

Google Home Mini loses Touch Feature

Not even a week after the debut of the Google Home Mini, Google was made aware of an issue with the pre-production units. The touch functionality on the top of the Mini, which allows users to put the Mini into listening mode, was behaving incorrectly. Basically, the Mini was detecting a touch when nobody was actually touching it, causing it to listen in without the user knowing. Google first released a software update to rectify the issue and later disabled the feature altogether.

Via Google Support 

  • This was the worst possible timing for Google, given the launch of the new products including the higher-end Google Home Max
  • Consumers are already concerned about having microphones and cameras in their homes and while they might not think this was malicious – there is no reason to think it was – it might still raise concerns
  • Google made the right call in removing the top touch functionality. Given that the fix to the initial issue was software-based, it might have still left users uncertain
  • The removal of the functionality will not impact the device experience, as most users would have used voice anyway to wake the Mini. So why put it there in the first place? Given this is a smaller device compared to the original Google Home, it is plausible to see it more often on a desk or bedside table where it could be easily tapped. More cost-sensitive buyers might also be slightly more reluctant to talk to technology, and initiating the exchange with a touch might be more familiar.
  • Overall I do not expect this issue to impact the adoption of the Mini.

Another Record-Breaking Quarter for Samsung

Samsung is expecting record operating profits of about 14.5 trillion Korean won ($12.8 billion) for the September quarter, seeing a massive jump of nearly 179 percent from the same period a year earlier. The South Korean tech giant also said it expected consolidated sales to be about 62 trillion Korean won in the third quarter, a touch lower than the 62.1 trillion won market forecast.

Via CNBC

 

  • Like in previous quarters it is the semiconductor business that is driving these strong profits.
  • Strong global demand for DRAM chips will continue to outpace supply in 2018, while demand for NAND flash chips has exceeded supply for six straight quarters as of September 2017
  • In the display business, financial analysts are expecting profits to be negatively impacted in the third quarter by decreasing LCD panel prices as well as one-off costs. However, the display business could improve in the fourth quarter on the back of sales of OLED panels for the new iPhone X
  • On the devices side, results should be positively impacted by the successful Note 8 launch. ASP will be more visibly impacted than sales, however. Samsung announced this week that the new Gear Sport smartwatch and the IconX earbuds will start shipping October 27, with pre-orders kicking off today, Oct 13.
  • Over time, the semiconductor business should also start benefitting from new business coming from automotive, where the Harman brand will certainly help open doors.
  • Next week Samsung will hold their developer conference in San Francisco where I am particularly looking for news around Bixby, Viv and SmartThings.
  • Late last night, Samsung CEO Kwon Oh-hyun announced his resignation as head of the component business. He will step down in March 2018. He cited an unprecedented crisis that calls for new leadership. While no more details were offered, it is quite obvious that Kwon Oh-hyun was referring to the corruption charges that led to a jail sentence for Lee Jae-yong. Considering the current performance of the division, a change in leadership now will hopefully not cause too much of a distraction.

RIP Windows Phone

Over the weekend, Windows Phone lead Joe Belfiore tweeted: “We have tried VERY HARD to incent app devs. Paid money. wrote apps 4 them.. but volume of users is too low for most companies to invest.” He also confirmed that Microsoft will of course continue to support the platform with bug fixes, security updates, and the like, but there will be no new hardware.

Via Ars Technica

  • This was a long time coming. You have heard me many times over the past two years advocating that Microsoft should just pull the plug on poor Windows Phone and move on. I know there are many fans out there, and some were still hoping for a Surface phone to come to the rescue, but alas, hardware was not the answer.
  • As Joe Belfiore explains, Windows Phone suffered from the fact that the app ecosystem never took off, making the platform less appealing to higher-end users.
  • Microsoft’s change in strategy from owning the smartphones to reaching out to users on other platforms via apps and services is paying off. Just look at how many people use Office on an iPad.
  • The biggest limitation I see from Microsoft not owning a smartphone platform is for Cortana. While Cortana is able to get onto iPhones and Android phones, it is hard for the assistant to really shine when it is not the default. Because of this, I wish Microsoft had not given up on the wearables market, as that could have mitigated the lack of phones.

Big Questions for VR Still Remain

Yesterday, Facebook announced a VR headset called Oculus Go that will start at $199. This device will certainly be less than a full-featured version of the Oculus Rift, and it seems much closer to a Gear VR experience than to an HTC Vive or a Rift tethered to a PC, which provide much better graphics. However, at this price, the VR category will be tested to see if it can truly go mainstream.

There are still so many variables and big questions lingering for VR. Samsung has had minimal success with the Gear VR, with somewhere between 5-7 million units sold in total according to most estimates I’ve seen. That is a $99 product that requires a >$700 smartphone. This may not be the right comparison, but we certainly know that at the current price of $429, the Oculus Rift has sold fewer than 300,000 units. While Samsung’s Gear VR requires a high-end Galaxy smartphone, Samsung has a much stronger brand than Facebook/Oculus, which is a key driver of its volume.

The challenge with virtual reality is that it is still novel. Even among pure gamers, we do not see a swell of sales, and I’m not sure if this has more to do with price or with the content library available. According to recent survey data from eMarketer, less than 10% of PlayStation 4 console owners say they have a PS VR headset. So even within console gaming, VR has yet to penetrate the broader customer base.

We can look at this data and argue that, perhaps, dedicated VR experiences are only and always going to appeal to a small market. There is certainly a good argument to be made, but this is where the merging of virtual and augmented reality into the same device comes into play. Mixed reality, as Microsoft likes to call it, may very well be the sweet spot. There are hardware challenges that need to be worked out, but I have a hunch the market will be more receptive to headsets that offer full VR and full AR experiences in the same hardware.

This is why I find Facebook’s relentless focus on VR somewhat interesting versus most of the others in the market, who are focusing more on the mixed reality experience. This is truly a question of which of these the market will adopt first and which is the right strategy to lead with. Facebook certainly has augmented reality experiences it can integrate into the Oculus headsets when it wants, but it is choosing to focus heavily on the pure VR experience first, for better or worse.

Consumer Adoption
There is also a question of consumer adoption to deal with regarding VR. This applies to AR as well, since both AR and VR are relatively new experiences. This is why a key part of this analysis is to understand how consumers will start to get exposure to VR and understand its value more broadly than just a few use cases. This could be where the entertainment industry comes into play, and one company in particular could play an interesting role in making VR go mainstream. That company is Disney.

Yesterday, Disney announced a new VR experience coming to Disney parks. This is not just any VR experience either; it is one centered around Star Wars. The technology enabling it is a full-featured head-mounted display that, instead of being tethered to a PC, is powered by a small backpack. At the moment, Disney plans to have these in certain locations and will charge separately for the experience. You can see a day when Disney will eventually add more of these experiences inside its parks or as part of new rides and attractions.

In this context, it is interesting to see how companies like Disney could help mainstream innovations in technology and give consumers a quality first experience with an innovation that could help make adoption go faster.

Still a Ways Off
I still believe we are quite a ways off from VR going mainstream. The technology still has a long way to go. Even though the industry gets better every year at packing in sophisticated computing power, sensors, display technology, and so on, we are still a long way from where we need to be. Hopefully, what is happening in the market is a simultaneous development of market demand as consumers experience VR at places like Disneyland, theaters, and friends’ and family members’ houses, and when the technology is ready, we can then see if the market responds.

Samsung, Microsoft and their partner ecosystem, Amazon, Facebook, Snapchat, Google, and Apple all have their sights on this market in one way or another. And while the industry sometimes thinks new entrants can break into this market, the reality is that one, or a few, of the above will be the ones who capitalize on VR/AR/mixed reality when the market is ready.

Google and the Disintermediation of Search

This week, the growing amounts Google pays phone makers and other companies to carry its search engine have been in the news as financial analysts have expressed concern over margin pressure. The growth in those traffic acquisition costs is certainly worth watching, but I’d argue that by far the larger strategic threat to Google comes from the growing disintermediation of search, something that’s also been in the news this week.

Google’s Growing Traffic Acquisition Costs

There’s no doubt that Google’s traffic acquisition costs have been growing, not only in absolute terms but as a percentage of revenue. By far the biggest driver of that increase is the increasing cut Google has to pay to Apple, Samsung, and others who give the Google search engine prime placement in their browsers. The chart below shows the percentage of revenue from Google’s own sites which it has paid out in TAC to these partners:

As you can see, that number has risen in phases, notably from 2011-2013, and again starting in 2015 and continuing through the first half of this year. Overall, the percentage has doubled from 6% to 12% during this eight-year period, and the trajectory continues to be dramatically up and to the right. That reflects the fact that an increasing proportion of Google’s search traffic and revenue now comes through smartphones, especially the iPhone, which likely accounts for a big chunk of its overall TAC payouts.
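To make the ratio in the chart concrete, here is a minimal sketch of the arithmetic behind it. The dollar figures below are made up purely for illustration; only the 6% and 12% endpoints come from the article.

```python
# Compute traffic acquisition cost (TAC) as a share of Google-sites revenue.
# Dollar amounts are hypothetical; the 6% -> 12% doubling matches the chart.

def tac_share(sites_revenue: float, tac_paid: float) -> float:
    """Return TAC paid to partners as a percentage of revenue."""
    return 100.0 * tac_paid / sites_revenue

# Start of the period: e.g. $6B of TAC on $100B of Google-sites revenue.
early = tac_share(100.0, 6.0)
# End of the period: the same revenue base, but twice the TAC ratio.
late = tac_share(100.0, 12.0)

print(f"TAC share rose from {early:.0f}% to {late:.0f}% of revenue")
```

The point of the sketch is simply that the margin squeeze comes from the ratio, not the absolute dollars: even if revenue grows, a doubling of this percentage halves the share of each search dollar Google keeps from partner-placed traffic.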

Disintermediation May Be the Bigger Issue

However, all of this only affects the search revenue Google actually generates and the margins it can drive off the back of that. Certainly, if TAC continues to rise in this way, it should squeeze margins, but the threat of disintermediation could undermine the revenue base on which those margins are generated in the first place. What do I mean by disintermediation here? The fact that many queries that would once have been Google searches are now pre-empted by other apps and services before the user ever reaches Google. Here are just a few longer-term examples:

  • Apps: whereas users once used Google as a starting point to reach a variety of websites, they’re far more likely today to visit smartphone apps associated with those sites. To the extent that there’s any searching going on, it likely takes place within the narrower confines of those apps or perhaps an on-device search engine.
  • E-commerce: for online retail specifically, past studies have shown that some 55% of product searches now originate not on Google.com but on Amazon.com, again cutting Google and its search and ad revenues out of the picture (and in the process allowing Amazon to quietly build its own search advertising business).
  • Voice: people are increasingly using voice interfaces to search for information they once used a text search for, both on mobile devices and increasingly on smart voice speakers like Amazon’s Echo and Google’s own Home products. In many of these cases, even on Google’s platform, there’s currently no ad revenue opportunity associated with that.
  • Bots: Facebook and Microsoft have now both announced integration of AI-based virtual assistants into their messaging platforms, with Microsoft finally launching Cortana in Skype this week after trailing it at last year’s Build conference. These bots will increasingly pre-empt searches because they give users the information they need when they need it in proactive ways.
  • Contextual information: even if AI-powered bots aren’t serving up this information in a messaging context, there are a variety of other ways in which information previously provided reactively is now being provided to users proactively. Snapchat’s addition of Context Cards this week is the latest example of this, offering up restaurant reviews and ride sharing services in the context of Snaps with location tags.

Google clearly recognizes all of this, which is why it’s been one of the biggest proponents of progressive web apps and other approaches that try to reassert the pre-eminence of the web, though it hasn’t had much success with that approach against the continued growth of native apps. But it’s also clearly aware that it may as well play secondary roles where it can, which explains its recent reappearance as the back-end of Apple’s Siri search functions in iOS and macOS. That deal likely resulted from a bigger financial incentive, which in turn will drive traffic acquisition costs up further. But such concessions are going to be increasingly necessary if Google is to maintain its search and ad revenue growth in the face of these multi-faceted threats.

The DUPING of America

By now I am sure you are all aware of the incredible story of Google, Facebook, and Twitter taking ads from Russian trolls intent on influencing America’s last election. In essence, Russia has declared cyberwar against America, and our social overlords at these three companies were asleep at the switch while this was done right in front of their faces. Even worse, it now looks like some Facebook ad salespeople even guided some of these trolls on how best to reach their intended targets with these “fake” ads.

Every day we wake to news about the lengths Russia has gone to in trying to create a new type of civil war, not between the North and South but between the left and right. The Russians have figured out that most Americans are easily duped along partisan lines and that, during the last election, if a story gave credence to their way of thinking, people embraced it as truth without even questioning its authenticity.

I admit that when it comes to reading any story, my background in debate makes me question what I read and try to check its authenticity. I advanced pretty far in state debate tournaments and was actually good at arguing both sides of an issue regardless of whether I believed its premise. But if you have ever debated any subject, especially at the high school or college level as part of your education, you know that facts are extremely important, and lies and conjecture could easily cause you to lose the argument in front of the debate judges.

But I am finding that not many people who read social media are willing to take the time to check a story for its authenticity; instead, they are much more likely to take what they see as “gospel,” especially if it underscores their opinions and beliefs. This is something Russia knows all too well, and it has its “cyberwar” machine in high gear. Its goal is to divide us as much as possible and in the process sow seeds of destruction, not only in our way of life, which they actually covet, but in our government, which in their wildest dreams they would like to overthrow.

What worries me is that I don’t think most people in the US understand the gravity of this situation. America’s forefathers fought for our right to determine our leaders and the freedom to vote our conscience. An outside force like Russia hijacking the very heart of the US Constitution should infuriate people. In my case, I want to be the last person ever duped by a foreign government whose intentions are out in the open, and yet many embrace these “fake” stories.

Of course, outside sources trying to influence what our people think and how they vote is not new. The British tried desperately to reach the settlers and colonists in the “New World” to try to keep them in the British Kingdom. Some of the pamphlets they distributed were as much fake news then as recent fake news is today. I have also read how the Nazis tried to get their message to the US in the early 1930s, and lest we forget, some British royalty and even some US leaders were Hitler sympathizers. Certainly “fake” news was deployed here, too, to cover Hitler’s real vision of wiping out all but an Aryan race.

But there is one very big difference in how the fake news of the past was spread to influence how people thought. In 1772-1776 it was done through pamphlets and very crude news gazettes. Before WWII it was done through newspapers and radio. In both of these instances, the messages were very broad and not personalized. This time around, fake news is distributed by a social network medium that is highly personal and can include very targeted ads or fake stories. As we see from the current reports by very credible sources, a foreign entity, Russia, has taken direct aim at destroying America’s values and way of life, pitting brother against brother to sway the results in its favor.

Whether Facebook, Google or Twitter like it or not, they now have a major responsibility to be very clear about what “fake ads” were bought by which foreign entities and contrast them to the facts that are known so that the American Public can see how they have been duped. Just as importantly, any social network that helped spread fake stories needs to own up to this in a big way and start putting key measures in place to screen their ads in the future and make sure this does not happen again.

Of course, this gets right to the heart of each of these companies’ business models, which are almost exclusively based on ad revenue. A recent Guardian story laid out Mark Zuckerberg and Facebook’s predicament given their business model and why they are under such serious scrutiny by US government officials now. Taking fake ads may be profitable, but it is not very responsible, and that has to be corrected. For Zuckerberg and Facebook, along with Google and Twitter, ads are at the heart of these distribution mediums, and they need to get serious about educating people about what was said in these fake ads and identifying in detail who bought them. Then they must put in safeguards to make sure this does not happen in the future.

“Fortunately, the top Democrat on the House Intelligence Committee, Adam Schiff, D-Burbank, has called for the ads to be released publicly. Schiff has been working with Facebook to find a way to make that happen. ‘The American people deserve to see the ways that the Russian intelligence services manipulated and took advantage of online platforms to stoke and amplify social and political tensions,’ Schiff said.”

These ads need to be released soon, along with very clear descriptions of who was behind them and, where possible, ties between their motives and the ads’ messages. This needs to happen well before our next election in 2018 so that it cannot happen again.

Our Concerns About Smart Tech Might Reflect Our Lack of Trust in Humanity

Back in January, toy maker Mattel announced Aristotle, a smart hub aimed at children. Aristotle was designed to grow with your kids, starting out by soothing a crying baby with lights and music and eventually helping with homework once the kids are in school.

Last week, Mattel canceled the product, saying it did not “fully align with Mattel’s new technology strategy.” Despite the company statement, it seems that strong concerns around data privacy and child development led to the cancellation.

While a product directly marketed at kids might raise more concerns, plenty of products hitting the smart home market will be used by children and should be no less of a concern.

Focus on Kids Is Growing in the Quest to Win Our Homes

Over the past few weeks, both Amazon and Google refined the kids offerings on Amazon Echo and Google Home by adding specific apps and features. Amazon released a series of apps that will be labelled as children’s apps, from Spongebob Challenge to Storytime. In a similar vein, Google announced some new kid-friendly features bringing story time and gaming together with Disney names like Lightning McQueen and Star Wars. Google is also working with Warner Brothers and Sports Illustrated Kids to add content regularly. The new kids vertical will soon be open to developers, and these features will be arriving on Google Home later in October. Parents will also be able to have family-linked account settings on Google Home so that different permissions can be set through the Family Link service. Lastly, a new feature for families called “Broadcast” allows you to push voice messages and reminders across all of your Google Home devices, though I doubt Broadcast will make my kid pay more attention than when I shout “time for school” at the top of my voice! Google also improved Google Home’s ability to understand kids younger than 13.

All of these features collect data at some level or another, which might expose brands to risks, risks Mattel did not think were worth running. Doing things by the book, Amazon asks parents to give permission, as required by the Children’s Online Privacy Protection Act, but only time will tell if that is enough. This is, after all, uncharted territory.

For technology shared in the home, things get complicated quite quickly. There are two different areas we should consider: content access and data privacy. While there is a lot of attention on the latter, I am not sure we have started to think about the former as much as we should. These devices are full-fledged computers disguised as remote controls, which might lead most users to underestimate the power they have. With computers, and to a lesser extent with phones, we set up restrictions for what our kids can do. With TVs, however, we mostly tend to rely on a mix of program guidance and common sense to regulate the type of content our kids are exposed to.

These new smart devices are nowhere near self-regulation. Once the technology is capable of recognizing voices, we might be able to grant certain permissions so that our children will not be played the wrong version of a song or read an R-rated definition of a word. Nothing is perfect though. Consider your TV experience: you might be ok with the movie you allow your children to watch despite its PG-13 rating, but the commercials are often not appropriate.

When it comes to these smart speakers, everything from contacts to search is wide open to users. Google started supporting two voices under its Voice Match technology so that when I access calendar appointments or contacts, it makes sure they are mine and not my husband’s. As the number of supported voices grows, you can see how families could use the technology not just to prevent kids from calling people and messing with your calendar but also to block access to certain content.
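The permission model described above can be sketched in a few lines. This is a hypothetical illustration in the spirit of voice-matched profiles, not any vendor’s actual API; the profile names and action strings are invented for the example.

```python
# Hypothetical voice-based permissions for a smart speaker: each recognized
# voice maps to a set of allowed actions; unrecognized voices fall back to a
# restricted guest profile.

GUEST_ACTIONS = {"play_music_clean"}

PROFILES = {
    "parent": {"play_music", "read_calendar", "call_contacts", "shop"},
    "child": {"play_music_clean", "story_time"},
}

def allowed(voice_id: str, action: str) -> bool:
    """Check whether the recognized speaker may perform the given action."""
    return action in PROFILES.get(voice_id, GUEST_ACTIONS)

print(allowed("parent", "read_calendar"))  # True
print(allowed("child", "call_contacts"))   # False
```

The design choice worth noting is the fallback: anyone the device cannot identify, including a visiting child, gets the most restrictive profile by default rather than the most permissive one.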

Technology is not Evil but Humans can be

Protecting privacy, especially that of the vulnerable, is a very important topic. Yet, what I find even more fascinating are the concerns some expressed about the impact Aristotle could have had on children’s development. The worry was that kids could learn to turn to technology for comfort rather than to their parents or caretakers. But how can that happen if we, as parents and caretakers, continue to do our job?

Technology impacts behavior. Nobody could successfully dispute this statement.
We already know Gen Z is growing up more slowly than previous generations. As Jean M. Twenge says in her book iGen, “today’s kids are growing up less rebellious, more tolerant, less happy and completely unprepared for adulthood.” While technology is the enabler, it is humans who empower it. This addiction to screens starts very early, out of convenience. I often tell the story that our daughter’s first words were dada, “ipone,” and mama. Yes, ladies and gentlemen, I came after the magical device Steve Jobs brought to market! How did that happen? Because we discovered it was the most effective tool to keep our wiggly baby still during nappy changes. Convenience drives a lot of what we do. Before phones, it was TV, of course. Parents discovered it was much easier to put kids in front of a screen to be entertained than to actually engage with them. Yet I am sure that even the busiest of parents would not just let a smart hub soothe their crying baby.

Why are we so concerned about the impact on child development? How is this new tech any different from the effect pacifiers could have had? While we might not see a pacifier as tech, it is a device, one invented to substitute for a mother’s breast to soothe. Have mothers stopped breastfeeding or caring for their babies because of it? Certainly not!

Smart tech is helping prevent crib and hot-car deaths, but as far as I know there is no tech that can either supplement or substitute for common sense. It seems to me that people’s concerns are rooted more in a lack of trust in us humans and how we will use the technology than in a lack of trust in what the technology can deliver. Interestingly, what will help is not better tech skills but better social skills, greater empathy, and higher emotional IQ. So, as we balance the impact of tech on child development on one side and a greater focus on STEM on the other, let’s not forget to empower our kids with the emotional and social skills that will help them be tech wizards with a heart.


My Month with Apple Watch Series 3

I’ve spent the last month with the Apple Watch Series 3, and during that time a few key observations have stood out. One of the big value propositions of the Series 3 is cellular connectivity, and that is the part I was most interested in trying, to see how not having the Apple Watch tethered to my iPhone changed the overall experience.

I tried just about every use case I could think of, from the obvious ones of sports and fitness to running errands and going out to dinner. I even spent an entire Saturday with my iPhone turned off, relying only on Apple Watch as my source of connectivity to the Internet. I love the promise, but many of the software experiences are not yet evolved enough to support long periods of time on Apple Watch alone.

Now, we can debate whether that should even be the goal, but to be honest, I think it should be, whether or not anyone believes someone will spend half a day or longer on just the Apple Watch without their iPhone. I completely understand the current perspective that Apple Watch is not a replacement for your iPhone, and it is not; however, there are times when I want certain experiences to stand on their own and not require the iPhone. Email is a good example. I still wanted to check and read email, perhaps dictate short replies with Siri, or at least be kept up to date on my inbox. But Apple’s email app continually tells me there are things in the email that can’t be read on Apple Watch and that I can read the full email on my iPhone. When you don’t have your iPhone with you, this is a frustrating message.

This was one area where the experience was inconsistent, as some emails could be read and responded to on Apple Watch and some could not. My point here is that being iPhone-less created an expectation that apps, and certain app experiences, should work independently of the iPhone. This may simply be an area where app developers have not yet caught up.

The app developer challenge with Apple Watch is probably the clearest problem I see facing the platform. A few weeks ago, I wrote about how connectivity on the Apple Watch would change the product from accessory to platform, but developers have not yet caught on to this shift. Part of the problem is the small installed base and the lack of third-party app demand from consumers. However, as more consumers go out into the world without their iPhones, they will start to want apps that stand on their own and provide more full-featured experiences.

This was the one thing that surprised me when I left the house without my iPhone. I found myself wanting to use many of the apps I regularly use on my iPhone, even if just for a brief check-in or to make sure I didn’t miss anything: Twitter, Slack, Apple’s Notes app, Overcast (my podcast app), and a few others. Most of these apps still assumed a tether to the iPhone as part of their experience and didn’t deliver the standalone experience I was hoping for. Given the iPhone tether on all previous versions of Apple Watch, these developers truncated their app experience in some capacity, and that became abundantly clear once I started leaving my iPhone behind and going out into the world.

Apple’s Messages app, Maps app, Stocks, Calendar, etc., all delivered the full-featured, stand-alone experience I expected on the Apple Watch, but a paradigm shift in software is necessary for this platform to live up to its potential. My desire to use more third-party apps, as I do on my iPhone, while on Apple Watch alone was one key observation that stood out to me. If that holds, then as more consumers move to an Apple Watch with cellular connectivity, developers are going to need to catch up and embrace the Apple Watch as a true platform that now stands independent of the iPhone.

Lastly, Siri is a standout feature of Apple Watch now, not just because it has a voice and can speak back to me but because Siri is faster and works much better than in previous versions. While out in the world with only the Apple Watch, I relied almost exclusively on Siri for text input, largely dictation as a primary input. Apps that support Siri were helpful here, letting me use Siri as an interface to the app. But here again some experiences fell short. When I’d ask Siri to see if I had any new emails, she would say she could help me on my iPhone, and a button would appear that says “open on iPhone.”

There is a great deal of potential, from what I experienced, in the Apple Watch Series 3 and the possibility of less dependence on the iPhone. But the reality is that once you leave the house without your iPhone, your expectations around using the Apple Watch change. It is here that, unfortunately, Apple and third-party developers have more work to do to meet the needs of customers.

Edge Computing Could Weaken the Cloud

Ask anyone on the business side of the tech industry about the most important development they’ve witnessed over the last decade or so and they’ll invariably say the cloud. After all, it’s the continuously connected, intelligently managed, and nearly unlimited computing capabilities of the cloud that have enabled everything from consumer services like Netflix, to business applications like Salesforce, to social media platforms like Facebook, to online commerce giants like Amazon, to radically transform our business and personal lives. Plus, more than just the centralized storage and computing capabilities for which it’s best known, cloud computing models have also led to radical changes in how software applications are designed, built, managed, monetized and delivered. In short, the cloud has changed nearly everything in tech.

In that light, suggesting that something as powerful and omnipresent as the cloud could start to weaken may border on the naïve. And yet, there are growing signs, perhaps some “fog” on the cloud horizon, which suggest that’s exactly what’s starting to happen. To be clear, cloud computing, and all the advancements it’s driven in products, services and processes, isn’t going away, but I do believe we’re starting to see a shift in some areas away from the cloud and towards the concept of edge computing.

In edge computing, certain tasks are done closer to the edge or end of the network on client devices, gateways, connected sensors, and other IoT (Internet of Things) gadgets, rather than on the large servers and other infrastructure elements that make up the cloud. From autonomous cars, to connected machines, to new devices like the Intel Movidius VPU (visual processing unit)-powered Google Clips smart camera, we’re seeing an enormous range of new edge computing clients start to hit the market.

While many of these devices are very different in terms of their capabilities, function and purpose, there are several characteristics that unite them. First, most of these devices are designed to take in, analyze, and react to real-time data from the environment around them. Leveraging a range of connected sensors, these edge devices ingest everything from location and temperature data to sound and images (and much more), and then compute an appropriate response, whether that be to slow a car down, provide a preventative maintenance warning, or take a picture when everyone in view is smiling.

The second shared characteristic involves the manner with which this real-time data is analyzed. While many of the edge computing devices have traditional computing components, such as CPUs or ARM-based microcontrollers, they all also have new and different types of processing components—from GPUs, to FPGAs (field programmable gate arrays), to DSPs (digital signal processors), to neural net accelerators, and beyond. In addition, many of these applications use machine learning or artificial intelligence algorithms to analyze the results. It turns out that this hybrid combination of traditional and “newer” types of computing is the most efficient mechanism for performing the new kinds of calculations these applications require.

The third unifying characteristic of edge computing devices gets to the heart of why these kinds of applications are being built independently of the cloud or migrated (either partially or completely) away from it. They all require the kind of real-time performance, limited latency, and/or security and privacy guarantees that come best from on-device computing. Even with the tremendous increases in broadband speed and reductions in latency that 5G should bring, the network is never going to match the immediate response an autonomous car needs when it “sees” and has to respond to an obstacle in front of it. Similarly, if we ever want our interactions with personal-assistant-powered devices (i.e., those using Alexa, Google Assistant, etc.) to move beyond one-question requests and into naturally flowing, multi-part conversations, some amount of intelligence and capability is going to have to be built into edge devices.

Beyond the technical requirements driving growth in edge computing, there are also some larger trends at work. With the tremendously fast growth of the cloud, the pendulum of computing had swung toward centralized resources, much as in the early era of mainframe-driven computing. With edge computing, we’re starting to see a new evolution of the client-server era that appeared after mainframes. As with that transition, the move to more distributed computing models doesn’t imply the end of centralized computing elements, but rather a broadening of possible applications. The truth is, edge computing is really about driving a hybrid computing model that combines aspects of the cloud with client-side computing to enable new kinds of applications that either aren’t well suited to or aren’t possible with a cloud-only approach.
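The hybrid placement decision described above can be sketched as a simple routing rule: latency-critical or privacy-sensitive work stays on the device, while heavy, non-urgent work goes to the cloud. The threshold and task fields here are illustrative assumptions, not any real system’s policy.

```python
# A toy model of edge-vs-cloud task placement in a hybrid computing model.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_ms: float       # how quickly a response is needed
    privacy_sensitive: bool  # must raw data stay on the device?

# Assumed minimum round-trip cost of going to the cloud (illustrative).
CLOUD_ROUND_TRIP_MS = 50.0

def place(task: Task) -> str:
    """Decide whether a task runs at the edge or in the cloud."""
    if task.privacy_sensitive or task.deadline_ms < CLOUD_ROUND_TRIP_MS:
        return "edge"
    return "cloud"

print(place(Task("brake-for-obstacle", deadline_ms=10, privacy_sensitive=False)))  # edge
print(place(Task("retrain-model", deadline_ms=60_000, privacy_sensitive=False)))   # cloud
```

The autonomous-car example from the text falls out directly: a 10 ms braking deadline can never be met across a 50 ms network round trip, so that computation must live at the edge regardless of how fast the cloud gets.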

Exactly what some of these new edge applications turn out to be remains to be seen, but it’s clear we’re at the dawn of an exciting new age for computing and tech in general. Importantly, it’s an era that’s going to drive the growth of new types of products and services, as well as shift the balance of power amongst tech industry leaders. For companies that can adapt to the new realities edge computing models will start to drive over the next several years, these will be exciting times. But for those that can’t, even if they seem nearly invincible today, the potential for becoming a footnote in history could end up being surprisingly real.

Deception on the Internet is Nothing New and it’s Getting Worse

We’re just digesting and analyzing the impact on the nation of being exposed to untruthful news stories. (Note: I’m following Dan Gillmor’s advice and not using “fake news” because that term has been hijacked by Donald Trump to refer to news he disagrees with.) And while this may be the most severe example of being misled by the Internet, it’s certainly not the only one. In fact, the Internet is filled with sites whose sole purpose is to trick and deceive us under the guise of offering useful information.

One pervasive example is when searching for ratings on various products. There’s a vast number of sites that purport to provide objective analyses and ratings of products. The sites are titled with names such as www.top10antivirussoftware.com but are often sites created to tout one product over another or to just provide a list of products with links to buy, in exchange for referral fees.

A search for “Best iPhone cables” finds one top choice (a paid-for position), “BestReviews.Guide,” a site that reviews numerous products. There’s no explanation of how they rate products, but in their disclaimer, they write, “BestReviews.Guide provides information for general information purposes and does not recommend particular products or services.”

But pseudo-reviews are not confined to mysterious companies. Business Insider offers reviews called “Insider Picks.” Many of these reviews are filled with words but do little to explain the basis for their ratings.

What’s motivating all of these review sites? The opportunity to monetize them by receiving kickbacks or referral fees when someone clicks to buy, primarily from Amazon. You can examine the link that takes you to Amazon to see the code added to the normal link. Commissions range up to 10%, with an average of about 5%.
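You can inspect such a link yourself: Amazon affiliate links typically carry an extra `tag` query parameter identifying the referrer. Here is a small sketch that pulls that parameter out of a URL; the URLs and tag value below are made up for illustration.

```python
# Spot the referral code appended to an ordinary Amazon product link.
from urllib.parse import urlparse, parse_qs

def affiliate_tag(url: str):
    """Return the affiliate 'tag' parameter in a URL, or None if absent."""
    params = parse_qs(urlparse(url).query)
    values = params.get("tag")
    return values[0] if values else None

plain = "https://www.amazon.com/dp/B00EXAMPLE"
referred = "https://www.amazon.com/dp/B00EXAMPLE?tag=examplesite-20"

print(affiliate_tag(plain))     # None
print(affiliate_tag(referred))  # examplesite-20
```

If a “review” site’s buy buttons all carry the same tag, that tag is how the site collects its commission on your purchase.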
And here’s another example of deception and trickery on the Web. I experienced a problem with QuickBooks on my Mac and looked for a phone number to get help. There was no phone number in the app, so I searched online. Up came an 800 number via Google’s search and a website titled “QuickBooks 800 Help Line.” I called it, got a seemingly helpful technician, and he readily identified the cause of my problem. He said he needed to install a QuickBooks utility on my computer to remove some bad files. As I started to allow this, I hesitated and asked if there was any charge. He said there is a $300 charge for the utility.

That’s when I checked with my daughter on a second phone line; coincidentally, she was an Intuit manager. After a quick call to the head of customer support, she confirmed that I was not speaking to Intuit but to an imposter. I quickly hung up and later discussed this with an executive at Intuit. Their policy, like that of many companies, had been to hide their customer service number because they were not equipped to handle the volume of calls. She said they never anticipated what I experienced: that, perhaps as a result, an imposter’s phone number pops up at the top of a search.

I was reminded of this the other day when I was doing a story on Google’s customer support, which is a major consideration when buying their new phones. Searching for a support number brought up many sites purporting to be Google support, but no Google number. One prominent site is “Gmailtech.info” with the headline “Unlimited Gmail support” and a phone number, and this paragraph:

“Phone Support-one can reach the Google Technical Support service by dialing their customer service number which is completely free of cost and our customer care is available 24/7*35 days. You just need to call on the Google Support Phone Number, and you will get all the solutions to your problems.”

Of course, it takes you to a GTech number. And notice the poor grammar.

So, these misleading support sites are still rampant, taking advantage of those looking for help and information.

This is probably not a revelation to those of us in the tech community who once laughed at the Nigerian scams, but, as with deceptive news stories, the players are getting ever more sophisticated at deception.

Context For Netflix’s Price Increase

The price increases Netflix announced this week come around a year after it finished implementing its last set of price increases, a process it spread over several years. Those last price increases occurred at a time when Netflix’s margins were already expanding despite its growing content spending, but this time around, the increase follows pressure on margins from ongoing growth in content spending.

Netflix’s Highly Predictable US Streaming Margins

Netflix is one of the few companies I know of that has set specific long-term margin targets and then made consistent progress toward attaining them. It set a 40% margin target for its US streaming business some time ago, with the intention of hitting that mark in 2020, and for a long time its progress toward that goal was almost linear. Recently, there’s been a little more volatility in that number, but it’s essentially still on track and at times even ahead of target:

The volatility in the last couple of quarters was due to the timing of new series launches, which were delayed a little this year from the first to the second quarter and therefore moved some costs around a little. But you’ll note from the chart above that the last several quarters have all been within spitting distance of 40%, while Q1 actually exceeded it.

The last set of price increases certainly contributed to that margin progress, hitting in 2014 and 2015 for new customers but in 2016 for existing customers and thereby pushing average revenue per US paid streaming subscriber per month up from $8 to $10, even as cost of revenue per subscriber was falling from $5.30 to $5.00 or so per month. However, since then, cost of revenue per subscriber has actually begun rising, albeit not dramatically, going up around 23 cents per month on an annualized basis over the past year. That, in turn, threatens to slow the progress towards the 40% margin goal and even reverse it, hence the price increases just announced.
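The arithmetic here is worth making explicit. As a rough back-of-the-envelope sketch, treating cost of revenue as the only cost (Netflix’s 40% target is a contribution margin, which also nets out other costs such as marketing, so these figures illustrate the direction of the move rather than the reported margin):

```python
def gross_margin(revenue, cost):
    # Per-subscriber gross margin as a fraction of revenue.
    return (revenue - cost) / revenue

# Figures from the article: ARPU rose from $8 to $10/month while
# cost of revenue per subscriber fell from ~$5.30 to ~$5.00.
before = gross_margin(8.00, 5.30)
after = gross_margin(10.00, 5.00)
print(f"{before:.1%} -> {after:.1%}")  # 33.8% -> 50.0%

# With costs now rising ~$0.23/month per year and prices flat,
# the per-subscriber margin slips back toward where it was,
# which is the pressure the new price increases offset.
eroded = gross_margin(10.00, 5.23)
print(f"{eroded:.1%}")  # 47.7%
```

The point of the sketch: rising revenue and falling costs compounded in Netflix’s favor last time, while rising costs alone quietly reverse that, hence the need for another price move.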

Lessons Learned From the Last Increase

As I mentioned above, Netflix chose to stagger the introduction of the last set of price increases, “grandfathering” existing subscribers at the earlier price until prices went up by $2 in 2016, while two single-dollar price increases hit the price for new subscribers in 2014 and 2015. The thinking here was presumably to spread the impact of higher churn from those unhappy about the price increases over a larger number of quarters rather than taking the impact all at once.

In practice, though, the customers that churned did so starting when the price increases for existing customers were announced, earlier in 2016, rather than when most of the increases actually hit later in 2016, as the chart below shows (2016’s numbers are shown in light blue):

As such, Netflix seems to have learned its lesson this time around and has decided to implement the price increases all at once, with the price rising for new customers immediately and for existing customers starting in November. That means it’ll take the whole hit in one quarter – Q4 2017 – rather than over several quarters. Given that Q4 is normally the company’s second strongest for subscriber growth after Q1, that’s probably smart timing, as it’ll likely still manage positive growth overall even with several hundred thousand fewer net adds.

Increasing 4K Impact

One interesting wrinkle in the new price increases is that they affect each of the three tiers of service Netflix offers on the streaming side differently. There’s no price increase for the basic, standard definition, single-stream service; there’s a one-dollar price increase for the most popular HD, 2-stream service; and a two-dollar bump for the Ultra HD, 4-stream service. Given that revenue per US streaming user has historically tracked very closely with the middle tier’s pricing, it’s very likely that this is by far the most popular service, and that the numbers on the UHD and SD plans are small and largely cancel each other out.

However, as more and more people buy not only 4K TVs but also streaming boxes from the major players, all of which have recently been updated with better 4K support, that could start to change. We could therefore see higher uptake of the UHD service at $14 start to push average revenue per user above the $11 we’d see if the past pattern continued. That will make the choice to raise that price by two dollars instead of just one perhaps the most consequential change of all, and one which will likely accelerate the progress towards 40% margins and beyond.