The End of Standardized Platforms

Historians of the technology industry have observed a predictable pattern with new computing platforms. In the early days of a computing segment, such as mainframes, minicomputers, and desktops, there was a great deal of platform fragmentation. These early systems often ran proprietary software and operating systems with little interoperability. Eventually, a standard emerged: Windows. Even though Macs stuck around, their market share stayed well below 3% for much of the build-out of the PC era. Here is a visual showing how platforms started fragmented and then standardized around Windows.

[Chart: operating system share of computing devices sold annually, showing standardization around Windows]

What you are looking at is operating system share of devices sold annually. For most of the time period above, Microsoft was the standard for computing devices sold. Fast forward to today and a slightly different picture emerges. Windows runs on PCs while Android and iOS run on mobile devices, giving us three primary operating systems occupying the bulk of computing devices sold each year.

[Chart: operating system share of annual device sales across PCs, tablets, and smartphones today]

You can see that, as the pie went from several hundred million computing devices sold each year to almost 2 billion annually (adding up PCs, tablets, and smartphones), the pie has gotten much larger but the landscape has also changed. While Android has the largest chunk of the pie, it does not have the 97% share Microsoft once had. The size of the pie and the global diversity of the consumer market brought with them the opportunity for several computing platforms to exist simultaneously.

If we take a step back and look at the installed base, we see an even clearer picture of the diversity in computing platforms in use today as well as the size of the market.

[Chart: installed base of active computing devices by platform]

There are now well over three billion active computing devices in the world running five primary operating systems/computing platforms: Windows, OS X, iOS, Android, and AOSP (non-Google) Android in China. The key point here is, with the addition of scale in mobile and the inclusion of the global consumer market, there is no single standard computing platform. The question, then, is what will happen with things like virtual/augmented reality platforms or artificial intelligence platforms. Should we expect VR/AR or artificial intelligence to unify around one single platform, as happened in the enterprise PC days, or will many different platforms coexist, as we see today in consumer computing?

I tend to lean toward the latter. While VR/AR will start off segmented, with Oculus, Sony, Microsoft, Google, and eventually even Apple each having a platform, it may also stay segmented rather than consolidate.

The global consumer smartphone market has shown us it can sustain many platforms so perhaps whatever comes next will follow the same paradigm. As I’m observing with wearables, where the market is actually developing into a rich segmentation, perhaps VR/AR or artificial intelligence will do the same, adding new layers of computing platforms onto the existing ones rather than consolidating into a single one.

The Devices Formerly Known as Smartphones

The Barcelona-based Mobile World Congress trade show has served as the location for major smartphone announcements for a long time, so it’s no surprise to see that happening again this year.

Splashy introductions have been made by Samsung, LG, Lenovo and other usual suspects. But there is an important twist for 2016. It stems from the transformation of smartphone-sized devices that has been going on for several years now. In essence, the question boils down to this: when is a smartphone no longer (or not primarily) a smart “phone”?

For many younger people, arguably that’s been the case for quite some time. We know they essentially use their phones as mobile computing devices and rarely use the traditional smartphone features. In fact, in a survey of over 1,000 US consumers conducted last fall by TECHnalysis Research, voice calling represented only 5.8% of total smartphone usage time for the 18-24-year-old segment. Even among older consumers in the 45-54 age group, voice calling and texting together accounted for just over a quarter of a typical user’s smartphone time. The rest is spent on more computing-device-type activities, such as browsing the web, listening to music, gaming, reading email, and social media.

Alongside these consumer trends, we’ve seen tremendous changes in work habits. For example, in that same survey, over half of employed respondents said they used a personal phone for work tasks during a typical week, spending an average of 2.3 hours on those efforts. While a good portion of this is likely for email, there’s no question a large amount of time is spent doing work-related, computing-style tasks on our personal smartphones. Throw in the large number of employer-provided smartphones in active use where—theoretically, at least—most of the time spent is on work tasks, and the total hours of computing done on smartphones becomes enormous. Plus, this is just for the US, where PC penetration is quite high. In many developing regions, smartphones are essentially the only computing device many people own or have access to. As a result, smartphone-based computing on a global basis is now on a staggering scale.

Given this context, thinking of a smartphone as more of a traditional computing device than just a communications tool seems incredibly obvious. But for many traditional applications, there is that one thing — screen size.

Now, as someone who finds reading glasses an increasingly necessary accessory, I’ll admit I don’t have the razor-sharp eyes of my youth. It never ceases to amaze me how much today’s young people can do on the 5-5.5″ screens the smartphone industry has coalesced around. Still, there is a limit most people face when it comes to what they can achieve on these smaller screens, particularly when a fair amount of input is required.

That’s why I’m intrigued by HP’s new Elite X3. At first glance, the 6″, Qualcomm Snapdragon 820-powered device looks to be just another smartphone—a Windows 10 Mobile-based one, at that. But, in conjunction with some of the hardware accessories the company specifically developed to be used alongside it, along with the capabilities of Windows 10 Mobile’s Continuum features, the X3 can morph into a full-on, big-screen computing device.

Now, cynics will argue we’ve seen this before. Anyone remember the Motorola Atrix? Or how about Microsoft’s own Lumia 950 from last fall? Both notable but ultimately failed efforts to develop a smartphone form factor computer. The difference with the X3, however, is the focus and detailed vision. On the Atrix and Lumia 950, the computing features were add-ons to an existing smartphone. The X3 seems to be positioned and designed primarily as a computer, with the smartphone capabilities essentially built in.

[pullquote]On the Atrix and Lumia 950, the computing features were add-ons to an existing smartphone. The HP X3 seems to be positioned and designed primarily as a computer, with the smartphone capabilities essentially built in.[/pullquote]

Admittedly, that may sound like semantics and, of course, whether the final execution lives up to the promise remains to be seen. However, a quick glance at some of the details suggests HP has thought things through pretty well. First, the hardware accessories, particularly the clamshell-form-factor Mobile Extender with its 12.5″ HD screen, three USB Type-C ports, micro HDMI, and audio ports, add a whole new level of connectivity and input options to the phone-based computing experience. You connect the X3 to the Mobile Extender via one of the USB Type-C ports, where you get the added benefit of being able to power and recharge the X3 through the Mobile Extender’s built-in battery. HP will also enable wireless connections, though that may come after the product launches.

On the software side, because it’s Windows 10 Mobile-based, the full Microsoft Office suite is built in. As an ARM-based device, however, there is the potential for compatibility problems with existing Windows apps (other than newer Universal Windows 10 apps, which can run natively on Windows 10 Mobile ARM devices but are still very limited in number). To avoid a Windows RT-like incompatibility stigma, HP is working to provide a virtualization-based solution that will allow traditional x86-based apps to run on the X3, a huge boon for most potential users.

Even with all these efforts, it’s not clear to me a device like the X3 will become most people’s only, or even primary, computing device. Nevertheless, in a world where people are looking for more flexible computing options, and are accustomed to working across multiple devices, the X3 concept seems to be well timed.

Mobile World Congress also saw the debut of some smartphone-form-factor computing devices from Panasonic. The company’s new ToughPad FZ-F1 and FZ-N1 (Windows 10 IoT Mobile Enterprise and Android-based, respectively) are ruggedized, Qualcomm Snapdragon 801-equipped handheld computers with 4.7″ screens and integrated barcode scanners. At first glance, they look like ruggedized smartphones with a large protrusion (for the barcode reader) but, interestingly, the company will actually sell a WiFi-only version (which can do voice via VoIP) in addition to an LTE-equipped option. Though clearly not designed to be general-purpose computing devices like the HP X3, these Panasonic FZ devices exemplify how hardware companies are evolving smartphone form factors to meet unique mobile computing needs.

To be sure, the “traditional” smartphone will continue to be the dominant opportunity for these 5″-screen devices for some time. But as the category matures and the pace of dramatic new technology innovation for them slows, it’s clear we’re entering an era where smartphones, as we know them now, will likely cease to be.

Can You Put A Face to Big Data?

One of the most popular buzzwords in tech is Big Data. However, trying to get a straight answer as to what Big Data is can be difficult. In fact, as I looked into this more deeply, I found at least 20 different definitions of what people believe Big Data means.

The fundamental concept of Big Data is that all types of computing devices — computers, smartphones, cars, fitness trackers, bar code scanners and even your TV and other IoT devices — are creating data and, in most cases, sending that data to the cloud.

Once it is in the cloud, data is stored and collated. Using various analytical tools, companies or individuals can mine that data to get answers to all types of questions or learn important things through statistical analysis. Using Big Data, one could study people’s habits and look at global health information in search of patterns that could help create new drugs or treatments in the medical world. It is used to find the latest fields for drilling oil and, in one example that impacts all of us, it gives advertisers a glimpse into what people are thinking and what they want, in order to create better-targeted ads for their clients or customers.

On the surface, however, Big Data is all about numbers and number crunching. Looked at in these terms, Big Data seems cold, calculating, and highly impersonal. A friend of mine, Rick Smolan, who is considered one of the great photographers of our time, looked at this idea of Big Data about five years ago and wondered not only what it means but how it looks in terms of people creating and using it in real life. The result of this quest was a coffee table book entitled “The Human Face of Big Data”. Rick and a team of photographers, researchers, and bloggers went around the world to photograph people using technology and put a face to this idea of Big Data.

Ted Anthony of the Associated Press describes what this book is about:

“…an enormous volume… that chronicles, through a splash of photos and eye-opening essays and graphics, the rise of the information society…. a curious, wonderful beast — a solid slab that captures a virtual universe… This is one of those rare animals that captures its era in the most distinct of ways. It’s the kind of thing you’d put in a time capsule for your children today to show them, long after you’re gone, what the world was like at the beginning of their lives.”

When Rick sent me a copy of the book three years ago, it was a real eye opener for me and I suspect for anyone who reads it since it demystified the idea of Big Data and put a human face to it.

I recently found out Rick and his team were not content with covering this topic in book form alone. Last week, I was invited to the West coast premiere of a new movie on this topic directed by his brother Sandy Smolan and executive produced by Rick.

The movie is called “The Human Face of Big Data: The Promise and Perils of Growing a Planetary Nervous System”. The hour-long documentary premieres nationally on PBS on Wednesday, February 24, 2016, at 10:00 p.m. ET (check local listings).

Narrated by actor Joel McHale, the award-winning film features compelling human stories, captivating visuals and in-depth interviews with dozens of pioneering scientists, entrepreneurs, futurists and experts to illustrate powerful new data-driven tools, which have the potential to address some of humanity’s biggest challenges, including health, hunger, pollution, security and disaster response.

Some interesting tidbits from the pre-movie briefing I had as well as from the movie itself:

“The average person today processes more data in a single day than a person in the 1500s did in an entire lifetime” Mick Greenwood

“Big Data is truly revolutionary because it fundamentally changes mankind’s relationship with information.” Michael S. Malone

“We’ve reached a tipping point in history: today more data is being manufactured by machines (servers, cell phones, GPS-enabled cars) than by people” Esther Dyson

“From the dawn of civilization until 2003, humankind generated five exabytes of data. Now we produce five exabytes every two days. And the pace is accelerating.” Eric Schmidt

“As we begin to distribute trillions of connected sensors around the planet virtually every animate and inanimate object on earth will be generating and transmitting data, including our homes, cars, our natural and made environment and yes, even our bodies.” Anthony D Williams
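To put the figures in the Schmidt quote above in perspective, here is a quick back-of-the-envelope calculation, taking the quoted numbers at face value and assuming decimal exabytes:

```python
# Quick arithmetic on the figure quoted above: humanity now produces
# five exabytes of data every two days (per the Eric Schmidt quote).
# Assumes decimal units: 1 exabyte = 10**18 bytes.

EXABYTE = 10 ** 18  # bytes

data_per_two_days = 5 * EXABYTE          # bytes generated in two days
seconds_in_two_days = 2 * 24 * 3600      # 172,800 seconds

rate = data_per_two_days / seconds_in_two_days  # bytes per second

print(f"{rate / 1e12:.1f} TB per second")  # prints: 28.9 TB per second
# At that pace, matching all of pre-2003 humanity's output (5 exabytes)
# takes just two days.
```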

This documentary looks at how people all over the world are using technology and, in turn, creating, collecting, and communicating data that, in most cases, goes to the cloud and can be used for all types of purposes. The movie itself is very positive about the potential impact of Big Data on us, but it is also realistic and shows the dark side as well, since Big Data can be used by hackers and criminals against people.

I see this documentary as enlightening to anyone who watches it, since it succeeds in defining Big Data: what it is and, more importantly, how it can and will impact mankind in the future. It also makes the concept of Big Data more personal, as it puts a face on it and makes us realize that, while data itself in computing code is just numbers, it is people who create that data and are really at the heart of it.

Opening Pandora’s iPhone

According to Greek legend, Pandora, the first woman on Earth, was given a box that she was instructed never to open. Curiosity overcame her, however, and when she lifted the lid, all the evils of the world flew out. ((Men are always blaming women for all of their troubles. I think history has shown that men don’t need any assistance in creating trouble. We’re really, really good at creating trouble all on our own.)) ~ Endangered Phrases, Steven D. Price

On Tuesday, February 16, 2016, a judge at the United States District Court of California issued an order compelling Apple to assist the FBI in decrypting a phone used by one of the shooters involved in the San Bernardino shootings. Apple has balked at the request.

“The key is finding that backdoor that can be used appropriately by law enforcement with the appropriate judicial oversight. Search warrants and appropriate court involvement,” Stickrath said. ~ Steubenville rape trial also hindered by iPhone encryption, NBC4i.com

No. That is not the key. That is not the key at all.

There are no fourth amendment issues here. No one is objecting to the police searching the phone with proper judicial oversight.

Hypothetical

The San Bernardino shootings were bad enough, but let’s take this to its logical extreme. Suppose the FBI thought there was information in a suspect’s home that might help them PREVENT an imminent terrorist attack involving a tactical nuclear weapon.

Yikes! That’s about as bad as it gets, but plausible, no?

I’m absolutely convinced that the threat we face now, the idea of a terrorist in the middle of one of our cities with a nuclear weapon, is very real and that we have to use extraordinary measures to deal with it. ~ Dick Cheney ((The Military Quotation Book by James Charlton))

The FBI, having gone through all the proper procedures, goes to the suspect’s home to search for evidence. Only, there is a problem. The home is impenetrable, has only one door, and that door can only be opened with the homeowner’s password. And the homeowner is dead.

The FBI goes to the company that built the home and installed the door and asks them for their assistance in opening the door. Perfectly reasonable request. The homebuilder would have to be some kind of monster ((Or Apple?)) to turn down such a request.

One of the great mistakes is to judge policies and programs by their intentions rather than their results. ~ Milton Friedman

Here’s the thing. First, this homebuilder has installed the same type of lock on every one of the 1 billion (and counting) homes they have constructed. Just to put that in perspective, there are around 7 billion people on the planet.

Second, if the homebuilder creates a passkey for this home, the key would work on the doors of all the other 1 billion homes too.

And of course, we’re not really talking about 1 billion homes. If the FBI asks this homebuilder for a master key, they’re going to, soon enough, ask all the other homebuilders for their master keys too, right? Effectively, a master key to almost every home, almost everywhere, will be in the hands of the FBI.

Trust

No problem, right? The key will be safe and secure in the possession of the FBI, right?

Right?

The truth is that all men having power ought to be mistrusted. ~ James Madison

Well, it’s possible that, every now and again, the FBI bends the rules just a bit. But they only do so to get the bad guys, right? And we’re one of the good guys, right? We have no reason to fear the FBI having a passkey to our homes. We’ll never give them legal cause to use it, right?

Giving an encryption key and the power to use it to the government is like giving car keys and whisky to teenage boys. ~ paraphrasing P.J. O’Rourke ((Giving money and power to government is like giving whiskey and car keys to teenage boys. ~ P.J. O’Rourke))

Even if we assume the FBI is 100% trustworthy 100% of the time ((That’s a mighty big “if”. I can trust my dog to guard my life, but I can’t trust him to guard my food. Similarly, I can trust law enforcement to guard my life, but I can’t trust them to guard my privacy.)) , we’ve still got at least three big problems.

Problems

1) Once the key is in the FBI’s possession, the FBI computers can be hacked and the key stolen.

2) Once the key is made, the integrity of the encryption will have been compromised and other clever people will be able to copy or create a duplicate of the key too.

3) If the FBI can order a key made, so can every other governmental body. From New York to New Zealand, from Chinatown to China, from South Africa to North Korea — everywhere the builder builds, they will have to provide the governing authority with a master key.

[Image source: Privacy Camp]

History

If you want to see the future, look to the past.

— If you don’t believe the police will unlawfully use the key, then I encourage you to study the history of the fourth amendment
— If you don’t believe the key can be duplicated, then I encourage you to study the history of encryption
— If you don’t believe that government computers can be hacked, then I encourage you to study the history of computing
— If you don’t believe the key will be abused, then I encourage you to study the history of humankind

There is nothing new in the world except the history you do not know. ~ Harry S. Truman

Dilemma

So, do we allow a horrendous crime to occur that we could have prevented? Or do we catch the scumbag and prevent the crime at the cost of subjecting our homes (actually, our smartphones) to a search by anyone powerful enough to demand, or clever enough to copy, the master key?

The answer can’t be “both”. It’s either/or. One or the other. You can’t have your encryption and eat it too. ((You can’t have your cake and eat it (too) is a popular English idiomatic proverb or figure of speech. The proverb literally means “you cannot both retain your cake and eat it”. Once the cake is eaten, it is gone. It can be used to say that one cannot or should not try to have two incompatible things. The proverb’s meaning is similar to the phrases “you can’t have it both ways” and “you can’t have the best of both worlds.” ~ Wikipedia))

Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety ~ Benjamin Franklin ((From the Quote Verifier, by Ralph Keys: “So many quotations are misattributed to Benjamin Franklin that it’s refreshing to consider something Franklin actually said but for which he rarely gets credit. His actual words, in the Pennsylvania Assembly in 1755, were “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.” Twenty years later, in 1775, Franklin wrote in a political critique, “They who can give up essential liberty to obtain a little temporary Safety, deserve neither Liberty nor Safety.” This thought of Franklin’s is sometimes credited to Jefferson.”))

I know where I stand. Where do you stand?

The boisterous sea of liberty is never without a wave. ~ Thomas Jefferson

Author’s Plea: I know it’s asking a lot, but let’s try to keep the political rhetoric out of the comments. The issue is divisive enough without it.

“It is the certainty that they possess the truth that makes men cruel.” ~ Anatole France

Let’s just take it as a given our political opposites are all mindless idiots and move on from there.

“Truth springs from argument amongst friends.” ~ David Hume

Podcast: Apple/FBI Controversy, VR Cautionary Note, Web Music Services

This week Tim Bajarin, Bob O’Donnell and Jan Dawson discuss the controversy around the FBI’s request for Apple to unlock a terrorist’s iPhone, describe some concerns around the usage of virtual reality technologies, and debate the opportunities and challenges for web-based music services.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Why Amazon is More Impressive Than Apple These Days

Over the past year, Amazon’s stock has risen by about 50%, while Apple’s has fallen by 24%. This is a sign that some of Amazon’s longer term investments are starting to pay off, while Apple is struggling a bit to create the new categories for growth a company its size needs in order to maintain what had been an incredibly long and consistent winning streak. Apple and Amazon are both impressive companies, doing impressive things, delivering great products and services and, for the most part, delighting their customers. But I think amidst all the hoopla and near-halo effect surrounding Apple, it makes sense to step back and consider what Amazon is accomplishing.

I’ll admit that, on a personal level, I have a bit of a love-hate relationship with Amazon. I’m dismayed at the effect Amazon has had on brick and mortar stores, and the physical shopping experience in general, in the same way that WalMart, while impressive in its own right, has taken a serious whack out of Main Street. But as a customer, industry observer, and consultant, I am awed at how many good bets Amazon has made and at its consistently high level of execution. Let’s look at some areas:

Amazon Prime. Yes, the b-school case studies have already been written about Prime, and about how it might have been a loss leader in its first couple of years. Free two-day shipping might be the gateway drug, but Prime cements the customer relationship in the fiercely competitive world of digital commerce. Just when you think Prime might not be ‘worth it’ because you didn’t buy a lot of stuff last year or use Amazon’s streaming music service, Amazon throws another goodie into the Prime pot, nearly guaranteeing auto-renewal for its numerous products and services. iTunes once played that role. Now, it’s Prime.

Amazon Web Services. In an already crowded world of cloud and web services, Amazon was brilliant in spotting a vacuum among small and medium-sized businesses that felt ignored by the Silicon Valley heavyweights and wanted to grow modularly. AWS is the IT on-ramp for companies in the way that, in its heyday, AOL was the on-ramp for the consumer internet and Apple was for smartphones.

But it’s fascinating to see how Amazon is continuing to shape and evolve AWS, adding services and capabilities in a way that almost anticipates the needs of its customers (i.e., big data, analytics, IoT) and expanding the unit’s scope. For example, five days ago, Amazon acquired Italy-based NICE, a SaaS vendor of software and services for high-performance and technical computing. On Tuesday, Amazon launched Lumberyard, a new 3D game engine that enables developers to build and operate cloud-based games with the help of AWS.

Market Segmentation. This is an area where Amazon’s strengths are under-recognized. For example, with AWS, Amazon smartly focused on the needs of small and medium-sized businesses. Kindle is another great example. Rather than try to replicate the iPad, as others in the Android sphere have done, Amazon has developed numerous versions of the Kindle that address particular (and, I would argue, under-served) segments of the market: the Paperwhite for readers and a few SKUs of the Fire oriented toward content/media consumption.

The Kindle Fire for Kids shows how Amazon thinks through the needs of a particular segment – in this case, both the kids and the parents. For example, there’s an attractive subscription pricing option, a bevy of curated content appropriate for kids and focused on reading and education (addressing a segment of parents who are concerned about this issue), a solid parental controls capability, and a no-questions-asked two-year replacement policy. I doubt Amazon is making much of a profit on the device itself or even the content, but I’m confident they’re monetizing this in indirect ways and over the long haul.

Television. For all the ink spilled about Apple TV and the fawning over its grand plans to reinvent the model, Amazon TV outflanks Apple TV in just about every respect: Ultra HD, a better bundle of channels and OTT content for those who want to cut the cord, integration with Alexa (which, in my mind, is better than Siri), and a good package of content for Prime subscribers. Plus, they have invested heavily in original content, with more hits than misses. As well, the user experience with Amazon TV is smoother than with the Apple TV. It just works and is not buggy. And you can download content, which is a huge advantage over other streaming services.

Software Experience. This is an area where Apple has lost ground. iTunes feels bloated and outdated. Its productivity applications, from Mail to Calendar to iCloud, are not best-of-breed. Now, Amazon is not in all these areas. It picks and chooses its spots. But I find that, in many cases, Amazon has put function over form. Their stuff just works.

Innovation. I’ll admit that I was skeptical of the Echo. But this product has proven to be a sleeper hit. Why? Well, it’s an attractive piece of hardware that does a lot of little things really well and that, on a collective basis, proves to be surprisingly useful and a bit magical. Sort of what we thought about the iPhone and the iPad when they were first introduced. And with Alexa, they’ve taken voice recognition to a new level. With the Echo, I believe Amazon has, so far, outflanked Apple’s HomeKit in the smart home race.

I also believe Amazon panders a little less to the investment community than Apple does. Apple is under relentless pressure to meet what have become unrealistic expectations, which I believe has caused the company to release products or features that are either not fully thought through or not quite ready for market. Don’t get me wrong here – the iPhone is still the best phone, Apple’s computers are still the best computers, and their ecosystem is still the best ecosystem.

And Amazon is by no means perfect. They have had some major product failures, such as the Fire Phone (though they cut their losses pretty quickly). They have had working conditions in the U.S. akin to those Apple has gotten pummeled for in China. The New York Times piece last year on what it’s like to work at Amazon, even if it were only half true, is disturbing. The company does not always play well with others. And Amazon is pretty opaque about discussing the performance of its individual parts. But you can see in the company a confidence and a connection with customers. This might seem a trite example, but you sense it in their TV commercials, in a way that you did in Apple’s, circa 2011-14. It’s an interesting time to reflect on what this founding member of the dot-com era has pulled off over the better part of 20 years.

Apple’s Principled Stand

On Tuesday evening, a magistrate judge at a United States District Court in California issued an order compelling Apple to assist FBI agents in breaking into the phone used by one of the suspected shooters involved in the San Bernardino shootings in December 2015. Apple has formally objected to the order, explaining in a public letter to customers over Tim Cook’s signature why it feels this would be a dangerous step. Reactions to the situation have been somewhat predictable, with those on both sides adopting familiar positions. In reality, the situation is fairly nuanced, and that nuance is largely being missed.

Apple’s stance on encryption is clear

The current case is certainly not the first glimpse we’ve had into Apple’s stance on privacy, security, or even technical issues such as encryption. Since taking over as Apple CEO, Tim Cook has made privacy and security major elements of Apple’s positioning and differentiation, and he’s hammered these themes repeatedly, including in a previous letter to customers specifically on privacy. On encryption, Tim Cook has been one of the most vocal and strident opponents of the idea that governments should have backdoors to bypass encryption and gain access to devices. The reason for that stance, in turn, is clear: giving one entity a backdoor potentially gives any entity similar access, should the tools involved fall into the wrong hands. It also sets a precedent in which Apple might feel obligated to provide any government around the world the same tools it provides to one government, or to begin picking and choosing which governments’ and jurisdictions’ requests it will honor, which is itself a slippery slope.

This case isn’t about encryption per se

What’s tricky is this case isn’t about encryption per se. Tim Cook seeks to tie the FBI’s request to the broader issue of encryption by painting both with the “government backdoor” brush. However, this case is actually about brute forcing a passcode and not about encryption itself. As others have written, this wouldn’t even be possible on newer devices which include the Touch ID sensor and the associated Secure Enclave and the encryption protections that go with them. But this case concerns an iPhone 5C which doesn’t have those elements. However, what ties encryption and this case together is that, in both cases, governments want Apple to create software that lets law enforcement circumvent security protections on these devices, hence the “backdoor” phrase. It’s arguably nitpicking to debate whether the backdoor is permanently left open or whether law enforcement needs Apple to unlock it every time it’s used.
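To make the distinction concrete, consider how small a numeric passcode’s search space is. The sketch below is purely illustrative (the `check` callback is a hypothetical stand-in for the device’s passcode verification, not any real API): without the escalating delays and auto-erase threshold iOS imposes, exhausting every four-digit code takes a fraction of a second, which is exactly why removing those software protections is what the FBI is asking for.

```python
from itertools import product

def brute_force_passcode(check, length=4):
    """Try every numeric passcode of the given length until `check` accepts one.

    `check` is a hypothetical verification callback; a 4-digit numeric code
    has only 10**4 = 10,000 possibilities.
    """
    for digits in product("0123456789", repeat=length):
        guess = "".join(digits)
        if check(guess):
            return guess
    return None  # no passcode of this length matched

# Example with an assumed secret code:
secret = "7301"
found = brute_force_passcode(lambda guess: guess == secret)
print(found)  # prints "7301"
```

In other words, the real defense is not the secrecy of the passcode scheme but the limit on attempts; software that removes that limit is, for practical purposes, equivalent to removing the lock.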

An order for access to a specific device

The FBI has, however, asked specifically for Apple to assist it in accessing a single device, rather than to provide a blanket backdoor. Both the Bureau and the White House have suggested this negates Apple’s claims that this approach would be applicable to any device at any time. The order even provides for Apple to keep the device in question on its premises while it loads the software and offers remote access to the FBI’s investigators for the purpose of brute-forcing the passcode. In a technical sense, this would appear to make it impossible for the FBI to take the software used to hack into this one device and apply it to others, at least without a new warrant.

An issue of precedents

However, the biggest single problem with what Apple is being asked to do in this case is the precedent it sets, both from a strict legal perspective and otherwise. From a legal perspective, once Apple is compelled to provide the FBI with the means to access information on this one device, the precedent will permit it to be compelled to do so again. That applies not just to the technical specifics of this case, but the legal structure under which Apple is being compelled to assist – i.e. creating new software (malware, effectively) which can bypass security protections built into a device. Although this order involves a one-off, after-the-fact solution, it also creates the risk Apple might be compelled to design its standard software in such a way as to make this possible or easier on other devices going forward. Hence, this is the beginning of a slippery slope that could easily lead to just the kind of outcomes Apple is trying to avoid with encryption, even though this case is technically about something else.

Unappealing test case

One of the biggest challenges with this particular case is that the specifics make it very unappealing for any other tech company to jump to Apple’s defense. In a case where a reporter was protecting a whistleblower, for example, it might be far easier to garner public support for defending her right to privacy and security. I’ve seen quite a few people suggest the FBI (which favors encryption backdoors) has likely chosen this case as a precedent setter precisely because it’s so hard to argue for the rights of the subject in the case. Although big tech companies have made some supportive comments about encryption over the past year, including a joint letter to President Obama last June, none has yet forcefully come to Apple’s defense in this particular case. I suspect that’s a reflection both of their weaker commitment to the general cause and of their queasiness about engaging with this specific case.

A principled stand

The fact this case is so unappealing is precisely what makes Apple’s stand a principled one. A stand based solely on the optics of a particular case wouldn’t be worth much at all, but a stand on such a politically charged case shows just how serious Apple is about this issue. Cook makes clear in the letter that Apple shares the government’s aims in bringing terrorists to justice, so this is entirely about the means and not the ends. And Apple’s stance is not just about encryption, but about the inherent privacy and security of Apple’s devices. Apple’s argument is that ordinary people want devices that come with the kind of privacy and security guarantee Apple offers baked in, not because they have any nefarious intent, but simply because they want to protect their private and sensitive information. Tim Cook has argued that terrorists and criminals who want to keep their information out of the hands of law enforcement will always find ways to do so. That argument is backed up by a recent Harvard study on the easy availability of encrypted communication solutions.

The courts, and the court of public opinion

This whole issue is about a court order and ultimately judges will determine whether Apple has to comply with the order as it currently stands. As such, no amount of lobbying or public statements by Apple or others is likely to sway the outcome, which will depend on individual judges’ interpretations of the facts of the case combined with the applicable laws (though I’ve no doubt Apple appreciates the support of the EFF and others who have promised to file amicus briefs). Arguably, therefore, it matters little whether other tech companies jump in on Apple’s side because they likely can’t affect the outcome.

But this case will almost certainly bring to the forefront a debate about the broader issues involved, which is what Apple has wanted all along. Once that happens, I would hope other tech companies will indeed weigh in on the issue and do so far more vigorously than they have so far. The biggest challenge is that this debate will take place in a public sphere in which discussions over complex matters are almost always over-simplified. Already, we have presidential candidates and congressmen weighing in on the issue on both sides, pandering to their bases without any real understanding of the intricacies or the broader implications. Although Apple has wanted a legislative solution all along, it now risks being dragged into a very public battle in which Exhibit A will be this court case about a terrorist’s iPhone, which may make it much tougher to win.

How the Smartphone is Redefining Dating Norms

Many of us covet that classic love story: meet unexpectedly, fall madly in love, age gracefully together.

But the average age of marriage is creeping up. And the when, where, and how of meeting our spouse-to-be are changing too, driven by the digital age we’ve been thrown into.

Today’s dating environment is more diffuse and more competitive than ever, as dating apps compete for our attention and affection, all the while gathering and analyzing our information. It is fundamentally redefining the dating norms we’ve known for the past half century. But is the data driving us to make the right romantic decisions?

Digitizing the matchmaking process makes us more reliant on data than ever. Before Match.com launched in 1995, chemistry — with an assist from serendipity — was the primary driver of matchmaking throughout most of modern Western culture.

The first generation of dating apps put the onus of finding a match squarely on the user: scroll through pages of profiles, scanning photos and examining other sundry details. Today’s dating apps rely on GPS, algorithms and, increasingly, how you use the service to define compatibility, make a match and motivate a first date.

Tinder, one of the most well-known and heavily used dating apps today, has 50 million users in 196 countries and produces 26 million “matches” a day. In November, Tinder released a new algorithm that incorporates both technical and informational data points.

Digital dating platforms provide the illusion of unlimited choice, challenging traditional dating norms. Today’s dating app users are accustomed to having multiple, simultaneous digital conversations. This behavior would be nearly impossible in person but is incredibly common in spaces enabled by digital communications.

Perhaps as a way to fight the illusion of unlimited choice and capitalize on dating data, some dating apps like Hinge and Coffee Meets Bagel are limiting the number of recommended matches they provide.

Today, about five percent of Americans in a marriage or committed relationship met online and 15 percent of Americans have used online dating sites or mobile dating apps. And the rapid rate of growth in digital dating suggests this figure is poised to increase. However, like love itself, digital dating isn’t all rainbows and butterflies.

Roughly one-third of online dating service users have never actually gone on a date with someone they’ve met online. Many users seem to be only marginally connected and committed, making it harder to find the signal through the noise.

It is much easier to like someone in the digital universe of matchmaking because it is equally easy to stop liking them. Online dating cycles are much shorter than analog courtships. In almost all instances, you simply click “unmatch” and you are disconnected from them entirely, because the social norms that exist in the physical world do not apply.

First impressions have been replaced by digital images, which have become incredibly important elements of the digitization and redefining of dating norms, thanks in large part to the proliferation of, and the ease of use of, smartphones with cameras, filters and photo editing software.

Dating apps allow you to share multiple photos with would-be matches. Like a peacock spreading its feathers to attract a mate, we do the same with a collage of photos. But in the digital realm, it’s subtly different: We get to choose (and digitally enhance) the feathers we portray. We pull from a million photos until we have the perfect array and then use these photos as a sort of dating “resume”.

In almost all instances, these types of photos tell us a lot more about what the person is looking for in a match than about themselves. Before we’ve even said hello, we know more than any opening conversation could have provided historically.

The full ramifications of this new, digitally defined era of dating are still unfolding.

Some studies suggest couples who meet online are three times more likely to divorce. Only time will tell if statistics like these hold as the popularity of the medium grows.

While there have always been unspoken dating norms, they are now being defined (and often redefined) by smartphone apps and internet sites. Because the rules are fixed within the parameters of the software, what were once loosely understood norms are becoming strictly enforced rules.

In a highly competitive environment, apps are implementing new rules to differentiate themselves and, in the process, are redefining dating norms.

Digitization continues to bring us numerous new markets, and in the process redefine some, like matchmaking, that are as old as time.

Can Web Music Survive?

While no one would dispute the Internet’s incredibly positive impact on so many aspects of our lives, it is interesting to see that long-held assumptions about it don’t always ring true.

Take, for instance, the common notion that the web is the ideal distribution platform for all kinds of goods and services, particularly digital media. There’s an entire segment of the world’s economy, in fact, which is arguably based on that hypothesis.

But more than two decades into the internet revolution, there seem to be several glaring cases where web-centric businesses built on these assumptions aren’t really living up to their potential—at all.

One of the most obvious examples is music. For so many reasons and in so many ways, the distribution of music digitally via the web seems like a match made in heaven. Music plays an important role in most people’s lives—an extremely important role for some consumers—so there’s strong built-in demand, and the small size of digitized music files makes them seemingly easy to transfer via the enormous range of routes over which we now have access to the internet.

And yet, here we are in 2016 with more news about the struggles of online music businesses than success stories to share. Market leaders Spotify and Pandora continue to lose money, as do players such as SoundCloud, and smaller services like Rdio and now MixRadio close on a frequent basis. Even the very biggest names in tech—including Apple and Google—have struggled to find a lasting, profitable business model for their large investments in digital music.

For a long time, of course, Apple had great success with iTunes. So much so, in fact, that they changed the nature of the music business. Unfortunately, that success also brought with it an entirely new, and more dour perspective from the traditional music owners—large music labels—that’s making new business ventures in music significantly more challenging.

Equally important, tastes in digital music consumption evolved from buying and downloading songs to streaming them. Consumers have become captivated by the option of getting access to an enormous range of musical choices, particularly in conjunction with the unique music discovery and social sharing capabilities that these services offer.

But streaming services don’t seem to be the ultimate solution either. Most are ad-based and struggle with converting free customers to paid ones. In addition, there’s growing resentment in the music industry about the royalty payments made to musicians from these services. In fact, at this week’s Grammy awards, there was an impassioned plea from the music industry about the inequity of receiving tiny fraction-of-a-cent payouts for streaming music.
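Some back-of-the-envelope arithmetic illustrates the complaint. The per-stream rate below is an assumption chosen for illustration, not a figure from any actual royalty contract:

```python
# Assumed per-stream royalty of $0.0007 -- a fraction of a cent (illustrative only).
per_stream_rate = 0.0007
streams = 1_000_000  # one million streams

payout = per_stream_rate * streams
print(f"{streams:,} streams -> ${payout:,.2f}")  # prints "1,000,000 streams -> $700.00"
```

A million streams is a substantial audience, yet under this assumed rate it yields only a few hundred dollars before a label takes its share, which is the sort of inequity artists were protesting at the Grammys.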

The problem is, despite these concerns about payouts to the music industry, online music companies still have to invest significant money in order to get access to new music. Perhaps to no one’s surprise, the real issue seems to be in how that money is being distributed.

The other challenge is one that seems similar to many other web-based media properties. While people acknowledge there’s value in content, paying for that content alone doesn’t seem to be a viable way of doing business long-term. In the case of music sites, because they can’t seem to make money selling the music itself, they’re hoping to do so selling tickets to concerts, as well as artists’ t-shirts and other promotional items. It’s not likely to lead to gangbuster profits, but this more indirect model may at least lead to businesses that can survive.

Longer term, however, there’s going to have to be some serious soul-searching and re-examination of long-held assumptions about internet business models, because they’re clearly not all spun from the gold of which many believe the web is made.

Podcast: Twitter, Wireless Connectivity, Sony VR

This week Bob O’Donnell, Tim Bajarin and Jan Dawson discuss the recent earnings from Twitter, describe some of the new wireless connectivity enhancements for WiFi and LTE, and debate the potential for consumer virtual reality products like Sony’s forthcoming Playstation VR offering.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Tapping the Brakes On the Virtual Reality Hype Machine

This week in Los Angeles, I attended the first ever Vision Summit, a gathering of individuals and organizations working in and on Virtual Reality (VR) and Augmented Reality (AR). The opening keynote featured executives from key players such as Oculus, Sony, Google, NASA, and Valve. Unity Technologies hosted the event and, during the keynote, the company’s CEO, John Riccitiello, said something remarkable. He said there is too much hype around AR/VR today. He said unrealistic expectations threaten the enormous long-term potential he sees for the technologies and the market.

I couldn’t agree more.

Riccitiello went on to cite a January 2015 forecast that showed a VR hardware installed base of nearly 40M units by the end of 2016. I’m not going to cast stones regarding that forecast as it’s more than a year old. Also, as anyone who has attempted to predict a market where devices haven’t started shipping yet knows, it’s a messy business of slipped launch dates and broken assumptions. Suffice it to say, this number is simply too high.

The problem with such outsized projections for a nascent market is that, when actual shipments fail to reach that total, some will suggest the market isn’t living up to its potential or it’s merely a fad. Riccitiello called the difference between such a forecast and then the eventual reality the “gap of disappointment.” He went on to say he thinks VR growth will take longer, but ultimately, the market will be bigger than analysts are currently predicting.

I’m not as convinced about that last point yet. But after two days of deep dives on the topic of VR, I do have some key takeaways that will help drive my future forecast. They include:

Screen-Less VR Viewers Will Drive Early Volumes

There was a fair amount of hand-wringing by tech pundits when Oculus announced the $600 price of its first consumer Rift product at CES. People complained the price was too high. The company still presold all of the units it made available, but the reaction was telling regarding the willingness of mainstream users to spend that amount of money on VR. Plus, you need a PC with high-end graphics and plenty of computing power to get the best experience. For mainstream users, screen-less viewers such as Samsung’s $100 Gear VR are the obvious first step. As Tim Bajarin noted in a column late last year, the experience on the Gear VR is pretty good. It is also a clear step up from even more basic experiences such as Google Cardboard. In 2016, expect to see a range of devices with similar capabilities to Gear VR, for a wider range of phones and at lower price points. I expect the Chinese smartphone vendors to embrace this category with gusto.

Tethered Head-Mounted-Displays (HMDs) Will Have a Slower Ramp

As noted, in 2016 we’ll see Oculus ship, as well as HTC’s Vive and Sony’s PlayStation VR. I call these products tethered HMDs (Head Mounted Displays). Like Oculus, Vive will require a high-end PC; Sony’s product will need a PlayStation 4. At the event, Sony executives pointed out the company has already shipped 36M PlayStation 4s. This installed base, along with the plug-and-play nature of the product, will give Sony’s product a distinct advantage out of the gate. But over time, the PS4 installed base will grow at a much slower pace than that of VR-capable PCs. The high price of all three products will necessarily limit the total available market here. A critical question going forward: How long will the tethered experience require a PC or console? In other words, how soon will we see phone-tethered options in the market, and what level of experience will they drive?

Touch Interaction Is Critical; Balance is Tricky

As I sat in on several developer sessions, a key theme emerged: The fact the first three major tethered HMDs all have touch-based control options with similar features is telling. Fundamentally, establishing a presence in a virtual reality requires the ability to interact with that reality in ways that feel as natural as possible. Being thrust into a reality without hands, or at least controllers acting as hands, is severely limiting. One developer, working on a cross-platform game for the HTC, Sony, and Oculus devices, noted that while the touch controllers for each product look different, they all have similar basic interaction modes, which points to a certain fundamental correctness of the approach. Robust touch capabilities will be an area where screen-less viewers will consistently struggle to compete with the larger tethered rigs.

One of the more interesting sessions covered one developer’s attempts to address a fundamental challenge for achieving virtual reality’s holy grail of total immersion: tricking the body’s sense of balance. While today’s VR can address sight and sound well, touch to a limited extent, and taste and smell not at all, what he called the “sixth sense” of balance is very hard to fool. Essentially, our vestibular system, or inner ear, is our gyroscope, tracking the angle of the head and body. When that angle doesn’t match what our eyes or ears are telling us, a disconnect occurs that, at best, breaks the sense of immersion and, at worst, makes you dizzy. To fix this problem, VR will require wireless HMDs combined with advanced motion capture and new motion-mimicking algorithms. This problem will likely take years to address.

Finally, Content is King

The industry can talk at length about hardware advances and improvements in the software capabilities of the technology. But, if the content isn’t there, none of this will matter. Based on the awards event hosted at the Summit, interesting content already exists and much more is on the way. At the moment, much of this content is driven by independent developers and producers. It’s not hard to see why Hollywood, burned by 3D, is taking a more cautious approach to virtual reality. The simple reality is you can’t just port old content and expect a good experience. The key here, espoused by numerous speakers, is the fundamental understanding that everything, from documentaries to Hollywood movies to games to instructional videos, will require a new type of storytelling in VR.

In the end, two days at the Vision Summit left me with a significantly more evolved view of both the VR and AR market. The bottom line is VR is clearly going to ramp sooner than AR and it’s going to be driven by consumers first. The hardware winners and the exact angle of that adoption curve will become clearer in the next 12 months and market watchers should beware of outsized expectations. Further, while AR also has a bright future, in the near term many early use cases will be driven by traditional phones and tablets, while next-generation head-mounted AR displays will launch first in commercial settings where companies will absorb higher prices and technical challenges in exchange for greater productivity and other benefits.

The Danger of Over-Monetization

Monetization is one of those horrible neologisms that belong to the modern era, especially in Silicon Valley circles. So I’ll apologize in advance: I’m about to go one worse and talk about over-monetization.

In fairness, the term was coined earlier this week by Amir Efrati of The Information in a tweet, though he left out the hyphen. But regardless of the word (or words) we use to describe it, there’s a real danger among some ad-centric tech companies that they begin to overdo things in their bid to generate revenue, especially from mobile users, and that this comes back to bite them. There are two companies in particular that stand out as being at risk of this phenomenon: Google and Twitter.

Google and mobile search

In its earnings report for this past quarter, Alphabet appeared to signal it’s gaining some real momentum around mobile search advertising and analysts were heartened to see this. Certainly, revenue growth has been stronger of late and it appears it’s Google’s mobile search products that were the major driver. Given the concerns about Google’s ability to replicate its success in desktop search on mobile, that’s reassuring on the surface. But think about how Google is achieving this mobile search growth. If you’ve done a mobile search on Google recently, especially if it was for a product or a search term that could be interpreted as a product search, you were likely presented with a screen full of ads. Take this sample search I just did for “Flowers” while sitting at the San Francisco airport:

Google Flowers2

Look at that first screen I’m presented with. The first two items are clearly ads, but perhaps you might think the third is organic content. However, if you look at the second screen, you’ll see that it, too, is an ad. Below that is a map provided by Google with several more listings for local flower stores. It’s not until you scroll two full screens down that you begin to see the first organic search results. Now, for some people, the ads and/or the map and local listings may be just what they’re looking for. But if what you want is what Google once provided so well – organic search results – then you’re having to work quite hard to find them. For other product searches, Google will serve up its own Google Shopping results in a similar way, again often pushing the traditional organic results down to the second or third flick of the finger.

Google’s biggest challenge in mobile is that, unlike a Facebook or Twitter, it has no stream or feed into which to insert ads. Its organic search results have always been so good, many users will click on the first one they see. As such, all its ads have to be crammed into the top part of the page, before that organic link shows up, because users generally won’t scroll down any further. On the desktop, this is less important because organic results have often still made the first screen. But, on a mobile device, it means the user has to work hard even to find the first organic result. There’s a real danger that, in attempting to cram more and more ads into the top part of the screen, Google is going to make its results less relevant and more frustrating for users. And users have alternatives: on iOS, users may well see results from Apple’s Spotlight search before they even get to Google’s results and, in some cases, they may pre-empt the Google search entirely. And for product searches, users who really want to see Amazon results may just skip the Google search in favor of the Amazon app. There’s a tipping point at which Google will go too far in pursuit of a higher ad load and end up pushing users away rather than generating ever-higher mobile ad revenues.

Twitter and ad load

Twitter’s results this week were the same mix we’ve come to expect recently: terrible user growth offset by strong growth in average revenue per user. But that ARPU growth is a direct result of a higher ad load. Twitter’s management was challenged on its earnings call by analysts seeking to understand how much room for additional growth there is here. But management dodged both parts of the question, referring to higher international ad loads rather than talking about the US, where there’s some evidence they’re reaching a ceiling, and refusing to talk about the impact on user engagement. But Twitter launched a new ad product this week which, like Google’s mobile search ads, is designed to be the first thing a user sees when loading up a new Twitter screen. It was this product which prompted both Efrati’s coinage of “Overmonetization” and a tweet of my own, which was the genesis of this post.

My concern here is that Twitter, like Google, is mortgaging the customer experience in pursuit of ever-higher ad revenues. Yes, in the short term, Twitter can keep generating higher and higher ad revenues per user by simply showing more ads. But, at some point, it will cross the line from tolerable to egregious and users will either stop using Twitter or start using third-party clients which don’t show ads. Either way, Twitter will lose the ad revenue it’s been so aggressively pursuing.

A threat not unique to these two companies

Though I’ve singled out these two companies as being at particular risk from the threat of over-monetization, it’s not actually unique to them. Facebook’s Instagram has been showing more ads lately, which was likely, in part, a driver of Facebook’s strong ARPU growth recently. But it risks alienating users who’ve objected to any ads right from the beginning. I’ve certainly noticed the increase in ad load and it’s becoming more and more bothersome. Essentially, any company with an ad-centric business model is going to be constantly tempted to turn up the volume on advertising to grow revenues, especially if user growth is slower than it would like. Television channels in the US have been steadily increasing the ad load over time as well, though lately some have backed off a little in an attempt to retain customers. Television executives have learned this lesson the hard way – there’s a point at which users start to find alternatives to sitting through your ads and none of them ends well.

Perhaps all this helps to explain why, arguably, the two most successful ad-centric online companies – Facebook and Alphabet – are investing so heavily in new initiatives that have non-ad-based business models. At Google, this includes smart home hardware, fiber broadband services, self-driving cars, and life sciences. At Facebook, it includes virtual reality gear and WhatsApp, which has firmly eschewed ad-based business models. These companies see the writing on the wall about an inevitable ceiling both to the overall ad market and their ability to tap into it and are wisely investing in new products and services which aren’t bound by that ceiling.

Crowdfunding and Its Impact On Product Creation

In my many years of working with companies developing consumer hardware products, few developments have had as much impact on the creation process as crowdfunding. Running a crowdfunding campaign on a platform such as Kickstarter or Indiegogo has now become as important and as routine as the other steps in the development cycle.

On the surface, crowdfunding seems like an easy way to raise money and avoid giving up equity. It’s also a way to get initial feedback from potential customers about your idea, a real-life marketing test before you’ve built your product. A positive response can validate a product idea and encourage more investors. A lackluster one may mean the idea doesn’t resonate and may not be as good as you thought. It reminds me of companies in the 80s that put a tiny ad for a product in Popular Science to see how many people would order it before it was even developed.

Raising money for hardware has always been a problem with venture investors, whose dislike for hardware products goes back to before the disk drive industry. So crowdfunding has become a welcome alternative. While investors may dislike hardware, millions are anticipating the next hardware gadget, and plenty are willing to fund it.

But, I’ve found the money raised is accompanied with some serious issues that significantly affect the creation process. I call them “Kickstarter dollars” and they have a lower exchange rate than normal dollars.

In the usual development cycle, a product is announced after its design is conceived and engineered, and samples are built, tested, and refined a few times. Based on market testing and feedback from alpha and beta testers, the product usually goes through a number of changes. Its cost and schedule are the outcome of all of these activities.

The product comes to market when it’s ready, not on a predetermined date. The design process follows a logical, proven path. Cost accounting and establishing the MSRP is not done until the design is completed, the manufacturer has had a chance to build a few hundred units, and negotiations take place. There’s just no way to accurately estimate the cost of a new product before it’s designed.

But crowdfunding flips this around. The rules require that the product’s specs, cost, and schedule be committed near the beginning of the development process, often before its features are set, staff is hired, a manufacturer is selected, or costs are determined. As a result, the campaign makes commitments that are often impossible to meet.

Developing a product is quite complex and puts a lot of pressure on the team members. Adding the element of crowdfunding adds even more. Everything is done under the scrutiny of the backers. While they have no visibility into day-to-day issues, you can feel their presence and their need for frequent updates.

When the product takes longer and costs more, and the pressure mounts, the company is more likely to prioritize schedule over function or quality. The product that is released is more likely to be flawed.

The biggest problems occur when crowdfunding is done by those with limited experience developing a product and they set unrealistic goals. More often than not, the product never sees the light of day. At last count, about 70% of the campaigns that were funded failed to deliver the product.

Products designed by experienced design teams, who are better able to estimate and plan, have a much better chance of success, with or without a crowdfunding campaign. Those teams know the crowdfunding campaign is the easy part of the development cycle.

The Growing Choices in Wireless Connectivity

Sometimes in order to see the big picture, you have to start with a deep dive.

At a recent two-day workshop on connectivity hosted by modem and radio chipmaker Qualcomm, I was bombarded with technical minutiae on everything from the role of filters in the RF front end of a modern modem, to the key elements of 3GPP Release 13, and the usage of carrier aggregation-like functions in upcoming technologies that leverage unlicensed 5 GHz spectrum.

What really hit me about the discussion, though, was how many different ways are now available to wirelessly connect—and how many more are still to come. In addition to the more common forms of WiFi and LTE, there is a tremendous range of new varieties of both standards, either already in place or being developed. These additions are adapting to and adjusting for the real-world limitations that earlier iterations of these technologies still have, and will help us fill in the gaps of our current coverage. Put simply, it’s Connectivity 2.0 (or 5.0 or whatever number you choose to assign to this technology maturation process).

In these days of 4K video streaming and our seemingly insatiable thirst for wireless broadband connections, that’s important. Connectivity has become the lifeblood for our devices—as essential to them as water is to us—and the need to have faster, more consistent connections is only going to grow.[pullquote]In addition to the more common forms of WiFi and LTE, there is a tremendous range of new varieties of both standards, either already in place or being developed.[/pullquote]

In the case of WiFi, the next standard we have to look forward to is 802.11ad. Think of it as the firehose of WiFi—it can’t deliver water very far, but within that confined area, it delivers the water fast—really fast. The 802.11ad standard uses radio waves at 60 GHz to communicate—very different from the typical 2.4 or 5 GHz bands used by other versions of WiFi—and by doing so it can deliver speeds as fast as most wired network connections (up to 5 Gbps), but you’re limited to being in the same room as the router/access point that’s sending out those signals.
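That room-size limit isn’t arbitrary: free-space path loss rises with frequency, so at the same distance a 60 GHz signal arrives roughly 21.6 dB weaker than a 5 GHz one, and 60 GHz waves are also poor at penetrating walls. A quick back-of-the-envelope sketch (illustrative only; it ignores antenna gain and the beamforming 802.11ad uses to compensate):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Loss over 10 m for a typical WiFi band vs. 802.11ad's 60 GHz band
loss_5ghz = fspl_db(10, 5e9)    # ~66.4 dB
loss_60ghz = fspl_db(10, 60e9)  # ~88.0 dB
print(f"Extra loss at 60 GHz: {loss_60ghz - loss_5ghz:.1f} dB")  # ~21.6 dB
```

In practice, 802.11ad makes up some of that deficit with highly directional beamforming, which works well within a single room but not through walls.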

Though the standards committees are still finalizing details, fierce competitors Intel and Qualcomm just publicly demonstrated compatibility between their two offerings last week, ensuring we’ll see the first 802.11ad-equipped products later this year.

Another forthcoming WiFi improvement that’s a bit further out (think 1-2 years) is 802.11ax (don’t even get me started on these crazy naming conventions…), which you might want to think of as the sprinkler system of WiFi.

We’ve all been to conventions, concerts, sporting events and other large venues that, while they technically offer WiFi, don’t exactly offer a great experience to everyone. Sometimes you connect, sometimes you don’t, but the speed is never great. The goal of 802.11ax is to deliver consistent, quality connections and speeds in these congested environments, as well as in places like multi-unit housing complexes and shopping malls.

We are also starting to see efforts to extend LTE for applications like these. Key suppliers to the telecommunications industry are making an effort to use what is called unlicensed spectrum—that is, radio bands that are not specifically purchased and used by telco carriers for their own networks—to carry broadband data and, equally important, not interfere with existing WiFi traffic. Qualcomm is working with a variety of other major players, including Nokia, Ericsson and Intel, on something they’ve dubbed MulteFire, which they hope will bring LTE-like performance with WiFi-like simplicity into the mainstream over the next few years. These companies are expected to make more announcements at the upcoming Mobile World Congress trade show in Barcelona, Spain.

Barcelona will also be the site of more news on the granddaddy of all connectivity developments—5G. Though real-world implementations probably won’t happen in the US until about 2020, many developments, from test beds, to radio technologies, to infrastructure elements, to applications, are expected to be announced at the show. 5G is being specifically designed to handle extreme variations in frequencies—from the low MHz to millimeter wave 50 GHz plus—as well as enormous ranges in power consumption, all with the hope of covering every application from low-power IoT to enormous, real-time data transfers.

Keeping track of all these new connectivity options certainly won’t be easy, and getting access to them will require buying devices that specifically support the new standards. The range of options we can look forward to is impressive, however, and will help wireless connectivity become an even more ubiquitous and reliable part of our everyday lives.

Part 2: Branding Tech Companies

On January 10, 2016, I wrote an article entitled, “Platforms — Past, Present and Future”. The comments on the article made it clear to me there was massive confusion surrounding the meaning and purpose of branding in general and value, premium and luxury branding in particular.

This is part two of a four part series on and around branding. Part 1, “Android is a Stick Shift and iOS is an Automatic Transmission”, can be found here. Part two focuses on Brands and how they are used by tech companies in general.

If you don’t know jewelry, know the jeweler. ~ Warren Buffett

Few of us know anything about the products we’re purchasing and even fewer of us know a trusted expert to advise us. Brands are a communication tool for the seller and a shortcut to understanding the product being purchased for the buyer.

Definitions

Brand Equity
Brand equity — or simply ‘Brand’ — is the premium a company can charge for a product with a recognizable name, as compared to its generic equivalent. Companies can create a brand for their products or services by making them memorable or easily recognizable or superior in quality or reliability.

A brand’s value is merely the sum total of how much extra people will pay, or how often they choose…one brand over the alternatives. ~ Seth Godin

Commodity
The opposite of a brand is a commodity item with little or no perceived differentiation from like products.

Differentiate or die ~ Jack Trout

A commodity is merely one of many options available to the consumer. When every product is nearly the same and price is the only significant differentiator, consumers don’t look for the best brand, they look for the best price.

Remember my mantra: distinct… or extinct. ~ Tom Peters

Value Brand
A Value Brand has one or more significant advantages over its competitors — distribution, automation, location, limited availability, etc. — but the primary way in which Value Brands attract their customers is via lower prices.

There are two kinds of companies, those that work to try to charge more and those that work to charge less. We will be the second. ~ Jeff Bezos

Some well known Value Brands are K-Mart, Walmart, Amazon, and IKEA.

Let me make one thing very clear. There is absolutely nothing wrong with or inferior about a Value Brand. A Value Brand may charge less but that doesn’t make them less of a brand. Value Branding is just one of several different — and highly successful — branding strategies. The vast majority of the items we own and use every day are purchased from Value Brands.

Premium Brand
A Premium Brand is a Brand that holds a unique value in the market through design, engineering or quality. Premium goods are more expensive, i.e., they charge a “premium” because they have, and maintain, a significant advantage over competing products. Some examples of Premium Brands are Disney, American Express credit cards, and Bose speakers.

More than a 1-to-1 ratio of profit share to market share demonstrates a company’s ability to differentiate its products, provide more value than its competitors, command higher prices, charge a premium and enjoy pricing power. ~ Bill Shamblin

Corollary: Business is hard because differentiation – for which you can charge a premium – is hard. ~ Ben Thompson (@monkbent)

Veblen
The extra features of a Premium Brand provide the justification for its higher prices vis-à-vis a Value Brand. On the other hand, a Luxury Brand’s price greatly exceeds the functional value of the product. Qualities common to Luxury Brands are over-engineering, scarcity, rarity or some other signal to customers that the quality or delivery of the product is well beyond normal expectations. The Luxury Brand’s extraordinary excesses provide a rationale for the buyer to pay the brand’s extraordinary prices.

We’re overpaying…but [it’s] worth it. ~ Samuel Goldwyn

People can, and do, argue endlessly about where the line between Premium and Luxury Brands should be drawn. However, for our purposes, the difference between Premium and Luxury is not so consequential that we need to delve into those nuances. What is important is understanding the concept of Veblen goods.

Give us the luxuries of life, and we will dispense with its necessaries. ~ John Lothrop Motley

Veblen goods are luxury goods — such as jewelry, designer handbags, and luxury cars — which are in demand precisely BECAUSE they have higher prices. The high price encourages favorable perceptions among buyers and makes the goods desirable as symbols of the buyer’s high social status. Veblen goods are counter-intuitive because they run against our understanding of how the laws of supply and demand are supposed to work.

I have the simplest tastes. I am always satisfied with the best. ~ Oscar Wilde

Some Brand examples of Veblen goods are Chanel, Louis Vuitton, BMW, and Mont Blanc. Rolex, for example, has created a watch that works at depths of up to 200 meters. Who the heck is going to be fool enough to go scuba diving while wearing a Rolex watch? However, this is exactly the kind of over-the-top quality that helps buyers justify their luxury purchases.

One way to distinguish a Premium Brand from a Veblen Brand is to project what would happen if the brand’s prices were lowered. A significant decrease in the price of a Premium Brand would likely increase sales, while simultaneously decreasing margins. However, a significant decrease in the price of a Veblen Brand would likely DECREASE sales (and decrease margins) because the lowered price would destroy the brand’s cachet.
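That thought experiment can be made concrete with a toy demand model. All numbers here are arbitrary and purely illustrative; the point is only the shape of the two curves:

```python
def premium_demand(price: float) -> float:
    # Conventional downward-sloping demand: a price cut sells more units
    return max(0.0, 1000.0 - 2.0 * price)

def veblen_demand(price: float, prestige_floor: float = 300.0) -> float:
    # Toy Veblen curve: above the prestige floor, a higher price signals
    # status and sells MORE units; below it, the cachet is gone and
    # demand collapses to a commodity-level residual
    if price >= prestige_floor:
        return 200.0 + 0.5 * price
    return 50.0

# The same price cut, from $400 to $250:
print(premium_demand(250) > premium_demand(400))  # True: Premium sales rise
print(veblen_demand(250) < veblen_demand(400))    # True: Veblen sales collapse
```

The hypothetical `prestige_floor` is the point at which the lowered price destroys the brand’s cachet, which is the asymmetry described above.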

Although Veblen products are very prominent and therefore receive lots of attention, they are also fairly rare, at least in proportion to the Premium and Value Brands in their respective categories.

Question: Why Don’t More Companies Become Premium Brands?

First, most companies don’t strive to become Premium Brands because it’s entirely unnecessary. Being a Value Brand is a very legitimate and very profitable strategy. Walmart is a Value Brand and it’s one of the richest companies on the planet.

Second, most companies don’t strive to become Premium Brands because Premium is hard. And moving from Value to Premium is even harder.

Half of being smart is knowing what you’re dumb at. ~ Solomon Short

Companies that try to become Premium Brands are like kids that try to have sex in high school. They all want to do it. They all claim they’re doing it. Few of them are actually doing it. And those who are doing it are doing it badly.

Once you start in the low-end in this country it is very hard to move up. ~ Ben Bajarin on Twitter

Trying to leapfrog from one brand category into another is like trying to leapfrog a unicorn. Very tricky. Very dangerous.

Question: Why Not Sell To Both Value And Premium Customers?

As I pointed out in last week’s article (Android is a stick shift and iPhone is an automatic transmission), while you can simultaneously appeal to both Value and Expert buyers, you cannot simultaneously appeal to both Value and Premium buyers. Not that many, many companies haven’t tried.

No man can serve two masters: for either he will hate the one, and love the other; or else he will hold to the one, and despise the other. ~ New Testament, Matthew 6:24

No Brand can serve two masters either.

It’s really, really tough to make a great product if you have to serve two masters. ~ Phil Libin, Evernote CEO

The ever present temptation is to chase sales by broadening one’s product portfolio, opening up distribution or even discounting products. This can cause real long term damage to the brand.

Never purchase beauty products in a hardware store. ~ Addison Mizner

People do not want to buy beauty products in a hardware store and they don’t want to buy Premium products from a Value Brand either. Do you go to K-Mart to buy high end goods? Do you go to Tiffany’s expecting to get a bargain?

Brands are always at risk of being caught in the deadly middle. Mix your brands, mix your message and your Brand will not appeal to both Value and Premium customers — it will appeal to none.

It is not wise to violate the rules unless you know how to observe them. ~ T.S. Eliot

Gucci and Pierre Cardin are recent examples of Premium/Luxury Brands that overexposed their Brand. Who wants to pay extra for clothes everybody else is wearing?

Samsung is an example of a Value Brand that tried to stretch to cover Premium as well. Who wants to buy a Premium phone from a Value provider?

Sparrows that emulate peacocks are likely to break a thigh. ~ Burmese proverb

Fire, water and markets know nothing of mercy. Stray from your brand and your customers will stray from you.

Some Examples Of Tech Brands

It is a test of true theories not only to account for but to predict phenomena. ~ William Whewell

Microsoft Windows (for PCs)
The Windows near-monopoly that existed for the past two decades was an anomaly, not the norm. When everyone has to buy the same product, the lines between Value, Premium and Veblen Brands become blurred. As soon as mobile computers provided significant competition to the Windows near-monopoly, customer segmentation quickly returned.

Microsoft Windows Phone
Microsoft has had numerous problems associated with its Windows Phone, but one of the problems was that Windows Phone did not neatly fit into any one Brand category. Microsoft wanted Windows Phone to be a Premium Brand that could compete with the iPhone but, because Windows Phone was so late to market, Microsoft initially sold the phone at discount prices in order to gain market share. The phone’s lack of identity was undoubtedly one of the reasons it was never able to gain traction against its Value and Premium competitors.

Windows Phone’s biggest challenge: it has neither scale of Android nor premium base of iOS. ~ Jan Dawson

IKEA

IKEA is not a tech brand, but I list it here because it is a fascinating comparison. IKEA is very Apple-like in design and very Value-like in Brand. It’s a unique combination, one that has led IKEA to a unique level of Brand identity and corporate success. IKEA has no secrets. Anyone can copy the products they sell. Yet no one does. IKEA has been doing the same thing for 40 years, but they also have virtually no competitors. They have carved out an identity for themselves that is virtually unassailable.

Amazon Fire Phone

Amazon is an amazingly successful Value Brand. Once again, I refer you to Jeff Bezos’ clearly stated company philosophy:

There are two kinds of companies, those that work to try to charge more and those that work to charge less. We will be the second. ~ Jeff Bezos

You cannot have a corporate mindset that you are going to charge less than the competition and then turn around and attempt to sell Premium products. Whenever Amazon strays into the Premium sector, it usually receives a bloody nose. The latest case in point is the Amazon Fire Phone.

I greatly admire Jeff Bezos who is far, far smarter than I am, but I think Amazon would be better served if it stuck to its knitting.

Fitbit Blaze

Fitbit is a Value brand. When they recently strayed into the premium sector with the Blaze, the stock market smacked them. Investors and consumers just didn’t believe Fitbit could play as an equal with Apple in the Premium wearable arena.

Samsung Galaxy

Samsung is a Value Brand that had aspirations of becoming a Premium Brand. Or perhaps it would be more accurate to say Samsung had aspirations of becoming both a Value Brand AND a Premium Brand.

Almost exactly four years ago, Samsung’s marketing boss sat down for an interview and made a claim that seemed almost comical at the time. … People had been obsessed with Apple’s iPhone line for long enough, and Samsung was going to shift their obsession to Galaxy phones. ~ Zach Epstein, BGR

And for a while, it seemed like Samsung had pulled off their audacious goal of challenging the Apple iPhone in the Premium sector. With a massive smartphone division and tens of billions of dollars to spend on marketing, Samsung’s star seemed to be waxing while Apple’s appeared to be waning. But it was not to be.

Samsung’s smartphone growth has come grinding to a halt. And it’s not because the company’s phones aren’t as good as they once were, or because Samsung’s advertising has slowed down. In both cases, the truth is quite the opposite — the Galaxy S6 and Note 5 are two of the most impressive smartphones that have ever existed, and Samsung’s marketing budget is still 11 digits each year. It’s also certainly not because Samsung is running out of room to grow; an estimated 1.4 billion smartphones shipped in 2015.

The bottom line is this: Samsung’s best smartphones simply aren’t exciting anymore. ~ Zach Epstein, BGR

By trying to sell to both the Value buyers and the Premium buyers, Samsung fell into the deadly middle. Apple stole away Samsung’s Premium customers from above while Xiaomi and others undercut Samsung’s Value proposition from below.

Google Android

Android is a Value Brand because, despite its many significant features, the feature that most distinguishes Android and its associated products and services from those of its competitors is lower price.

People think I’m insulting Google when I call them a Value Brand.

First, being a Value brand is not an insult.

Second, Google Android not only is a Value Brand, Google WANTS Android to be a Value Brand and NEEDS Android to be a Value Brand. Android’s purpose is to extend Google’s reach — to have Android on billions and billions of phones. The more people use Android, the more information Google has access to.

Google Android’s entire business model is based on value. They give away the software for free, which allows manufacturers to sell their phones cheaper, which allows more buyers to buy smartphones, which puts Google services in more pockets everywhere. It’s a brilliant business model that has succeeded brilliantly. To claim Android is, or should aspire to be, anything other than a Value Brand is to not understand Google’s purpose in creating Android.

To be fair, Google has tried and tried and tried to go up market with the Nexus phone and, while the press has often been all agog over them, the buying market has all but ignored them.

While industry writers like to talk about how Google has “80 percent” share with Android, the actual units of Google-branded devices that compete with Apple are quite negligible (0.1 percent, according to IDC), despite the huge share of media attention provided to it. The number of people buying Nexus phones is less than even Windows Phone, and you’d be hard pressed to find any reasonable person who actually believes that Microsoft materially competes with Apple in the smartphone market. ~ Daniel Eran Dilger, AppleInsider

At best, the Nexus is the equivalent of a concept car. At worst, it’s a sign of misguided strategic vision.

RIM Blackberry
In its day, Blackberry was definitely a premium product. It was a best-in-class emailing machine. Geeky, yes. But very powerful.

We think of the BlackBerry device as the greatest communication device on the planet, one which enables you — a push environment, a reliable device. It’s the platform that enables this. ~ Anthony Payne, Director of Platform Marketing, Research In Motion, 13 May 2011

The above was absolutely true in 2006, but it was absolutely untrue when the above was written in 2011. By then, the iPhone had supplanted Blackberry in the Premium smartphone category.

What happened to Blackberry was a technology paradigm shift. The iPhone was as different from the Blackberry as the steamship was from the sailing ship. The Blackberry was a premium “sailing ship” but, in the long run, it couldn’t even begin to compete with the Value, much less the Premium, smartphone steamships that followed it.

Once a new technology rolls over you, if you’re not part of the steamroller, you’re part of the road. ~ Stewart Brand

Disney

Disney is a Premium Brand. I include Disney because I see a lot of parallels between Disney’s Brand and Apple’s Brand. Disney holds a very tiny percentage of the theme park market, yet they have a commanding grip on the top of that market. Disney could easily afford to create hundreds of additional theme parks, but to do so would diminish, rather than enhance, their product’s appeal.

Apple Watch
Apple Watch Edition ranges in price from $10,000 to $17,000 and it is unquestionably a Veblen Good. The price of the Apple Watch Edition is significantly greater than the price of the Apple Watch Sport and the Apple Watch without a corresponding increase in quality or functionality.

The Apple Watch has displaced Rolex on a list of luxury global brands, as measured by analytics firm NetBase…. ~ Luke Dormehl, Cult Of Mac

I don’t know whether the Apple Watch Edition will actually out-luxury Rolex watches, but I do know Rolex is exactly the type of Veblen good that the Apple Watch Edition is competing against.

Next Week

There are two great rules of life: never tell everything at once. ~ Ken Venturi

Next week I will focus on Apple’s Branding. Is the iPhone truly deserving of its Premium status or is it merely using the smoke and mirrors of marketing to fool us into believing it is a premium product? Or perhaps the iPhone isn’t a Premium product at all, but is a Veblen good instead. Join me next week at which time I will fail to answer these questions and many, many more.

Podcast: Virtual Reality, GoPro and Alphabet Earnings

This week Bob O’Donnell, Ben Bajarin and Jan Dawson debate the opportunities and challenges for virtual reality, and discuss the recent earnings results from GoPro and Google’s parent company Alphabet.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

What is LTE-U and Should You Care?

What if wireless operators had the ability to add substantial capacity to their networks without having to pay billions of dollars for additional spectrum? And what if this also made your WiFi work better? Well, there has been quite a bit of work going on behind the scenes on something called LTE Unlicensed (LTE-U) and 2016 is going to be a pivotal year. Just this week, the LTE-U community cleared an important hurdle, with the FCC authorizing live tests of LTE-U at two Verizon facilities. So what is LTE-U, what is its status, who is it for, and what are its prospects for success?

LTE-U is a proposal to use commercial 4G LTE cellular services as we know them today, in 500 MHz of the 5 GHz unlicensed band, which is also used for WiFi. It is considered ‘unlicensed’ because the operators would not have to buy separate, licensed spectrum in order to operate LTE-U services. However, LTE-U only works alongside licensed wireless services. LTE-U is also referred to as Licensed Assisted Access (LAA), which represents the 3GPP’s effort to incorporate LTE-U into Release 13 by requiring Listen Before Talk, which is a standardized effort to ensure there is no interference with WiFi before LTE is invoked. LTE-U is sort of a “pre-LAA version” of LAA that might be permitted in the U.S. and some other countries.

LTE-U promises about 2x the range and capacity of current WiFi. The idea is it would work like WiFi but better (hence the term “carrier WiFi”) and as a seamless extension of cellular services on LTE-U equipped phones. For operators, LTE-U is part of their overall “carrier aggregation” strategy, which combines channels across their spectrum holdings to deliver higher speeds and more capacity. Operators gain the capacity at LTE-U venues while keeping traffic on their network, with the additional benefit of offloading traffic from the macro network. LTE-U would be used mainly for indoor locations and venues such as hotels, stadiums, universities, and casinos. On the equipment side, LTE-U compliant small cells/APs would have to be installed by the operator or venue owner. Only new phones and other devices equipped with an LTE-U chipset would work.

Qualcomm has been spearheading the LTE-U movement, clearly with the objective of selling more chipsets (on both the transceiver and user equipment sides). The LTE-U Forum and a consortium called Evolve, consisting of leading equipment vendors including Qualcomm, Ericsson, and Samsung, plus several carriers, have been pushing to move things along in the industry and with the FCC. The main obstacle is the potential interference with WiFi in the 5 GHz band. The WiFi crowd has been instrumental in securing additional channels for WiFi at 5 GHz from the FCC over the past couple of years and is concerned about the potential for interference from LTE-U. This is a valid concern, especially because LTE-U is the “stronger party” — sort of like a 6 foot, 250 pound football player getting on a field where everyone else is 5’8″ and 175 pounds. Qualcomm and others have done a lot of work to address the potential for interference and have shown many LTE-U simulations in an attempt to dispel these concerns. At a high level, the idea is that LTE would dynamically switch itself on and off in the unlicensed band, depending on the level of contention with WiFi.
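That adaptive on/off behavior can be sketched as a simple duty-cycle controller, loosely in the spirit of Qualcomm’s CSAT (Carrier Sensing Adaptive Transmission) approach. This is a hypothetical illustration with arbitrary parameters, not actual LTE-U logic:

```python
def lte_u_on_fraction(wifi_utilization: float,
                      min_on: float = 0.05,
                      max_on: float = 0.95) -> float:
    """Return the fraction of each cycle the LTE-U small cell transmits.

    The busier the channel looks (measured WiFi utilization, 0..1),
    the less airtime LTE-U takes, leaving the rest for WiFi.
    The bounds and the linear mapping are arbitrary, for illustration.
    """
    if not 0.0 <= wifi_utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    on_fraction = 1.0 - wifi_utilization
    return min(max_on, max(min_on, on_fraction))

print(lte_u_on_fraction(0.1))  # quiet channel: LTE-U takes most airtime
print(lte_u_on_fraction(0.9))  # busy channel: LTE-U backs off
```

The real contention is over exactly this kind of parameter: how sensitively, and how quickly, LTE-U must yield airtime before the WiFi community considers it a fair neighbor.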

In the U.S., Verizon and T-Mobile are the two operators leading the charge on LTE-U. It appears 2016 will be a year of testing and proving. There are still points of contention between the LTE-U community (which has formed the Evolve alliance) and the WiFi community (WiFi Alliance, WiFi Forward) over who does the testing, how it will be done, and what parameters are required to make LTE-U acceptable on both the network and the user experience sides. Then there is the FCC approval process, where the exact procedures, requirements, and timing are still uncertain. It’s not even clear to me to what extent the FCC would actually have to approve LTE-U.

On the equipment side, it looks like some of the network products will be available in 2Q/3Q. Another key is whether—and when—LTE-U will be available in handsets. The device OEMs have said very little publicly. Between the tests, approvals, and equipment commitments, certifications, and lead times, commercial LTE-U services aren’t likely before mid-2017 and probably into 2018.

But the real question is what the market is for LTE-U and what its role is given the continued expansion of alternatives, such as the better coexistence of WiFi and cellular with Hotspot 2.0/Passpoint and the deployment of more small cells, both indoor and outdoor. Certainly for the operators, LTE-U represents a cost effective way to expand capacity, especially for video, which is highly consumptive of the downlink bandwidth. It’s also a way to more fully leverage the Carrier Aggregation capabilities of LTE Advanced, which is an important part of the LTE roadmap for the next few years, in advance of 5G. And if Sprint (which is not a backer of LTE-U) is truly able to roll out a high-speed, high capacity ‘LTE Plus’ wireless network in certain cities, leveraging its 2.5 GHz spectrum, then LTE-U could be a counter weapon available to its competitors.

The full business case for LTE-U has still not been fleshed out. For example, what will be the division of cost responsibilities between the operator and the venue/enterprise for equipment? This is an issue that has held up the deployment of indoor small cells and other coverage/capacity enhancers. On the other hand, small cells that incorporate LTE-U in addition to LTE and WiFi could expand the use cases and market potential of small cells.

Even though the business case is a bit tenuous, we should also view LTE-U as a solution not in isolation, but in the increasingly Venn Diagram-esque world of wireless network solutions that include and incorporate cellular macro cells, small cells, and WiFi. For example, more work needs to be done to improve the user experience of moving between cellular and WiFi (see my November piece, “Cellular and WiFi Need to Get Along Better”), which is hopefully being incorporated into some of the work on LTE-U.

Additionally, the work being done on interference mitigation by Qualcomm and others will likely have broader benefits in a world of denser, heterogeneous wireless networks that include macro cells, small cells of various stripes, and WiFi. Mutual coexistence between licensed and unlicensed services, more effective management of traffic between cellular and WiFi, and a more seamless customer experience might be the ultimate beneficiaries.

Valuations and Trajectories

This week saw Alphabet pass Apple for the title of most valuable company, only to have the situation reverse again later in the week. In the brief period when Alphabet was out in front, I got a lot of calls and emails from reporters asking for comment about the significance of the change in leadership. By the time you read this, that lead may well have changed again and that’s really emblematic of this situation – it’s not as important as the many headlines this week would have you believe.

An entirely symbolic transition

The press loves these transitions because they’re wonderfully symbolic and it gives them hooks to hang much broader-based pieces on companies rising and falling in the public imagination. But symbolism is all that’s really going on here – there is no broader meaning to the valuations of specific companies at specific points in time beyond that symbolism. In fact, it’s arguably entirely coincidental, since investors don’t think in terms of relative total market capitalizations at all, but rather in terms of absolute value for specific companies.

What really matters is trajectory

This is true for any stock, but it’s particularly true for tech stocks: what really matters is trajectory. What I mean is the specific numbers any particular company reports in any given quarter are secondary – what investors care about is how these results look in the context of historic results and what the company is suggesting will happen next. In other words, it’s not so much about where you are now, but the difference between where you’ve been and where you’re heading. If the direction of your company is upward from a revenue and profitability perspective, you’ll garner a much higher valuation than if it’s downward, or flat. Hence, Apple’s record-breaking results were met with indifference, while its March quarter guidance further hit a stock that already had a fair amount of pessimism baked in.

On a long term basis, a company’s valuation should reflect its trajectory and this generally holds true. However, there’s also a fairly significant discount on stocks where the trajectory seems clear but may involve significant risk. Sorry to use Apple as an example again, but it’s the poster child for this phenomenon – Apple’s valuation multiples are much lower than comparable stocks precisely because investors believe (rightly or wrongly) there’s more risk involved in assuming its current trajectory will continue, compared with stocks with seemingly more predictable revenue and profit run rates. This is a big reason for Apple’s recent emphasis on its services business and the power of its installed base – Apple wants investors to see that at least part of its long term trajectory is more predictable than it appears because of its ability to generate additional revenue off the back of its installed base of a billion devices.
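To make the risk-discount idea concrete, here is a deliberately simplified sketch. It uses a textbook Gordon-growth model, where a company is valued at next year's earnings divided by the gap between the discount rate and the expected growth rate; higher perceived risk means a higher discount rate, and therefore a lower earnings multiple. All the numbers are hypothetical, chosen only to show the direction of the effect, not to model any real company.

```python
# Illustrative only: hypothetical numbers, not real valuations.
# Gordon-growth model: value = next year's earnings / (r - g), where
# r is the discount rate (higher = more perceived risk in the trajectory)
# and g is the expected long-term earnings growth rate.

def implied_multiple(discount_rate: float, growth_rate: float) -> float:
    """Price/earnings multiple implied by the model: (1 + g) / (r - g)."""
    return (1 + growth_rate) / (discount_rate - growth_rate)

# Two companies with the SAME expected growth, but investors perceive
# more risk (a higher discount rate) in the first one's trajectory.
risky = implied_multiple(discount_rate=0.12, growth_rate=0.03)        # ~11.4x
predictable = implied_multiple(discount_rate=0.09, growth_rate=0.03)  # ~17.2x

print(f"riskier trajectory:     {risky:.1f}x earnings")
print(f"predictable trajectory: {predictable:.1f}x earnings")
```

The point is not the model itself – a real analyst would use far richer inputs – but the direction: identical expected growth, higher perceived risk, lower multiple, which is exactly the discount described above.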

On a short term basis, expectations matter enormously too

Although trajectory is arguably the single most important factor in valuations, another important factor in the short term is expectations. Companies’ formal guidance often provides the basis for this, but companies that become known for overly conservative guidance may fall victim to over-exuberance on the part of financial analysts. The downside is that companies that perform exceptionally well may nonetheless be punished for failing to meet analyst consensus. Amazon and Microsoft’s results last week were a wonderful illustration of this point, especially because they were announced almost simultaneously. On paper, Amazon’s results were stellar – huge year on year growth, improving margins, a growing and increasingly profitable cloud business, and so on – but its stock was hit hard because the results were below what analysts were expecting. By contrast, Microsoft’s results were fairly humdrum, with a number of reasons for pessimism, but analysts cheered them anyway because they weren’t as bad as expected.

Valuations aren’t absolute

Ultimately, the stock market valuations of companies reflect analysts’ assumptions about the trajectories of those companies, their sustainability, and their predictability, rather than an objective measure of the current value of those companies. As such, comparing instantaneous valuations for two stocks and drawing broad conclusions from such comparisons is ridiculous on the face of it. And yet, we see so many overblown pieces about the significance of these transitions, all of which look a little silly the morning after, when the first- and second-place companies have switched places again. Yes, it’s worthwhile to have a conversation about why one company seems to be rising and another falling, but focusing too much on who’s in first place risks missing the point.

Rethinking the Value of the iPad Pro and Surface Pro

Ever since 2-in-1s were introduced about five years ago, I have been cool to this form factor. The idea of marrying a tablet and a keyboard was a disconnect for me. Part of the reason is I am a diehard laptop user and my history with laptops probably skews my thinking. My love affair with laptops goes back to the early 1980s when I used a pseudo-laptop, the Tandy TRS-80, as a portable word processor. Then in 1984, I was asked to work with IBM’s research group on what became their first full-fledged laptop. Ever since, a laptop has been my go-to computing device. I have used desktops from time to time but since I travel so much, a laptop really became my primary computing tool.

My initial foray into the 2-in-1 concept was Lenovo’s Yoga. Since the screen did not detach, I just used it as a traditional laptop, never using it in tablet mode. My first real experience with a tablet/keyboard combo came through Microsoft’s first Surface product. That experience was so bad, it put me off this type of device for a couple of years. I have to admit, I did use a Bluetooth keyboard with an iPad from day one and liked the mobile flexibility it gave for things like email and note taking. However, given my computing needs, it never replaced my laptop since I couldn’t do the kind of heavy lifting a true laptop can deliver.

Over the last two years, the technology to deliver a more robust 2-in-1 experience has gotten much better. There are two products on the market I think might be pointing us to the next major shift in portable computing. Microsoft’s Surface Pro and Apple’s new iPad Pro are, at the moment, the best of breed 2-in-1s. I will probably be dragged, kicking and screaming, toward these new designs. But after using both for some time now, I am starting to warm up to them.

I like the design and even the keyboard on my Surface Pro 3. Windows 10 makes this 2-in-1 a good mobile computer. It delivers the power of Windows and, even with its lack of touch-based apps, I can see how it could be a laptop replacement for some people.

But I still struggle with its layout and smaller fonts that come standard. I’m not convinced it could replace my Dell XPS 13 or my Lenovo Yoga as my full time portable computer. Call me old school when it comes to the Windows platform but traditional laptops still seem a better fit for my needs.

As for the iPad Pro, at first I really struggled with such a large iPad and its keyboard case. Although I am a power user on the Mac OS platform, I spend most of my mobile digital time on an iPad Air and am even more proficient with iOS. As I said, I used an iPad with a Bluetooth keyboard so I sort of knew what the experience would be like. But to be honest, I was not a big fan of Apple’s keyboard case and could never become comfortable using its keyboard. And since I am artistically challenged, even the pen was of little use to me. I did try to take notes using the pen but I can’t read my own handwriting so that was not a big draw either.

But by using my favorite Bluetooth keyboard, the iPad Pro started to live up to its potential for me. The 13″ screen is equal to the ones on my Dell and Lenovo laptops and, since I use iOS so much, using touch to navigate was very easy for me. Apple making iOS a rich mobile OS and adding great new features to it makes the iPad Pro a powerful alternative to my MacBook and Windows laptops.

When Tim Cook announced the iPad Pro, he said it could do as much as 80-90% of what anyone could do on a traditional laptop. On a recent trip, I decided to try that theory out. I only took my iPad Pro with me and used it as if it were my MacBook or a Windows laptop. I found, in general, Cook was right. I wrote email, texted, took notes, wrote and edited my columns. I used it for video, movies, music and Skype videos with clients. My only problem was that cut and paste is not as easy as it is in Windows 10 or Mac OS X, but that was minor compared to its overall capability.

Intel and Microsoft have a goal to make 2-in-1s and convertibles, or products like Lenovo’s Yoga, as much as 50% of the overall laptops shipped by the latter part of this decade. They are convinced the flexibility a tablet/laptop combo gives a person is so valuable that they, along with their OEM partners, are working hard to make these devices sleeker, more innovative and flexible so that even diehard laptop users may want to move over to this new form factor at some point in the near future. While I was skeptical of this goal at first, I am now starting to come around to this way of thinking and can see that perhaps a 2-in-1 or convertible really is the future of mobile personal computing. The jury is still out on how successful Microsoft, Intel and their partners will be in getting the majority of laptop users to move to these new form factors. But, after resisting it myself for some time, I can now see the value of 2-in-1s and convertibles and how they could become an important mobile computing hardware platform in the future.

What if Twitter Died?

Scrolling through my Twitter feed, I had an interesting thought the other day. What if Twitter just didn’t exist anymore?

Given all the recent troubles the company’s been going through, it’s no longer a completely unreasonable scenario.

Yet, for many in the tech industry, I have to imagine that borders on the unthinkable. Not because the practical reality of the business fading away isn’t a possibility, but because so much of their lives is caught up in the Twittersphere. For some, it would almost be like losing a leg—or worse. I mean, there are people and, indeed, entire companies whose very existence and value seem to be directly tied to their level of influence on Twitter—and not much else.

To its credit, Twitter has managed to create more than just another social network. The micro-blogging service has morphed into something many people seem to have built nearly their entire lives around. But for all of its attraction and pull, it can also be an incredibly addictive time suck that regularly draws people into minutes (or even hours) of distractions. Sometimes this leads to useful discoveries but, more often than not, it feels like a waste of time.

To be fair, Twitter is an extremely valuable service for discovering news in real time, finding out what people have to say, and, as a writer, promoting what I and my friends and colleagues have written or participated in to a wider audience.

But nearly ten years in, the service has also become a shouting gallery for “traditional” celebrities and a lot of people in the tech industry who somehow believe Twitter has made them celebrities.

Harsh? Sure. Reality? I think so.

In fact, this seems to be one of the fundamental problems of Twitter. It’s appealing to Hollywood, TV, music and sports celebrities as a means to interact more intimately with their fans and share the kinds of details they’d never provide to traditional celebrity media. It’s appealing to the tech industry as a mouthpiece for those who want to determine the course of what is or isn’t important. The digital taste-setters, so to speak.

But for mainstream business and consumer users? Not so much. Arguably, this is the biggest problem with Twitter—it can’t seem to stretch beyond its celebrity, celebrity follower, and tech roots. If you aren’t into celebrities or the tech industry, Twitter just isn’t that appealing, especially given all the other options for online social interactions.

Despite these points, I think the navel gazing value of Twitter to the tech industry is so high, I seriously doubt they’ll let Twitter actually die. Someone with enough money and enough self-interest will likely make sure that, no matter what, Twitter will continue in some shape or form. Eventually, its value may start to fade, as some have already started to argue, but at least the Twittersphere will have a few years to adapt and find new alternatives.

The fundamental challenge is that a publishing service essentially based on self-promotion, self-aggrandizement, and self-importance is, at some point, going to run into the wall of indifference. Not everyone cares to read about what the self-elected are doing all the time.

Real time publishing, real time interactions, and real time discovery are all incredibly important capabilities, especially in today’s split second society. But there is an increasingly wide range of alternatives for people to leverage and it’s not entirely clear to me that Twitter has all the tools it needs to weather the current climate.

As a reasonably long time, regular user of Twitter, I would be sad to see it go, but that doesn’t mean I can’t imagine life without it. I can and, increasingly, it seems many others are starting to see that potential too.

A Netbook, an iPad Pro, and the Surface Walk Into a Bar

(This article was originally posted for our Tech.pinions subscribers on Nov 12, 2015. We are reposting it today to give you a sense of the type of content we are providing to subscribers. We encourage you to visit this link to subscribe.)

It’s not something I talk about often, but I was right in the middle of the Netbook debacle. The Netbook category was an accident. It was not Intel’s intention to have a small, not very powerful, yet cheap “PC” enter the marketplace. Asus took a chip Intel wasn’t positioning for a clamshell form factor and made a tiny PC that ran Linux. While initial sales of this product were not large, other OEMs caught on and wanted to ship Windows on it. Both Intel and Microsoft thought this was a good idea to get new hardware onto the landscape but both of them prefaced this thinking with the caveat that these machines could not be “full powered” PCs. Meaning it needed to be clear they could not do everything a full powered PC can do.

From the outset I told both companies, in my analysis notes to them, this was a bad idea. It would uncover the dirty secret that most consumers do not do very much with their PCs. My firm had just done some dedicated research on PC behavior in consumer markets and the data we discovered at the time gave us the insight that consumers, on average, regularly use five pieces of software on their PCs, none of them CPU-intensive. My fear was these machines would be viewed as good enough for most mass market consumers and threaten the PC category as a whole with steep ASP declines. No one believed me. Sure enough, the chips in Netbooks got a little better, enough to watch good quality videos without skipping, for example. Microsoft eased up and let more of the capabilities of Windows onto the hardware and, boom, the category peaked at 40 million devices, all PCs under $200.

To add some perspective here, note on this chart of PC sales sliced by consumer and enterprise PC sales, the peak year for consumer PC sales also was the same year Netbooks peaked.

Screen Shot 2015-11-10 at 3.49.56 PM

Microsoft and Intel reacted quickly to this, with the help of some smart guidance, and brought it back under control, essentially killing the category. But what the Netbook fiasco did was let the cat out of the bag — consumers are not pushing the limits of their PCs. They are doing simple things like watching movies, browsing the web, checking email, messaging friends, etc. They aren’t creating the next major novel, they aren’t exporting cells from Excel. They aren’t making a two hour Hollywood motion picture. Their needs are simple and the Netbook, an underpowered, small, cheap, internet connected, clamshell PC, was good enough for them.

I tell you this because it applies to how I think about the positioning of the iPad Pro and the Surface Pro 4. No, I don’t think either of those products is anything like the Netbook. Quite the contrary. However, both represent the needs of and an opportunity for two different markets. The Surface brings all the things a hard-core, technologically literate PC user needs in an ultraportable form factor. It lets you do everything a tech-literate user can do and push the boundaries with the computing tasks those users want. You can plug it into an external monitor and do even more. The Surface is a PC and exists as a form factor option for those who know how to use and drive a PC like a pro. But remember what I said about the Netbook. That PC user, who can drive a PC like a pro, is not the mass market. Not even close. That’s where I’ve always felt the iPad comes in.

The iPad is certainly more powerful than a Netbook and its software much more capable than Netbook software ever was. However, a central question I was wrestling with during the brief Netbook era was, why are consumers not doing more with their PCs? Even those who had a top of the line notebook or desktop in that era were still only using a small fraction of the machine’s capabilities. What I uncovered was they simply didn’t know how. The PC was too complex, too burdensome; they were afraid of breaking it and then having to spend hours on support trying to fix it. Many of the consumers we studied and surveyed at the time did not, generally, have positive things to say about their PC experience.

Then, the smartphone hit the scene. The harsh reality is mainstream consumers come closer to utilizing the maximum capabilities of their smartphones today than they ever did, or do, with their PCs. I think this is a tragedy. Not because of all the things they do with their smartphone and not a PC, but because humans are capable of so much more with digital tools and creativity. Yet most don’t engage in it. Hardware and software companies need to give consumers the tools to easily, and I stress easily, use these tools to their maximum potential. Desktop operating systems, like Windows and OS X, are for the professionals. Mobile operating systems are for the masses. The promise of something like the iPad and the iPad Pro, and even where Android can go on tablets, or laptops, or even desktops, is to empower the masses to do MORE than they can on their smartphones with a computing paradigm that focuses on simplicity but still yields sophisticated results.

Part 1: Android is a Stick Shift and iOS is an Automatic Transmission

On January 10, 2016, I wrote an article entitled “Platforms — Past, Present and Future”. The comments to the article made it clear to me that there was a great deal of confusion surrounding the role that branding plays in tech. This really got me thinking, and what was supposed to be a short, one-off article morphed into the brutally long 4-part series you see before you.

A healthy male adult bore consumes each year one and a half times his own weight in other people’s patience. ~ John Updike

Today’s article uses an analogy to examine why Android does not seem to neatly fit into any one branding category. The series goes rapidly downhill from there and then sort of peters out altogether.

Enjoy!

STICK SHIFT vs. AUTOMATIC TRANSMISSION

I think the disconnect between Android Advocates and iPhone fans can best be explained by using an historical analogy.

When automobiles first appeared on the market, they all had stick shifts (manually operated transmissions). Stick shifts used a driver-operated clutch, engaged and disengaged by a foot pedal, to regulate torque transfer from the engine to the transmission, along with a hand-operated gear selector.


Although the automatic transmission was invented in 1921, it didn’t really become popular until the 1950’s and 1960’s, and it didn’t become the standard until the 1970’s.

In the 1960’s, vehicles with automatic transmissions had advantages over stick shifts, but they had disadvantages too. Vehicles with automatic transmissions were:

1) Easier to use; but they
2) Cost more to buy; and they
3) Cost more to fuel.

As a result of these tradeoffs, the automobile marketplace broke into three distinct types of buyers.

1) Premium customers who valued the convenience of an automatic transmission more than the money it took to buy, fuel and maintain their more expensive vehicles.

2) Value customers who might have aspired to own an automatic transmission vehicle but who either couldn’t afford one or who didn’t think the increased convenience was worth the increased cost.

3) Aficionados ((a person who is very knowledgeable and enthusiastic about an activity, subject, or pastime)) who far preferred the control and power provided by the stick shift over the ease of use provided by the automatic transmission.

ANDROID vs. IPHONE

Car buyers prior to the 1950’s were analogous to PC buyers prior to the introduction of the iPhone in 2007.

— Prior to the 1950’s, automobiles were mostly equipped with manual transmissions and one had little choice but to use a stick shift.
— Prior to 2007, personal computers were mostly notebooks and desktops and one had little choice but to use the Microsoft Windows operating system.

In 1995 there were 250 million PCs on the planet. Almost every one of them was owned by an early adopter, a tech enthusiast, and were either purchased by a business or for a business purpose. ~ Benedict Evans

— Automobiles with stick shifts suited the avid automobile owner just fine, but it suited the casual, non-expert automobile owner not at all.
— Personal computers with Microsoft Windows suited the avid computer user just fine, but it suited the casual, non-expert personal computer owner not at all.


The casual car driver and the casual personal computer user didn’t choose to use the stick shift or the Windows operating system. They had to use them, so they tolerated them.

There was no golden age when everyone was programming their own computers. Everyone who *had* a computer programmed it. Not the same thing. ~ Fraser Speirs on Twitter

Just as trucks evolved into cars and cars gave us a choice between manually operated stick shifts and automatic transmissions, desktop computers running Microsoft Windows evolved into touch operating systems which gave us a choice between Android phones and iPhones.

The advance of technology is based on making it fit in so that you don’t really even notice it, so it’s part of everyday life. ~ Bill Gates

People really don’t have to understand how computers work. Most people have no concept of how an automatic transmission works, yet they know how to drive a car. ~ Steve Jobs

As an aside, it should be noted that neither trucks nor desktops are going away. They still exist and they still do the heavy lifting in their respective fields. They’re just no longer the dominant players.

Old tech has a very long half-life. ~ Benedict Evans on Twitter


iPhones have advantages over Android phones, but they have disadvantages too. iPhones are:

1) Easier to use; but they
2) Cost more to buy; and their
3) Apps cost more to buy.

As a result of these tradeoffs, the smartphone market has broken into three distinct types of buyers.

1) Premium customers who value the convenience of the iPhone more than the money it takes to buy and maintain it.

2) Value customers who might, or might not, aspire to own an iPhone but who either can’t afford one or don’t think the increased convenience is worth the increased cost.


3) Aficionados who far prefer the control and power provided by Android over the ease of use provided by the iPhone.


Disdain

The phones using the Android operating system appeal to both the high and the low end of the smartphone buying spectrum. Both types of Android buyers view iPhone iFans as iFools, but believe iFans are iFoolish for very different iReasons. The high-end Android owners have disdain for the Apple hardware, software and ecosystem. The low-end Android owners have disdain for Apple’s prices.

There is no love lost between us. ~ Miguel De Cervantes


This is hardly a one-way street. iPhone iFans, in turn, have disdain for both the high-end Android geeks, who don’t know what they’re missing out on, and the low-end Android value shoppers, who settle for less.

Let’s face it. If the high-end geeks were really intelligent, they’d be using iPhones.

If the French were really intelligent, they’d speak English. ~ Wilfrid Sheed

And if the low-end discount devotees had any taste, they’d be using iPhones too.

Question: What’s the difference between an Android owner and a catfish?

Answer: One is a bottom-dwelling, scum-sucking scavenger. The other is a fish.

The upside to being an iPhone owner is enormous. You get to look down on so very many different types of people. But the downside of having better taste than everyone else is that people seem to think you are pretentious.

Never criticize iPhone owners. They have the best taste that money can buy. ((Never criticize Americans. They have the best taste that money can buy. ~ Miles Kington))

Personal Choice and Mr. Market

It requires less character to discover the faults of others than it does to tolerate them. ~ J. Petit Senn

So which is better: stick shifts or automatic transmissions; Android or iPhones?

There’s two kinds of people in this world: those who think their opinion is objective truth, and… there’s one kind of people in this world. ~ Joss Whedon on Twitter

It doesn’t have to be either-or. It can be, and is, merely a matter of personal preference.

A man is getting along on the road to wisdom when he begins to realize that his opinion is just an opinion.

What a bunch of selfish jerks we are, assuming that what we personally like should be liked by all.

Selfishness is not living as one wishes to live, it is asking others to live as one wishes to live. ~ Oscar Wilde

Besides, it’s the marketplace — not Android advocates or iPhone iFans — that is the ultimate arbiter. Every time you spend money, you’re casting your vote for the kind of world you want. But every time someone else spends money, they are casting their vote for the kind of world they want too. And unlike political elections, multiple candidates can win.

The great thing about capitalism is that we all get to decide for ourselves what products are necessary, important, trivial or pointless. ~ Benedict Evans (@BenedictEvans)

In smartphones, we have at least two clear winners: Android and the iPhone.

We find comfort among those who agree with us – growth among those who don’t. ~ Frank Howard Clark

So if you don’t like the Android ‘stick shift’, you can always use an iPhone ‘automatic transmission’ instead.

And if you don’t like the iPhone ‘automatic transmission’, you can always use the Android ‘stick shift’ instead.

And if you don’t like either of the choices that the free market has provided, there’s always a third alternative:

You can stick it.

Next Time

There are two kinds of people in this world: Those who need closure and

The next article in the series will be on Tech Branding. The third article will be on whether the iPhone qualifies as a premium brand, a luxury brand or a Veblen good. The fourth article will ponder whether the iPhone’s brand could survive a double-blind ‘taste’ test, and whether the iPhone’s brand is Coke, New Coke, Pepsi, or just a lot of caramel-colored carbonated water.

Podcast: Kantar Smartphones; Apple, Microsoft, Facebook and Amazon Earnings; FCC Set Top Box Rules

This week Bob O’Donnell, Ben Bajarin, Jan Dawson and special guest Carolina Milanesi of Kantar WorldPanel discuss Kantar’s latest smartphone data, analyze the earnings reports from Apple, Microsoft, Facebook and Amazon, and debate the potential impact of the FCC’s efforts to open up standards around set-top boxes.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

The Augmented Reality Enterprise Opportunity

The Consumer Electronics Show (CES) in Las Vegas this year catapulted virtual reality (VR) into mainstream consciousness. VR clearly has an interesting future ahead, primarily as a consumer-centric gaming and media play. But when I think about the future of work, augmented reality (AR) is where the greatest opportunities materialize.

Before diving in, it’s important to define the differences between these two new “realities”. VR is an entirely immersive experience. When you put on a VR headset, either a purpose-built device such as the Oculus Rift or HTC Vive, or a phone-based product such as Samsung’s Gear VR or Google’s Cardboard, you leave behind your current, physical reality. The headset completely obscures your actual surroundings, placing you in a virtual setting. While some VR products give you the ability to move physically around in a confined area, engaging in VR is primarily an activity that you do in a fixed location. In other words, you don’t walk down the street with a VR headset on your head.

AR hardware is different from VR in that you can still see your surroundings when you’re wearing an AR headset, because it augments your physical surroundings with digital information instead of replacing them. A rudimentary form of augmented reality is a heads-up display (HUD) such as the original Google Glass, which simply displays information on a small screen near your eyes. Microsoft’s HoloLens represents a more advanced form of AR, sometimes called mixed reality, in which the technology places digital objects into your field of view where you can then manipulate them. Microsoft calls these holograms. So for example, in one Microsoft HoloLens demonstration, the wearer can build digital Minecraft structures onto a real, physical table in front of them.

The ability to create this level of augmented reality requires a great deal of compute power, as the device is processing a large number of constantly-changing variables (I’ll dive into this topic, as well as the challenges around input/output, more fully in a future column). Because you can still see your surroundings, a person can wear an AR headset when walking down the street.

The Social Challenge

While you can wear AR outside, it’s not always wise to do so. You’ll remember that many people had a strong adverse reaction to seeing others wearing Google Glass in public. Some didn’t like the privacy implications of a person with a head-mounted camera looking back at them; others felt wearers were rude and distracted (akin to somebody constantly checking their smartphone). However, the use of augmented reality in workplace scenarios allows the technology to largely sidestep these social ramifications. Just as we expect co-workers to use their computers when in the workplace, it’s reasonable to expect colleagues with AR gear to use it there, too.

And that’s a key takeaway when we think about the future of AR in the workplace. It’s going to be a new tool. Just like the typewriter gave way to the personal computer, for some workers an AR device will replace the sometimes awkward use of a notebook, tablet, or smartphone. For others it will be additive, used for new processes and sometimes for client-facing scenarios. As I talk to companies working in this space, the theme that consistently emerges is they are creating products that will fundamentally change the way future generations will get work done. It’s not just about creating gee-whiz visuals; it is about driving new ways of thinking, creating, and demonstrating ideas. And, while it’s less sexy to talk about, it’s also going to contribute to creating more efficient and safe workplaces.

Important Early Verticals

The potential impact of AR in business is staggering. At IDC, we’re currently building out our first forecast, but it is clear that in the long-term view there are potentially very few businesses that won’t be impacted by AR technology. In the near term, however, there is a short list of key verticals where I expect AR to land first. They include:

Healthcare: AR will impact healthcare in many ways, from how medical students learn about the human body, to consults with doctors in far off places, to less-invasive surgeries driven by a physician’s ability to “see” into a patient without opening them up. Obviously, any device that makes its way into hospital settings must pass stringent requirements that vary from country to country.

Design/Architecture: One of the key challenges for any designer is to get into the physical space of the object or structure they are creating. Today, most create three-dimensional objects on two-dimensional screens. AR will change this and, just as importantly, it will give designers the opportunity to display their work to prospects and clients in a complete fashion, too.

Logistics: Running a massive warehouse or shipping center well is all about creating improved efficiencies and safety for workers. For example, AR can help lead a warehouse worker safely and quickly to the right location to pick up an item, at which point the system can remove the item from inventory and then send the person and the item to the right place for packing and shipment.

Manufacturing: Imagine a company with multiple factories in different locations, but just a few mechanical engineers tasked with keeping the lines running inside those plants. With AR, a company could employ technicians wearing the technology in each plant, allowing the higher-paid mechanical engineers to view and interact with the machinery from wherever they happen to be in the world. AR headsets can also serve up blueprints, instructions, and real-time data, freeing workers to use both hands for the task.

Military: From training to battlefield communication to field medicine to enhanced situational awareness, AR is likely to play a crucial role for the military. Of course, the barrier to entry here is quite high, as you can’t send troops into hostile situations with untested equipment.

Services: This is likely to be the area where most consumers gain their first access to high-end AR technology. Imagine a situation where you rent both the equipment necessary to do a home improvement project as well as the AR equipment to gain access to an expert who walks you through the more complicated steps.

Next-Generation AR Applications

Over the course of the next few years, much attention will be paid to the AR hardware that ships and, of course, this is an important element. Without good hardware, you can’t have great experiences. But, as with the rise of smartphones, applications are where the rubber meets the road. There isn’t a clearly dominant platform in AR yet, but all the usual suspects are at work here. Microsoft’s HoloLens is a fully featured Windows 10 device. Many devices shipping today run Android or have a custom interface on top of it. I haven’t seen anything from Apple yet, but recent acquisitions point to the company exploring both AR and VR.

Creating an ecosystem where developers can build and sustain useful apps will be key. What works on a phone, tablet, or notebook screen will not necessarily work in an AR scenario. So, just like AR itself, the apps that drive it will require new ways of thinking about the problems to be solved. It’s the traditional chicken-and-egg issue: You need great hardware to entice developers to create apps for that hardware, but you need great apps to get people and companies to invest in the necessary hardware. Expect to see and hear more about groundbreaking AR software in the near future.

Meanwhile, over the next 12 months, I expect virtual reality to get the lion’s share of media attention. At some point, though, the opportunity around AR will become abundantly clear. It’s not a matter of whether AR will have a significant impact on business; it’s just a matter of exactly when.

Absorb or be Absorbed

Back in December, Uber and Facebook announced a partnership which would allow users of both services to hail an Uber car from within Facebook Messenger. This deal was emblematic of a pair of strategic imperatives that are becoming increasingly clear and that, in this case at least, were highly complementary. The two strategic imperatives are these:

  • If you aspire to capture user attention, you must absorb as many other activities as possible into your domain.
  • If you aspire to provide a single great piece of functionality, you have to make sure it’s as broadly available as possible within the various domains where users spend their time.

Absorb

Facebook is very much a company in that first category and, as I’ve written about before, it’s why Facebook is building what looks increasingly like a sort of Meta OS, a layer on top of true mobile operating systems subsuming more and more of their functionality. Instant Articles, Facebook Video, Messenger, the recently added M assistant, and so on are all examples of Facebook taking content or activities that have historically resided outside of Facebook and bringing them into the domain it controls and monetizes. As it brings more of these activities into its sphere of influence, Facebook keeps users within its walls and, while there, can continue to serve them ads. Even when it can’t keep them within Facebook-branded properties like the Facebook and Messenger apps, it can capture their attention in Instagram, WhatsApp, or Oculus virtual reality experiences.

Or be absorbed

For Uber, the challenge is that it provides a single piece of functionality, which has historically resided in its own app. But users have no reason to open that app unless they’re ready to hail an Uber car, even though activities users perform in other apps might well serve as precursors to hailing a car. As such, its strategic imperative is to place functionality into as many other domains as possible, including the Facebook Messenger app. Since that partnership, Uber has been working on a broader implementation of this strategy: earlier this month, it announced Uber Trip Experiences, which uses Uber’s APIs to allow developers to create functionality within their apps that integrates with Uber in some way. Some examples are music apps that create playlists just long enough to keep you entertained during your ride, news updates to occupy you during the trip to your next meeting, recommendations for places to eat when you arrive at your destination, and a thermostat that turns on automatically when you’re heading home. Uber’s functionality is fairly compelling as a standalone service and app but, if it’s to be truly useful, it needs to be absorbed and integrated into a myriad of other experiences on users’ smartphones.
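To make the “playlist just long enough for your ride” idea concrete, here is a minimal sketch of the logic a music app might run once it receives trip context from a ride-hailing API. The trip data structure and field names here are hypothetical stand-ins, not Uber’s actual API schema; only the selection logic is illustrated.

```python
# Hypothetical sketch: size a playlist to a trip's estimated duration.
# The `trip` dict stands in for data a ride-hailing API (such as Uber's)
# might provide; its field names are assumptions for illustration only.

def build_playlist(tracks, trip_minutes):
    """Greedily add tracks until the playlist covers the trip duration."""
    playlist, total = [], 0
    for title, minutes in tracks:
        if total >= trip_minutes:
            break
        playlist.append(title)
        total += minutes
    return playlist, total

trip = {"status": "in_progress", "eta_minutes": 12}  # would come from the trip API
library = [("Track A", 4), ("Track B", 3), ("Track C", 5), ("Track D", 6)]

songs, length = build_playlist(library, trip["eta_minutes"])
print(songs, length)
```

The point is not the algorithm itself but the pattern: the ride-hailing service exposes trip state (status, ETA) to third-party apps, and those apps adapt their own experience around it.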

Complementarity and competition

In the case of Facebook and Uber, these strategic imperatives proved wonderfully complementary – Uber wants to be everywhere, and Facebook wants to absorb everything, so the two came together. Where the two imperatives become complicated is when companies that have been complementary in the past start to compete, especially if they both aspire to absorb rather than be absorbed. Facebook has benefited enormously from the rise of Android and iOS and their respective app stores and user bases. But it now aspires to absorb many of the activities those operating systems, or other app vendors on those platforms, have historically considered their domains, and the owners of those operating systems are starting to fight back.

Apple has created its own music service to keep users in its domain when they’re listening to music, rather than entering a domain controlled by Spotify or Google. It has also created a News app to capture user attention that might otherwise go to Facebook, and its increasing investment in iMessage is arguably a counterstrike to Facebook’s investment in Messenger. Apple’s efforts with Siri and Spotlight in recent years have been a reaction to Google’s dominance of search and a desire to limit Google’s absorption of user time on Apple devices. Meanwhile, Google is itself building a layer on top of its operating system that can keep users in the Google domain even while they’re in other apps, in the form of Google Now on Tap. As Google, Apple, and Facebook’s absorption aspirations grow, they will increasingly find themselves competitors where they had been complements, and the cracks in their various relationships will eventually begin to show.

Absorbers will become increasingly powerful

Those companies that seek to be absorbers rather than the absorbed will, in turn, gain increasing power over single-function applications, which will need to play by their rules in order to enter their domains. App developers are already accustomed to the vagaries of Apple and Google when it comes to playing in their app stores, but Facebook as an absorber is quite a different animal, and one developers will have to learn to reckon with. These absorbers will also exert growing power over their users, who will have to go through increasingly controlled and managed elements of their respective platforms to get to the content they want. This, in turn, will raise questions about the motivations of these absorbers and whether their incentives are aligned with those of their users.