Magic Leap and the Curse of the Overpromise

Magic Leap may well be the best-known augmented reality company in the world that has yet to ship a product. Well-funded by a long list of backers, the company has often very publicly suggested that its technology is better than that of AR competitors with currently shipping products. Early on, the company famously showed a video on its website of a full-sized whale breaching out of the floor of a school gymnasium in front of dozens of spectators. This video, and others like it, set a very high bar in terms of expectations.

As the company slowly moves toward shipping its first actual product (it says the Magic Leap One Creator Edition, for developers, will ship this summer), it has been showing more real-world demonstrations of its technology to the world via Twitch-hosted live streams. Those demonstrations, it’s safe to say, have not lived up to those sky-high expectations.

Tech Demos vs. Product Creation
Part of the mystique around Magic Leap, which has survived the company’s share of negative press over the years, is that the small number of people who’ve seen the top-secret demonstrations of the technology say it’s amazing. And it likely is. But because so few have seen it, and because those who do must sign nondisclosure agreements, we don’t know much about the setup of those demonstrations. It’s entirely possible that behind closed doors, Magic Leap can make lifelike whales appear. But does it take a roomful of hardware to make it happen? Can they do so with a consumer-friendly, shippable product? Out of the gate, the early answer certainly seems to be no.

In the most recent live stream, company representatives talked about the process of building out the first Magic Leap experiences and showed a simple animation of a golem that rises out of the floor plane to hurl rocks at you. Many on the live stream, and in the days since, have expressed frustration about the quality of this demonstration, and the company’s continued unwillingness to share details about the hardware’s supported field of view, pricing, and other specifications (although we do now know it will use an NVIDIA Tegra X2).

The reality is, what Magic Leap is doing in these live streams is showing the early building blocks of a new way of computing. Sometimes the company calls it spatial computing; other times, like Microsoft, it calls it mixed reality. I consider it all augmented reality, but whatever you call it, it’s new, and it was never going to spring fully formed from Magic Leap’s labs. This process was always going to take time, and the work of outside developers, too. But because the company came out of the gate with such brash claims, the world is finding its early reveals more than a little disappointing. And the company has no one to blame for this but itself.

That said, I find the live streams filled with interesting information about how the device sees the room and how it collects input from the user. In the golem demo, we watch a rock float past the user and crash into the ceiling. Think about that: a demo where the room becomes an aspect of the gameplay itself. It’s all very exciting and important. Unfortunately, the broader reaction is one of disappointment. It’s the curse of overpromising and underdelivering.

Impact on Broader AR Industry
Over the years I’ve been alternately skeptical and optimistic about what Magic Leap will ultimately ship. While the current demonstrations may not wow people the way its early videos did, what they show me is a company that is attempting to reset expectations and putting in the work to potentially build something very interesting.

And while Magic Leap and its executives have often been willing to take jabs at the competition in the space, I’ve found most of those competitors are quietly hoping that the company eventually delivers a viable product. At present, the AR industry is relatively small, and most of the people inside it know each other. A common theme among them is that they all see the technology as hugely important. And while each would like to be the ultimate winner, most think it’s good for the entire industry if a big, well-funded firm such as Magic Leap succeeds.

It also goes without saying that most AR headset competitors are eager to see the company ship an actual product, so they know what they are up against in the market. In some regards, it seems the rest of the industry has been holding off on launching new versions of their own products until the Magic Leap One ships. One example: Microsoft’s long-anticipated HoloLens 2.

Questions About Commercial Focus
It’s pretty clear that those early Magic Leap videos promised more than the company can deliver, at least with its first product. My other big criticism of Magic Leap’s early strategy is that in addition to building an entirely new hardware and software platform, it also seems intent upon creating a fair amount of content, too. And that early content seems focused primarily on consumers, not commercial users.

Anyone who follows the tech industry understands the allure of the consumer market and the scale it offers. But there are several reasons most AR technology companies are currently more focused on the commercial side of things. First, enterprise has already identified a long list of killer applications for AR, which include but aren’t limited to see-what-I-see service use cases, training and knowledge transfer, design and manufacturing, and sales and marketing. Second, forward-thinking companies are willing to spend real money on AR hardware and software if it drives a clear return on investment (and AR typically does). Finally, getting people to wear a goofy-looking headset is easier if their job requires it.

So while I can’t help but feel that Magic Leap has been too aggressive in trying to launch a hardware platform, a software platform, and a content studio all at once, I do wish the company were paying more attention to the commercial AR opportunities in the world. I can tell you that the commercial-focused AR software vendors I’ve talked to would love to have Magic Leap as an option for their products to run on down the road.

On the same day that Magic Leap announced it would ship the first developer kits this summer, the company also unveiled a partnership with AT&T, which will sell the headset in its retail storefronts. It’s an interesting arrangement in that there’s no indication the product itself will have an LTE connection. But like many before it, AT&T clearly sees potential in Magic Leap. In addition to making an investment, AT&T will let consumers go into its stores later this year to test out the Magic Leap One for themselves. I look forward to trying the headset myself and finally getting a better answer as to whether the technology has the potential to eventually deliver on those early promises.

Answering the Critics of Device as a Service

The concept of Device as a Service (DaaS) has been gaining steam for a few years now, and my team at IDC has done extensive work around this topic. In fact, we’re currently wrapping up an in-depth study on the subject that includes a massive multi-country survey of commercial adopters, intenders, and resistors, as well as a forecast of the impact of DaaS on the commercial PC, tablet, and smartphone markets. While the momentum in this space is clear, there are still plenty of doubters who like to throw out numerous reasons why DaaS won’t work, and why it won’t bring about the benefits to both buyers and sellers that I’ve outlined in previous columns here and here. Let’s examine some of those criticisms.

There’s Hype, But Is Anybody Really Buying?
The hype defense is probably the most common pushback we get when it comes to DaaS, and it’s easy to understand why the average IT professional or even industry insider might be skeptical. But the fact is, we’ve now surveyed hundreds of IT decision makers (ITDMs) and talked to most of the major providers, and this isn’t just an interesting idea. We continue to find that DaaS is very appealing to a wide range of organizations, in numerous countries, and across company sizes. The idea that a company can offload some of the most mundane tasks its IT department deals with while right-sizing the devices it deploys, gathering user analytics, and smoothing out costs is very compelling. And as the industry has moved quickly from a focus purely on PCs to one that includes additional devices such as smartphones and tablets, interest and adoption will continue to grow.

It’s important to note that even a company completely sold on DaaS won’t make this type of transition overnight. Most companies will start small, testing the waters and getting a better understanding of what works for their organization. In the meantime, there are existing hardware, software, and services contracts that could still have months or even years left before they expire. Like many things in technology, you can expect DaaS adoption to happen slowly at first, and then very fast.

DaaS Costs Are Too High
One of the key criticisms leveled at DaaS is that today’s offerings cost too much money per seat. It’s hard to argue with this logic: if an organization thinks DaaS costs too much, then it costs too much, right? But often this perception is driven by an incomplete understanding of what a provider includes in the DaaS offering. Today’s contracts can run from just the basics to something much more complete. Yes, a contract with a full range of services such as imaging and migration, deployment and monitoring, break/fix options, and secure disposal can be pricey. But what critics often fail to realize is that their company is paying for these services in some way or another today. Either they’re paying their own IT staff to do it, or they’re paying another service organization to do bits and pieces of it (and they’re likely not tallying all the costs in one place). Alternatively, some of these tasks, such as secure disposal, aren’t happening at all, which could end up costing the company a lot more money in the end.
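To make the “tallying all the costs in one place” point concrete, here’s a deliberately simplified sketch. Every figure in it is a made-up placeholder, not IDC data; the point is the comparison, not the numbers.

```typescript
// A hypothetical per-seat monthly cost tally. All figures below are
// placeholders, illustrating how DaaS-bundled services are usually
// already being paid for, just scattered across budgets.

const inHouseMonthlyPerSeat = {
  hardwareAmortization: 25,  // device cost spread over its lifecycle
  imagingAndDeployment: 8,   // IT staff time for setup and migration
  monitoringAndBreakFix: 12, // help desk, repairs, spare units
  secureDisposal: 2,         // end-of-life handling (often skipped entirely)
};

const inHouseTotal = Object.values(inHouseMonthlyPerSeat).reduce(
  (sum, cost) => sum + cost,
  0,
);

const daasQuotePerSeat = 50; // a hypothetical all-in monthly quote

console.log(`In-house: $${inHouseTotal}/seat vs. DaaS: $${daasQuotePerSeat}/seat`);
// Compared against the hardware line item alone ($25), the quote looks
// expensive; compared against the full tally ($47), it's much closer.
```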

Now, with all that said, it’s entirely possible that at the end of the day a company may well end up paying more for its entire pool of services under a DaaS contract. At that point, the first question to ask is: Am I buying my DaaS service from the right vendor? If the answer is yes, then the follow-up questions should be: Are the benefits of managing all these services through a single provider worth the extra cost to my organization? Does it free my IT organization to do other important jobs? The answer may well be yes.

Refurbs Will Negatively Impact New Shipments
One of the key promises of DaaS is that it will shorten device lifecycles, which has always struck me as one of the win/win aspects of this concept. Companies win by replacing employees’ hardware more often thanks to predetermined refresh cycles. Instead of finding ways to keep aging devices around for “one more year” to push out capital expenditures, DaaS allows companies to equip employees with newer machines that drive increased productivity, offer improved security, and lead to improved user satisfaction. From the hardware vendor side, the benefits are obvious: faster refresh rates that become more predictable over time.

But what about all those PCs collected at the end of a two- or three-year DaaS term? Won’t they cannibalize shipments of new PCs? The fact is, today there’s already a huge business around refurbished PCs, tablets, and smartphones. What the DaaS market could do is create a much more robust, high-quality market of used commercial devices. As with the automobile leasing market, these devices receive regular maintenance, which means a higher-quality used product. DaaS providers can redeploy (or sell) these into their existing markets at lower-than-new prices and still drive reasonable profits. Or they can target emerging commercial markets where even ultra-low-cost devices are a tough sell today.

Ultimately, I believe that DaaS will prove to be a net positive in terms of overall shipments for the industry. Even if that proves incorrect, I’m confident it will drive greater profitability per PC for vendors participating in the DaaS market.

DaaS Will Never Appeal to Consumers
It’s true that to date DaaS has been focused on the commercial segment, but it’s only a matter of time before we see consumer-focused plans come to market. Apple’s success with the iPhone Upgrade Program, where you pay a monthly fee that includes AppleCare+ coverage and the promise of a new iPhone every year, shows there’s already an appetite for this. It also proves that a robust secondary market doesn’t necessarily cannibalize a market (and Apple profits greatly from its resale of one-year-old iPhones). You can easily imagine Apple adding additional services to that program and extending it to include two- or three-year upgrade paths for iPads and Macs.

And so it’s not hard to imagine the likes of Apple, HP, Dell, Lenovo, and others eventually offering consumer-focused DaaS products. To many, the idea of paying a single monthly fee to one company to eliminate most of the hassle of managing their devices, and to ensure no budget-busting costs when it’s time to replace an old one, would be too good to pass up.

PTC Demonstrates Augmented Reality’s Real-World Value

This week I attended PTC’s LiveWorx18 conference in Boston, where the company demonstrated some of the ways its customers are leveraging AR technology today. PTC is an interesting company because it has a wide range of solutions beyond AR, and it has done a good job of telling a story that shows how industry verticals can utilize its Internet of Things (IoT) technology as well as its Computer Aided Design (CAD) products to drive next-generation AR experiences.

Vuforia-Branded AR
Back in 2015, PTC purchased the Vuforia business from Qualcomm. Vuforia is a mobile vision platform that uses a device’s camera to give apps the ability to see the real world. It was among the first software development kits (SDKs) to enable augmented reality on a wide range of mobile devices, long before Apple launched ARKit or Google launched ARCore (today Vuforia works with both of those platforms). Developers can use it to create AR apps for Android, iOS, and UWP, and as a result, there are tens of thousands of Vuforia-based apps in the real world.

In addition to the Vuforia Engine, PTC also offers Vuforia Studio (formerly ThingWorx Studio), which lets users create AR experiences such as training instructions from existing CAD assets via a simple drag-and-drop interface (I’ve watched PTC executives create new AR experiences on stage during events using this software). Vuforia View (formerly ThingWorx View) is a universal browser that lets users consume that Studio-created content. And Vuforia Chalk is the company’s purpose-built remote-assistance app that enables an expert to communicate with an on-site technician, and annotate a shared view, through an AR interface. Most companies today are utilizing PTC-based technology through mobile devices such as tablets and smartphones already present in the enterprise. But a growing number are testing headsets from partners including Microsoft, RealWear, and Vuzix.

In addition to these shipping products, the company recently acquired technology that it will deliver in future products: Waypoint enables a person wearing an AR headset to create step-by-step AR experiences, and Reality Editor lets users later edit those experiences for consumption. Training is one of the key use cases for AR across a wide range of industry verticals, and this type of software will make it much easier for companies to streamline knowledge transfer between experienced workers and new hires.

IoT Plus AR
I’ve long suggested that one of the powerful things about AR is its potential to let us humans see into the Internet of Things. PTC demonstrated this ability during its keynote. It also showed a very cool example of moving a digitally created control switch from an AR interface to a physical-world control panel (in this case, the notebook screen of an IoT-connected machine). The company also built a real, working manufacturing line on the expo floor that demonstrated the integration of IoT, AR, and robots.

There are plenty of companies doing good work in AR today, but one of the things that makes PTC stand out is that its software is straightforward to use, it helps companies leverage many of the digital assets they already have, and it promises to help them make sense of the data generated by the IoT.

I attended several of the working sessions during the show, including one on connecting AR to business value. PTC isn’t just talking the talk: During that session, the presenter gave real-world advice to IT decision makers trying to utilize AR in areas such as service, sales, and manufacturing.

The Future Requires Partners
One of the things I like about PTC and its CEO Jim Heppelmann is that the company is confident in its product line but humble enough to know that partnerships are key to building out new technologies such as IoT and AR. In the weeks leading up to the show, and on the keynote stage, the company announced strategic partnerships with companies including Rockwell Automation, ANSYS, and Elysium. And earlier this year it announced a key partnership with Microsoft (PTC even had Alex Kipman, Microsoft Technical Fellow, present the day-two keynote).

As a software company, PTC depends upon hardware partners to bring the next generation of hardware to market. It knows that AR on mobile devices is powerful, but AR on a headset is game-changing for workers who need their hands free to get work done. Like me, executives at PTC are eager, and a bit impatient, to see new hardware from companies such as Microsoft, Magic Leap, and others ship into the market. This hardware is going to be key to moving AR forward in the enterprise. I look forward to seeing what PTC and its partners can do with it once that finally happens.

Qualcomm Announces New Snapdragon for PCs, Kills Partners’ Near-Term Prospects

At this week’s Computex show in Taiwan, Qualcomm announced the next generation of silicon for the Windows on Snapdragon platform. The new chip is called the Snapdragon 850, and rather than simply repurposing an existing high-end smartphone processor, the company has cooked up a chip modified specifically for the Windows PC market. Qualcomm says the new chip will provide a 30 percent system-wide performance boost over the previous generation. I’m pleased to see Qualcomm pushing forward here, as this area will eventually evolve into a crucial piece of the PC market. However, announcing it now, with an eye toward new products appearing by year’s end, puts its existing hardware partners in a very tough spot.

Tough Reviews, and a Short Runway
Qualcomm and Microsoft officially launched the Windows 10 PCs powered by the Snapdragon Mobile PC Platform in December 2017. The promise: By using the Snapdragon 835 processor and related radios, Windows notebook and detachable products would offer instant startup, extremely long battery life, and a constant connection via LTE. Initial PC partners included HP, Lenovo, and ASUS.

Reviews of the three initial products have been mixed at best, with many reviewers complaining about slow performance, driver challenges, and app compatibility issues. But most also acknowledge the benefits of smartphone-like instant-on, the luxury of connectivity beyond WiFi, and battery runtimes measured in days rather than hours. I’d argue that the technical issues of rolling out a new platform like this were unavoidable. However, the larger, self-inflicted wound here was that nobody did a great job of articulating who these products would best serve. This fundamental issue led to some head-scratching price points and confused marketing. I talked about the missed opportunity around commercial users back in December.

There was also the issue of product availability. While the vendors announced their products back in December, shipments didn’t start until 2018. In fact, while HP’s $1,000 Envy X2 started shipping in March, neither Lenovo’s $900 Miix 630 nor ASUS’s $700 NovaGo TP370QL is widely available even today. Amazon recently launched a landing page dedicated to the Always-Connected Windows 10 PC with a bundled option for free data from Sprint for the rest of 2018. The ASUS product moved from pre-order to available on June 7; Lenovo’s product still has a pre-order button that says it will launch June 27th.

That landing page appears to have gone live just days before Qualcomm announced the 850 in Taiwan and promised new hardware from partners, including Samsung, by the end of the year. Now, if I’m one of the vendors who threw support behind Windows on Snapdragon early, only to have Qualcomm Osborne my product (that is, undercut its sales by preannouncing its successor) before I’ve even started shipping it, I’m not a happy camper.

Might as Well Wait
As a frequent business traveler, I find the Windows on Snapdragon concept very appealing. I realize that performance won’t come close to what even lower-end x86 processors from Intel and AMD offer, but I’m willing to make that trade for the benefits. As a result, I expect that for the first few years these types of PCs will be better as companion/travel devices than as outright replacements for a traditional PC. In my case, I could see one competing for space in my bag with the LTE-enabled iPad Pro I carry today. Except that when I carry the Pro, I still must carry my PC because there are some tasks I can’t do well on iOS.

Both the Lenovo and HP products are detachable tablets, whereas the ASUS is a convertible clamshell, which is the form factor I’m most eager to test. I was close to pulling the trigger on the ASUS through Amazon when the Qualcomm 850 news hit. Buying one now seems wasteful, with new, improved products inbound by the holidays. And that’s not the kind of news vendors want to hear.

Now, many will say that this is the nature of technology, that something new is always coming next. And while that’s essentially true, this move seems particularly egregious at a time when Qualcomm and Microsoft are trying to get skeptical PC vendors to support this new platform. Plus, we’re not talking about a speed bump to a well-established platform; this is a highly visible initiative with an awful lot of skeptics within the industry. Qualcomm may have decided that the poor initial reviews warranted a fast follow-up; one hopes its existing partners were in on that decision.

Bottom line: I continue to find the prospects of Windows on Snapdragon interesting, and I expect the new products based on the 850 chip will perform noticeably better than the ones running on the 835. But if Qualcomm and Microsoft expect their partners to continue to support them in this endeavor, they’ve got to do a better job of supporting those partners in return.

Despite PC Market Consolidation, Buyers Still Have Plenty of Options

The traditional PC market’s long-term decline gets plenty of press, but one of the less-talked-about trends inside that slide is the massive share consolidation among a handful of players. A typical side effect of this type of market transformation is fewer options for buyers, as larger brands swallow up smaller ones or force them out of business. While this has certainly occurred, we’ve seen another interesting phenomenon that helps offset it: new players entering this mature market.

Top-Five Market Consolidation
The traditional PC market consists of desktops, notebooks, and workstations. Back in 2000, the market shipped 139 million units worldwide, and the top five vendors, Compaq, Dell, HP, IBM, and NEC, constituted less than 40% of the total market. Fast forward to 2010, near the height of the PC market, and units had grown to about 358 million worldwide for the year. The top five vendors were HP, Dell, Acer, Lenovo, and Toshiba, and they represented 57% of the market. Skip ahead to 2017, and the worldwide market had declined to about 260 million units. The top five, now HP, Lenovo, Dell, Apple, and Acer, represent about 74% of the total market.

Market consolidation in mature markets such as Japan, Western Europe, Canada, and the United States has been even more pronounced. In 2017 the top five vendors in Japan represented 77% of shipments; in Western Europe, it was 79%; in Canada, it was 83%, and in the U.S. it was 89%. Markets traditionally considered emerging, however, weren’t far behind. In the Asia Pacific (excluding Japan), the top five captured 69% of the market in 2017; in Latin America, it was 71%, and in Central and Eastern Europe plus the Middle East and Africa it was 76%.

Category Concentration
If we drill down into the individual device categories at the worldwide level, we can see that desktops remain the area of the market with the least share concentration among the top five in a given year. In 2000 the top five represented 38% of the desktop market; in 2010 it was 46%, and in 2017 it was 61%. Desktops continue to be where smaller players, including regional system integrators and value-added resellers, can often still compete with the larger players. In notebooks, the consolidation has been much more pronounced: in 2000 the top five represented 57% of the market; in 2010 it was 67%, and in 2017 it was 82%. Interestingly, in the workstation market (which grew from 900,000 units in 2000 to 4.4 million in 2017), the top five have always been dominant, with greater than 99% market share in each period.

Another trend visible inside each category over those years is the evolution of average selling prices. At a worldwide level, the average selling price of a notebook in 2000 was $2,176; in 2010 it had declined to $739, and by 2017 it had increased to $755. Over those same periods, desktops went from $1,074 to $532 to $556. Workstations were the only category whose ASP continued to decline, dropping from $3,862 to $2,054 to $1,879. I’d argue consolidation itself has played a relatively minor role in the ASP increases for notebooks and desktops, as competition in the market remains fierce. The larger reason for these increases is that both companies and consumers now plan to hold on to their PCs longer than they have in the past, and as a result, they’re buying up to get better quality and higher specifications.

New Entrants in the Market
All of this market consolidation might lead you to believe that today’s PC buyers have fewer choices than they did in the past. And this is true, to some extent. A consumer browsing the aisles at their local big-box store or an IT buyer scanning their online options will undoubtedly notice that many of the vendors they purchased from in the past are no longer available. But the interesting thing is that a handful of new players have moved in, and many are putting out very good products.

There’s Google’s own Pixelbook, which demonstrates just how good the Chromebook platform can be. Microsoft continues to grow its product line, now offering notebooks and desktops in addition to its Surface detachable line, showcasing the best of Windows 10. And there are the mobile phone vendors such as Xiaomi and Huawei, each offering notebook products, with the latter in particular fielding a very good high-end product. It’s also notable that none of these vendors has targeted the high-volume, low-margin area of the market. All are shipping primarily mid- to high-end products in relatively low volumes.

As a result, none of these newer entrants has come close to cracking the top five in terms of traditional PC shipments. But I’d argue that their presence has helped keep existing vendors motivated and has increased competition. I’d also say that the top five vendors are producing some of their best hardware in years.

As the traditional PC market’s decline slows and eventually stabilizes (the first quarter of 2018 was flat year over year), competition will intensify, and consolidation is likely to continue. It will be interesting to see how these newer vendors compete to grow their share, and how the old guard fights to gobble up even more, utilizing its massive economies of scale. Regardless, the result will be a boon for buyers, especially those in the growing premium end of the market, who should continue to have plenty of good hardware options from which to choose.

Microsoft Pushes Developers to Embrace the MS Graph

Microsoft talked about a lot of interesting new technologies at this week’s Build developer conference, from artificial intelligence and machine learning to Windows PCs that work better with Android and Apple smartphones, to some smart new workflow features in Windows 10. But one of the underlying themes was the company’s push to get developers to better leverage the Microsoft Graph. This evolving technology shows immense promise and may well be the thing that keeps Microsoft front and center with consumers even as it increasingly focuses on selling commercial solutions.

Understanding the Graph
The Microsoft Graph isn’t new (it originated in 2015 within Office 365 as the Office Graph), but at Build the company did a great job of articulating what it is and, more importantly, what it can do. The short version: the Graph is the API for Microsoft 365. More specifically, Microsoft showed a slide that said the Graph represents “connected data and insights that power applications. Seamless identity: Azure Active Directory sign-in across Windows, Office, and your applications. Business Data in the Graph can appear within your and our applications.”

Microsoft geared that language to its developer audience, but for end users it means this: whenever you use Microsoft platforms, apps, or services (or third-party apps and services designed to work with the Graph), you’ll get a better, more personalized experience. And that experience will get even better over time as Microsoft collects more data about what you use and how you use it.
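To make “the API for Microsoft 365” concrete, here’s a minimal sketch of what a Graph call from a third-party app can look like. It assumes you’ve already obtained an OAuth 2.0 access token from Azure Active Directory (for example, via the MSAL library); the helper name and the choice of endpoint are illustrative, not taken from Microsoft’s Build materials.

```typescript
// A minimal sketch of calling the Microsoft Graph REST API directly.
// Token acquisition is omitted; pass in a valid Azure AD access token.

const GRAPH_BASE = "https://graph.microsoft.com/v1.0";

async function getRecentMessageSubjects(accessToken: string): Promise<string[]> {
  // /me resolves to the signed-in user; /messages returns their mail.
  const response = await fetch(`${GRAPH_BASE}/me/messages?$top=5`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!response.ok) {
    throw new Error(`Graph request failed: ${response.status}`);
  }
  // Graph returns collections wrapped in a { value: [...] } envelope.
  const data: { value: Array<{ subject: string }> } = await response.json();
  return data.value.map((message) => message.subject);
}
```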

The Microsoft Graph may have started with Office, but the company has rolled it out across its large and growing list of properties. Inside Office 365 alone there’s SharePoint, OneDrive, Outlook, Microsoft Teams, OneNote, Planner, and Excel. In Azure, Microsoft’s cloud computing service, the Graph-enabled Azure Active Directory controls identity and access management within an organization. Plus, there are Windows 10 services, as well as a long list of services under the banner of Enterprise Mobility and Security. And now that the company has rolled the Graph into so many of its own products, it is pushing outside developers to begin utilizing it, too.

Working Smarter, and Bridging Our Two Lives
The goal of the Microsoft Graph is to drive a truly unique experience for every user. One that recognizes the devices you use, and when you use them. One that figures out when you are at your most productive and serves up the right tools at the right time to help you get things done. One that eventually predicts what you’ll need before you need it. None of it is quite as flashy as asking your digital assistant to hold a conversation for you, but it’s the type of real-world advance that technology should be good at delivering.

What’s also notable about the Microsoft Graph is that while it focuses almost entirely on work and productivity, these advances should help smooth friction outside of work, too. If we work smarter, perhaps we can work less. Implicit here is Microsoft’s acknowledgment that while it still has consumer-focused businesses such as Xbox, most people will interact with its products in a work setting. That said, most of us have seen the lines between our work and non-work lives blur, and the Graph should help drive continued and growing relevance for Microsoft as a result.

Don’t Forget Privacy
Of course, for all of this to work, Microsoft must collect a large amount of data about you. In a climate where people are starting to think long and hard about how much data they are willing to give up to access next-generation apps and services, this could be challenging. That’s why, throughout Build, Microsoft executives including CEO Satya Nadella made a point of driving home the company’s stance on data and privacy. Nadella called privacy a human right, and in discussing the Microsoft Graph both on stage and behind closed doors, executives Joe Belfiore and Kevin Gallo noted that this information ultimately belongs to the end user, and it is up to Microsoft to keep it private and secure.

The privacy angle is one I expect to see Microsoft continue to push as it works to leverage the Graph in its ongoing battles with Google and Facebook. (I expect Apple will hammer home its stance on the topic at the upcoming WWDC, too.) In the meantime, it will be interesting to see if Microsoft’s developers buy into the promise of the Graph, and how long it will take for their subsequent work to come to fruition. By next year at this time, we may be hearing less about the potential of this technology, and more about end users enjoying the real-world benefits.

Don’t Call It a Comeback: Convertibles Shine in Growth-Challenged PC Market

There haven’t been a lot of bright spots in the PC industry over the last few years. With year-over-year shipment declines, things can often seem a little bleak. But one PC category enjoying strong growth is the convertible notebook. In 2017 the convertible category grew by 28% year over year at the worldwide level. Compare that to traditional notebooks, which declined by nearly 4%, and traditional desktops, which declined by more than 6%. Even more notable: the convertible category’s five-year compound annual growth rate (CAGR) through 2017 was 72%.
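For readers who want to check the math, the CAGR formula is simple. Here’s a quick sketch applying it to the shipment history cited later in this piece (roughly 800K units in 2012 growing to 12.2 million in 2017); the function name is mine, and the inputs are the rounded figures from the text.

```typescript
// Compound annual growth rate: (end / start)^(1 / years) - 1.
function cagr(start: number, end: number, years: number): number {
  return Math.pow(end / start, 1 / years) - 1;
}

// Convertible shipments, in millions of units, 2012 to 2017:
console.log((cagr(0.8, 12.2, 5) * 100).toFixed(0) + "%"); // ≈ 72%

// And IDC's forecast of nearly 20M units by 2022, from 12.2M in 2017:
console.log((cagr(12.2, 20, 5) * 100).toFixed(0) + "%"); // ≈ 10%
```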

Old School Form Factor
Convertible notebooks have a special hinge that lets a user “convert” from a traditional clamshell orientation by rotating the screen all the way around into a tablet-like configuration. The form factor itself has been around for a very long time. The first convertibles came on the scene when touch was effectively a bolt-on feature of Windows, and as a result, they were slow to catch on with a mainstream audience.

The category saw some increased interest when Microsoft leaned into touch with the ill-fated Windows 8. However, that OS was very much a reaction to the rise of the tablet. Or, more specifically, the iPad. So while some PC vendors were experimenting with convertibles, much of the industry’s focus was on what we now call detachables, which are devices with a removable first-party keyboard. Led by Microsoft’s push into hardware with the original Surface, detachables seemed at the time a much more sensible response to the iPad. They focused on being a good tablet most of the time and could function as a notebook when you attached a keyboard. The problem with the convertibles of that time was that while they were good notebooks, they made lousy, oversized tablets.

True Believers: Lenovo and Best Buy
Between 1999 and 2006, convertibles grew from a 20K-unit-per-year market to one that moved about 900K units. From 2006 until 2011, volumes increased to well over one million units per year. In 2012, the year Microsoft launched Windows 8, volumes dropped dramatically, down to about 800K units, before rebounding in 2013 to over 2.2 million units. Lenovo owned nearly 39% of the market that year and enjoyed year-over-year shipment growth of 254% versus a market increase of 174%.

And while other vendors chased Microsoft’s Surface with their own detachables, driven in part by strong detachable forecasts from firms such as IDC, Lenovo continued to argue, both in private and in the market, that the convertible also represented a strong opportunity. In 2014, the convertible market doubled again to more than 4.8 million units (Lenovo owned 38%). The company’s share of the category peaked at 42% in 2015, on total market shipments of over 7 million units. By this time, HP, Dell, and other PC vendors had recognized the opportunity and shifted more resources toward convertibles.

By 2017, the form factor had very much come into its own. Silicon advances meant convertibles could be increasingly thin, and as the tablet wave receded, more home and commercial PC buyers realized that the convertible’s ability to be a great notebook and a serviceable tablet was what they needed. In 2017 the total market reached 12.2 million units, led by Lenovo, HP, and Dell. (It’s worth noting that during that same year, detachables grew to 21.9M units, led by Apple, Microsoft, and Samsung.)

While Lenovo’s commitment to the convertible form factor was key, there was another instrumental player in the growth of the convertible market: Best Buy. The giant U.S. retailer showed interest in both detachables and convertibles, but saw the latter as the larger opportunity, and pushed its vendor partners to help it grow the category. Since 2014, the U.S. region has represented anywhere from 40 to 49% of the worldwide market for convertibles, and Best Buy has moved a sizeable portion of that total each year.

Strong ASPs and a Bright Future
So, in a market where most categories are trending downward, convertibles have been the rare growth story. But what makes convertibles even more interesting to the PC industry is that they also tend to carry a notably higher average selling price (ASP) than most other PCs. For example, in 2017 the worldwide ASP for a convertible was $796, versus $645 for a traditional notebook and $505 for a traditional desktop. The only PC form factor with a higher ASP in 2017 was the ultraslim category at $936.

And the convertible category shows no signs of slowing down. Most of the major PC vendors continue to push new products here, and IDC’s forecast shows continued strong growth over the next five years. In fact, IDC’s latest numbers give convertibles the strongest five-year CAGR in the PC market at 10%. By 2022 the category should grow to nearly 20 million units per year.

And honestly, this may be too conservative. Many in the industry believe that at some point in the future, the bill of materials on a convertible will drop low enough to let vendors turn all but the cheapest notebooks into convertibles. If that happens, these numbers will be much higher, although the resulting ASPs will undoubtedly be much lower. Either way, I look forward to seeing where the industry takes this form factor next.

As Consumer Virtual Reality Lags, Commercial Interest Grows

The virtual reality headset market has taken its fair share of lumps in the last 18 months, as the industry has struggled to find the right combination of hardware, software, and pricing to drive consumer demand. But while consumers are proving a hard sell, many in the industry have found an increasing number of companies willing and eager to try out virtual reality for a growing list of commercial use cases. A recent IDC survey helps shed some light on this important trend.

Early Testing, and Key Verticals
IDC surveyed 500 U.S. IT decision makers between December 18, 2017, and January 19, 2018. One of the key takeaways from the survey: about 58% of IT buyers said their company was actively testing VR for use in the workplace. That number breaks down like this: more than 30% said they were in early testing stages with VR, almost 18% said they were in the pilot stage, close to 7% said they were in early stages of deployment, and about 3% said they had moved into late-stage deployment of the technology.

Early testing represents the lion’s share of that figure, and obviously that means different things to different people. In some companies it means a full-fledged testing scenario with different types of hardware, software, and services. In others, it may well mean somebody in IT bought an HTC Vive and is playing with it. But the fact that so many companies are testing at all is important to this nascent industry. It also means more than one-quarter of respondents said they were in a pilot stage or later with VR inside their company.

I have written at length in a previous column about the various industries where we see VR playing a role. In this survey, respondents from the following verticals had the highest response rates around VR: education/government; transportation, utilities, construction, and resources; and manufacturing. When we asked respondents who in their company was driving the move to embrace VR, IT managers were the most prominent answer (nearly 32%), followed by executives (28%) and line-of-business managers (nearly 17%).

Key Use Cases, and a Demand for Commercial Grade
Understanding how companies expect to use VR is another key element of the industry moving to support this trend. We asked respondents about their current primary use cases for VR, and the top three were product design, customer service, and employee training. The most interesting of the three to me is employee training. We’ve long expected VR to drive training opportunities related to high-risk jobs and high-dollar equipment. Think training firefighters and doctors, as well as people who operate million-dollar machines. But VR is quickly moving beyond these types of training to include a much broader subset of employees. VR can speed knowledge transfer for everyone from retail salespeople to auto body repair specialists to new teachers. Many companies quickly realize that VR not only speeds onboarding of new employees but can be a big cost saver as well, by cutting down on training-related travel and other expenses.

One of the key challenges for commercial VR rollouts to date is the simple fact that almost all of the hardware available today is distinctly consumer grade. I wrote earlier this year about HTC’s move to offer a more commercial-friendly product with the Vive Pro, which includes higher resolution and better ergonomics, and will soon offer a wireless accessory to cut the tether. Beyond these types of updates, however, commercial buyers want hardware that’s more robust, that can stand up to the rough usage a workplace dishes out. So when we asked respondents if they would be willing to pay more for commercial-grade hardware, a whopping 80% said yes.

Biggest VR Roadblocks
While the survey data points to interest in VR among many companies, clear use cases, and a willingness to pay, bringing the technology into the workplace still faces numerous roadblocks. When we asked respondents about the biggest roadblocks to VR adoption within their company, the top answers included a lack of clear use cases, hardware cost, software cost, and services cost. So while the broad use cases for VR may be obvious, many IT decision makers are clearly having difficulty articulating them within their own company. And while many express a willingness to pay more for commercial-grade hardware, the cost of hardware, software, and services is still a major blocker within many companies.

It’s early days for VR in commercial settings, and many of these roadblocks will disappear as the technology improves, use cases crystallize, pricing comes down, and the return on investment of using VR comes into clearer view. In the meantime, the industry needs to move to embrace the growing demand for commercial VR, making it easier for companies ready to take the next step.

Facebook’s Oculus Go Looks Great, But Will People Buy It?

I recently had the opportunity to test Facebook’s upcoming Oculus Go virtual reality headset. Announced last year and due to ship later this year, the device made waves because Facebook plans to sell the standalone headset, which works without a PC or smartphone, for $200. My hands-on testing showed a remarkably polished device that yields a very immersive experience. But Facebook has yet to articulate just how it plans to market and sell Oculus Go, so its success is far from assured.

High-quality Optics and Sound
When Facebook first announced Oculus Go and its price point, many presumed the device would deliver a VR experience more comparable to today’s screenless viewers (such as Samsung’s Gear VR) than to a high-end tethered headset such as Facebook’s own Oculus Rift. While it’s true that the hardware inside Oculus Go may not measure up spec-for-spec to high-end rigs connected to top-shelf PCs, the device itself is a testament to what’s possible when one vendor tightly integrates the hardware and the software. It’s clear that Facebook and hardware partner Xiaomi have done a masterful job of tuning the software to utilize the hardware’s capabilities.

I spent about 20 minutes in the headset and was amazed at how easy it was to wear, how great the optics looked, the high quality of the integrated audio, and the functionality of the handheld controller. I have tested most of the VR hardware out there, and this was among the most immersive experiences I’ve had in VR. That’s an incredible statement when you consider the cost of the hardware, and the fact that it is inherently limited to three-degrees-of-freedom motion tracking (high-end rigs offer six degrees).

Facebook has slowly been rolling out details about the hardware inside Oculus Go, including the next-generation lenses that significantly reduce the screen-door effect that mars most of today’s VR experiences. The company has also talked about some of the tricks it employs to drive a higher-quality optical experience while taxing the graphics subsystem less, leading to better battery life and less comfort-destroying heat.

One of my key takeaways from the demonstration was that with the Oculus Go, Facebook has created an immensely comfortable VR headset, and I can’t overstate the importance of that. Today, even the most die-hard VR fans must contend with the fact that if they’re using a screenless viewer such as the Oculus-powered Gear VR with a Samsung smartphone, they can only stay in it for short periods before the heat emanating from the smartphone makes them want to take it off. Heat is less of an issue with tethered headsets, but the discomfort of the tether weighing on the headset means there are limits to just how much time you can spend fully immersed in those rigs, too.

But Can They Sell It?
So the Oculus Go hardware is great, and the standalone form factor drives a unique and compelling virtual reality experience. But the question remains: How is Facebook going to market and sell this device, and is there enough virtual reality content out there to get mainstream customers to lay down $200?

To date, Facebook hasn’t said much publicly about how it intends to push Oculus Go into the market, and through which channels. The company undoubtedly learned a great deal about channels from its successes (and failures) with the Oculus Rift. The bottom line is that, for the foreseeable future, people really want to try out virtual reality before they buy it. Oculus Go should be significantly easier to demonstrate in-store than a complicated headset tethered to a PC, but how will Facebook incentivize the channel? What apps will it run? Who will ensure that the demo devices are clean and operational?

When I talk to Oculus executives, their belief that virtual reality is an important and vital technology is immediately clear. Often it feels as if they see its ascension as a certainty and just a matter of time. But for the next few years, moving virtual reality from an early adopter technology to something the average consumer will want to use is going to take herculean marketing, education, and delivery efforts. With Oculus Go, Facebook has a key piece of the puzzle: a solid standalone device at a reasonable price. Now it needs to put into place the remaining pieces to ensure a successful launch.

The Evolution of the Wearables Market

The Apple Watch had a very good 2017, with shipment volumes growing 56% year over year, catapulting the product to the top of the wearables market in terms of both shipment volumes and revenues, according to IDC data. This has led many to suggest that there’s no real wearables market, just an Apple Watch market. But that’s far from the truth, as recent year-end data proves. While Apple clearly leads the market, there are plenty of other interesting developments occurring in this still-growing space.

Smart vs. Basic
Apple’s strong year helped accelerate one evolutionary change occurring in the wearables market: a share shift toward smart wearables. A smart wearable is one that can run third-party apps; a basic wearable runs only integrated apps. Most of the original fitness trackers shipping into the market were basic wearables, whereas the Apple Watch, with its third-party-app-supporting watchOS, entered the market as a smart wearable back in 2015.

IDC began tracking the wearables market in 2013, when basic wearables constituted 83.5% of the market and smart wearables made up the remaining 16.5%. By 2017 the smart wearable segment had grown to encompass almost 30% of the market. Market pioneer Fitbit shipped exclusively basic wearables until 2016, when it launched its first smart wearables in the form of smartwatches. Later that year Fitbit bought assets from smartwatch pioneer Pebble to help accelerate its evolution. In 2017, 4.4% of the company’s total shipments fell into the smart category. While Fitbit’s shift toward smart wearables hasn’t been without its challenges, the company is reacting to two key market forces: consumer demand for smartwatches, and the rapid average-selling-price decline of basic wearables.

Back in 2013, the average basic wearable sold for about $119, but as more vendors have entered the space, including a flood of China-based vendors, the average selling price has declined to $88. China’s Xiaomi, which overtook Fitbit to grab the number-one spot in the basic wearable category, had an average selling price of just $16 in 2017. Over that same span, the ASP for smart wearables has gone the opposite direction, increasing from $218 in 2013 to $375 in 2017.

Interestingly, while consumer demand for smart wearables has grown, developer appetite for creating the third-party apps that define the category has seemingly declined. When Apple launched the first Apple Watch, many developers rushed to put out mini versions of their iOS apps for the watch. But the hardware and software limitations of that first device led to poor performance of those apps. Today’s Apple Watch hardware and watchOS 4.0 offer a much-improved platform for apps, but many developers have slowed or stopped development of Apple Watch apps. It’s not clear yet whether Apple can reverse this trend, or if doing so is even a priority for the company. Over at Fitbit, the company continues to work to integrate features of the Pebble smartwatch ecosystem into its smartwatch platform.

Shifting Form Factors
In addition to tracking smart versus basic wearables, IDC also captures a wide range of form factors. It’s the growth of new types of wearables that has kept the basic category from ceding even more share to smart wearables than it has. To date, the only products to qualify for the smart category have been smartwatches and smart wristbands. The broader basic category, which includes basic watches and wristbands, also includes clothing, earwear, and modular products.

Modular wearables are products that can be worn on different parts of the body depending upon accessories. Fitbit’s third shipping product was the Fitbit One, a modular device that users inserted into a clip and wore on the belt. The Misfit Shine could be worn on a strap on the wrist or ankle, or around the neck as a pendant. Back in 2013, the modular segment constituted about 37% of total basic shipments; by 2017 it represented just 1.6%. Basic watches and basic wristbands have seen their share of the market decline too, although not as dramatically.

The two categories of basic wearables that have seen dramatic growth are clothing and earwear. Clothing with wearable technology typically focuses on fitness or health features; earwear are products that offer wearable functionality beyond standard Bluetooth connectivity. So, for example, today IDC counts the Bose SoundSport Pulse in the wearable category because it includes heart-rate tracking features. To date, we’ve excluded the Apple AirPods from the category, but future iterations with additional functionality could change that.

In 2014, clothing represented just 0.1% of basic wearable shipments, and earwear 0.3%. At the close of 2017, clothing represented 2.8% of total basic shipments (2.3M units), for a year-over-year growth rate of 79% and an average selling price of $62. Earwear grew 129% to reach 2.1% of total basic shipments (1.7M units), with an ASP of $198. It’s early days in these categories, and looking ahead, IDC is forecasting dramatic growth for both.
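As a rough sanity check, those unit figures line up with the market totals cited elsewhere in this piece: 115.4M total wearables in 2017, with smart wearables at almost 30%, leaving roughly 70% basic. A quick sketch, with the approximations mine:

```typescript
// Cross-checking the clothing and earwear unit figures against the
// 2017 market totals cited in this piece.

const totalWearables2017 = 115.4; // millions of units
const basicShare = 0.7;           // approximate (smart took almost 30%)
const basicUnits = totalWearables2017 * basicShare; // ≈ 80.8M

console.log((basicUnits * 0.028).toFixed(1)); // ≈ 2.3M units of clothing
console.log((basicUnits * 0.021).toFixed(1)); // ≈ 1.7M units of earwear
```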

A Growing Market, Not Just for Apple
The wearables market may no longer be considered the next big thing by many market watchers, but growth here continues. Between 2015 and 2016 the entire market grew by 27.3%; in 2017 that growth slowed to 7.7%, with shipments reaching 115.4M units. Wearables face new competition for a share of the consumer wallet from emerging categories such as the smart home and virtual reality. Some consumers have entered and already exited the market; many others are still figuring out how these products best fit into their lives.

New technologies and capabilities will bring wearables back into the spotlight over the next few years, and I also expect them to play an increasingly important role on the commercial side of the market over time. And as I’ve noted before, I’m also convinced that wearables will play an important role in the evolution of augmented reality technologies. So, while Apple may well own a significant chunk of the wearables market for years to come, there are still plenty of opportunities for other vendors in this space, and it is much more than just an Apple Watch market.

Buzz Around Device as a Service Continues to Grow

This week, Device as a Service (DaaS) pioneer HP announced it was expanding its hardware lineup. In addition to adding HP virtual reality products, including workstations and headsets, the company announced it would begin offering Apple iPhones, iPads, and Macs to its customers. It’s a bold move that reflects the intense and growing interest in this space, as well as Apple’s increasingly prominent role on the commercial side of the industry.

First Came PCaaS
IDC’s early research on PC as a Service (PCaaS) showed the immense potential of this model. It’s exciting because it is a win/win for all involved. For companies, shifting to the as-a-service model means no longer having to budget for giant capital outlays around hardware refreshes. As IT budgets have tightened, and companies have moved to address new challenges and opportunities around mobile, cloud, and security, device refreshes have often been extended beyond what’s reasonable. Old PCs limit productivity and represent ongoing security threats, but that hasn’t stopped many companies from keeping them in service for five years or more.

PCaaS lets companies pay an ongoing monthly fee that builds in a more reasonable life cycle. That fee can also include a long list of deployment and management services. In other words, companies can offload the day-to-day management of the PC from their IT department to a third party. And embedded within these services is the provider’s ability to capture analytics that help guide future hardware deployments and ensure security compliance.

PC vendors and other service providers like PCaaS because it allows them to capture more services revenue, shorten product lifecycles, and smooth out the challenges associated with the historical ebb and flow of big hardware refreshes, which are often linked to an operating system’s end of life. HP was the first major PC vendor to make a broad public push into the PCaaS space, leveraging what it learned from its managed print services group. Lenovo has been dabbling in the space for some time but has recently become more public about its plans. And Dell has moved aggressively into the space in the last year, announcing its intentions at the 2017 DellWorld conference. Each of the three major PC vendors brings its own set of strengths to this competitive market.

Moving to DaaS
HP’s announcement about offering more than just PCs, as well as Apple devices, is important for several reasons. Chief among them is that in many markets, including the U.S. (where this is launching first), iOS has already established itself as the preferred platform in many companies. By acknowledging this, HP quickly makes its DaaS service much more interesting to companies who have shown an interest in this model, but who were reluctant to do so if it only included PCs. Second, while HP has a solid tablet business, it doesn’t have a viable phone offering today. For many companies, this would be an insurmountable blocker, but to HP’s credit, it owned this issue and went out and found the solution in Apple. It will be interesting to see if the other PC vendors eventually announce similar partnerships with age-old competitors. It’s worth noting that Dell also doesn’t have a phone offering, while Lenovo does have a phone business that includes the Moto brand.

It was also very heartening to see HP announce it would begin offering its virtual reality hardware as a service, too. Today that means the HP Z4 Workstation and the HP Windows Mixed Reality VR headset, but over time I would expect that selection to grow. As I've noted before, there is strong interest from companies in commercial VR. By offering the building blocks as a service, HP enables companies to embrace this new technology without a massive capital outlay up front. I would expect both Dell and Lenovo, which also have VR products, to do the same in time. And while VR represents a clear near-term opportunity, augmented reality represents a much larger commercial opportunity long term. There's good reason to believe that many companies will turn to AR as a Service as the primary way to deploy this technology in the future. And beyond endpoint devices such as PCs, tablets, phones, and headsets, it is reasonable to expect that over time more companies will look to leverage the as-a-service model for items such as servers and storage, too.

Today just a small percentage of commercial PC shipments go out as part of an as-a-service agreement, but I expect that to ramp quickly in the next few years. The addition of phones, tablets, AR/VR headsets, and other hardware will help accelerate this shift as more companies warm to the idea. That said, this type of change doesn't come easily within all companies, and there will likely continue to be substantial resistance inside many of them. Much of this resistance will come from IT departments that find this shift threatening. The best companies, however, will transition these IT workers away from the day-to-day grind of deployment and management of devices to higher-priority IT initiatives such as company-wide digital transformation.

At IDC we’re about to launch a new research initiative around Device as a Service, including multiple regional surveys and updated forecasts. We’ll be closely watching this shift, monitoring what works, and calling out the areas that need further refinement. Things are about to get very interesting in the DaaS space.

Apple’s Holiday Quarter iPhone Hat Trick

As is often the case, Apple was able to quiet rumors of the iPhone sky falling by announcing stellar results during its quarterly earnings call that put the company at the top of the market in terms of smartphone shipments, average selling price (ASP), and revenues. While total iPhone shipments were down slightly from the year-ago quarter, that doesn't tell the whole story. Based on IDC's preliminary numbers for the quarter, Apple shipped more smartphones than any other vendor at 77.3 million units (Samsung shipped 74.1 million). That's a 1.3% Apple decline versus a broader market that slipped 6.3%. Importantly, Apple shipped these volumes while enjoying an ASP that increased by more than $100 from the year-ago quarter, driven by strong sales of its iPhone X, 8, and 8 Plus. That means the company drove smartphone revenues of $61.6 billion, to lead the market. Not bad for a quarter when your marquee product started shipping in November.
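
For those who want to check the math, a quick back-of-the-envelope sketch using only the figures cited above shows how the shipment and revenue numbers imply the ASP:

```python
# Back-of-the-envelope check on the holiday-quarter figures cited above.
apple_units = 77.3e6     # iPhone shipments, holiday quarter (IDC preliminary)
apple_revenue = 61.6e9   # reported iPhone revenue, USD

asp = apple_revenue / apple_units
print(f"Implied iPhone ASP: ${asp:,.0f}")  # roughly $797
```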

Apple’s ASP Increases
Apple’s ability to dramatically increase its ASP year over year in a slowing market is impressive. Over the last few years, Apple has repeatedly introduced new iPhones with higher selling prices. While this usually results in an ASP spike during the initial quarter, things tend to fall back in subsequent quarters. But there’s not always a clear pattern to this rise and fall. For example, in 2017 the third fiscal quarter (second calendar quarter) was Apple’s second-highest at $703. For the total year of 2017 Apple’s ASP was $707, up from $647 for the full year 2016, which was down from the previous full-year ASP of $671.

To put Apple’s recent ASP performance into perspective, let’s look at Samsung’s numbers. While Apple out shipped the company in the holiday quarter, Samsung still shipped more smartphones in total for 2017 (317.3 million units versus Apple’s 215.8 million). But Samsung’s ASP has headed the opposite direction. Based on IDC’s data (Samsung doesn’t publicly announce units or ASPs), the company’s calendar-year third quarter ASP was $327 (Note: we don’t have 4Q ASP yet). For the first three-quarters of 2017 combined, Samsung’s smartphone ASP was $313. That’s down from $319 in 2016, and $344 in 2015. (Samsung’s ASP tends to spike in calendar Q2, around the launch of its latest flagship Galaxy S phone).

It’s worth noting that Samsung isn’t the only smartphone company with declining average selling prices, and Apple isn’t the only one with year-over-year increases. In fact, many of the top ten smartphone vendors have managed to increase their ASPs year-over-year through three quarters of 2017, but none have managed to increase so dramatically as Apple. And of course, none are operating at its scale.

Continued ASP growth?
So the question becomes, can Apple maintain or grow its iPhone ASP in 2018, or has it reached the top of the mountain? There are a number of factors to consider, including some things that are unique to this year’s market. One key question is whether everyone who wanted an iPhone X, and who could afford it, already bought it in the fourth quarter. This seems unlikely. While Apple sorted supply constraints quickly after launch, there were undoubtedly some who looked at early wait times and opted to hold off until the dust settled.

Another new wrinkle this year was Apple’s launch of three new phones instead of two. While the iPhone X has the highest ASPs, the iPhone 8 and 8 Plus also carry high prices and were a major driver in Apple’s quarterly increase. In past years Apple launched two new flagship phones, so we’re in uncharted waters with three, which means even as shipment mix shifts in subsequent quarters, the ASPs may hold nearer to the top than in the past.

Another element is Apple’s ongoing iPhone battery life public relations challenge. During the earnings call, one analyst asked Tim Cook if he felt Apple’s battery-replacement program might incentivize buyers to get a new battery for their existing phone and to hold off on buying a new iPhone. This might impact total iPhone shipments to a slight degree, but as wise folks have noted, these customers probably weren’t on the verge of buying a new top-line iPhone anyway. (Cook said Apple was more concerned about taking care of customers than worrying about its impact on future shipments.)

The bigger question for me is how Apple will price the new phones it will launch later this year. Supply-side chatter suggests that there will likely be at least one new X-class phone with a larger screen than today’s 5.8-inch product. Can Apple sell this phone at an even higher ASP than today’s iPhone X, or will it need to price this larger phone at today’s top-end and lower the price of the iterative 5.8-inch product? Also, do the 8 and 8 Plus get refreshed, or do they stay the same and see a price drop? My gut tells me the company may have maxed out its ability to raise the top-end price, but it has surprised me before so only time will tell.

The next several quarters should be instructive in this regard. If Apple’s ASPs drop significantly over the next six months, indicating a mix shift away from the top end, Apple will have a good sense of what the market will support. In the meantime, we have Samsung’s Galaxy S9 launch to watch later this month. How Samsung markets and prices this phone should be instructive, too.

HTC’s Vive Pro Targets Growing Commercial VR Market

Last year I wrote about the growing interest in virtual reality (VR) from industries such as retail, education, manufacturing, healthcare, and construction. These types of companies–and others–continue to show strong interest in the category, but some of the technical limitations and ergonomic issues with existing high-end VR hardware have been a roadblock for some. At CES, HTC announced a new version of its VR headset called the Vive Pro that addresses many of the issues commercial users have with today's shipping headsets and positions the company well for accelerated commercial shipment growth in 2018.

Resolution Boost
I had the opportunity to demo the new Vive Pro at CES, and the resolution upgrade in the Pro is a noticeable improvement. The standard Vive offers dual 3.6-inch OLED displays with 1080 by 1200 resolution per eye, while the Pro utilizes new dual 3.5-inch OLED displays with 1440 by 1600 resolution per eye. I had the opportunity to try several different applications, including a social networking app that took place inside a scene from the upcoming Ready Player One movie, and a medical training app, where another person guided me through a medical procedure. The increased resolution drives a much more immersive experience. It also makes it much easier to identify small details in the environment, as well as read text (a key for many commercial use cases).
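
The jump is bigger than the raw numbers suggest; a quick sketch of the per-eye pixel math makes the point:

```python
# Per-eye pixel math for the resolution upgrade described above.
vive_pixels = 1080 * 1200  # standard Vive, per eye
pro_pixels = 1440 * 1600   # Vive Pro, per eye

increase = pro_pixels / vive_pixels - 1
print(f"The Vive Pro renders {increase:.0%} more pixels per eye")  # ~78%
```

That roughly 78% increase in pixels per eye also helps explain why driving the Pro will likely require more PC horsepower, as noted below.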

Moving to offer improved headset resolution was a key target for hardware vendors across the VR landscape in late 2017 and headed into 2018. All of the shipping Microsoft-based mixed reality headsets from Dell, Lenovo, Acer, and HP have higher-resolution 2.9-inch LCD panels (1440 by 1440), and Samsung's Odyssey headset utilizes what are likely the same 3.5-inch, 1440 by 1600 resolution OLEDs as the Vive Pro.

HTC hasn’t yet disclosed the minimum PC specifications required to utilize the improved resolution of the Pro best, but company executives did note that driving a higher resolution experience will likely require more PC horsepower. And, of course, the content and apps need to support the increased resolution, too.

Improved Sound, Ergonomics
In addition to the improved displays, HTC also added integrated headphones and an amplifier to the Vive Pro headset. Sound is a crucial element of full immersion in virtual reality, and the lack of an integrated solution in the existing Vive was problematic: every time you take off the headset, you have to remove and manage a set of headphones, too. While this is merely irritating to most consumer users, it's a larger problem for commercial users who need to slip in and out of the headset often. It's also an issue in B2C scenarios such as retail, where sales associates are moving people in and out of the headset on a regular basis.

In addition to integrating the headphones, HTC also took the opportunity with the Pro to rebalance the entire headset with the goal of making it more comfortable to wear for longer periods of time. I didn't spend enough time in the Vive Pro to decide how big an improvement this was, but any improvement is a welcome one. HTC also made the headset easier to adjust with a new sizing dial, and there is a setting that lets you change the distance of the screens from your eyes. Additional improvements include new dual microphones with noise canceling and dual front-facing cameras.

Wireless Connectivity
Probably the single biggest request from commercial buyers when it comes to VR is the ability to ditch the cables that tether the headset to the PC. While there have been third-party accessories that do this, at CES HTC announced it would ship its own Vive Wireless Adapter later this year. Based upon Intel's WiGig technology, it utilizes the 60-GHz band. I wasn't able to test the adapter, but HTC says it offers a high-performance, low-latency experience.

Eliminating the cable addresses one of the biggest concerns that businesses have with VR: The danger of somebody tripping over the cable. Whether it is an employee or a customer, today’s tethered headsets represent a messy environment at best, and the move to wireless will help address this. Unfortunately, HTC doesn’t plan to ship the accessory standard with the Vive Pro, instead offering it as a separate upgrade for both the standard and pro versions of the headset when it ships in the third quarter of 2018. The company hasn’t set pricing yet.

One area that HTC hasn't addressed with the Pro is the continued need for standalone sensors in the room for six-degree-of-freedom tracking. Both Vive headsets and today's Oculus Rift use two sensors stationed in the room to do what is called outside-in tracking. The Microsoft-based products track movement using inside-out tracking integrated into the headset, which removes the need for these external sensors. The Microsoft-based products I've tested do this well, but many in the industry—including HTC—still consider external sensors more accurate. The Vive Pro will initially use the existing Valve-created Steam VR Tracking 1.0 software to drive the same sensors that ship with the standard Vive headset. Later this year, when Valve releases the Steam VR 2.0 Tracking software, HTC will bundle new sensors to support it. The new standard will offer an expanded ten-by-ten-meter coverage area, as well as the ability to use up to four sensors for additional tracking.

In all, with the Vive Pro and wireless accessory, HTC has done a good job of putting together a solid new package that addresses many of the hardware hang-ups that have caused some businesses pause when considering VR deployments. HTC says the Vive Pro will ship in the first quarter of this year, but it hasn’t announced pricing yet. I look forward to seeing how developers and companies utilize this updated technology, and how competitors respond with their own new hardware in the coming months.

What Magic Leap One Tells Us About the Near-Future of AR Hardware

Mega-funded startup Magic Leap recently unveiled its planned hardware developer kit, dubbed the Magic Leap One. The googly-eyed headset drew some unkind remarks for its looks, and some of the company’s comments surrounding it were vague at best and frustrating at worst. But the One does represent a well-financed company’s vision of where augmented reality hardware is heading in 2018, so a brief dissection of its design is illustrative of some of AR’s design challenges.

Three-Pieced Kit
The Magic Leap One consists of the head-mounted goggles (Lightwear), a tethered computer (Lightpack), and a controller (Control). The look of the Lightwear goggles might not win any fashion awards, but it's where we've long expected Magic Leap to bring its AR special sauce to the market. Specifically, the company says its lightfield photonics “generate light at different depths and blend seamlessly with natural light to produce lifelike digital objects that coexist in the real world.” To do all of this, Lightwear must not only believably create those images but also anchor those objects in the real world. A digital representation of a flower vase is only believable if it stays locked to the real-world table upon which you place it. To do this, the headset has numerous cameras and sensors that point outward, capturing what is happening around the wearer. In addition to capturing the environment, these sensors also play a role in capturing what the user is doing with their head, their hands, and their voice. Capturing this information is hard; processing it in real time is a very heavy computing lift.

Which is why I’m very happy to see that, at least for now, Magic Leap moved at least some of that processing off the headset and into the cable-tethered Lightpack. The company says the puck-sized device has processing and graphics power that’s comparable to a notebook computer. As I’ve been studying the AR market the last several years I’ve become increasingly convinced that the best AR experiences for the foreseeable future will require head-mounted displays that utilize computing power located off the headset. That processing may be from a purpose-built unit, as is the case here, or from a more general-purpose computing device, such as a smartphone. There are numerous reasons why this off-the-head computing is necessary, but the key ones include removing the battery weight from the headset, relocating the heat-producing CPU and GPUs away from the user’s face, and repositioning the various necessary radios such as LTE and WiFi away from the head. It will be some time before the industry can address these technical issues in a form factor that’s suitable for wearing on your face. In the meantime, it is best to move them elsewhere. (Indecently, I think this is how Apple’s predicted AR glasses would work, utilizing the processing power of the iPhone in your pocket).

Finally, there’s the Control navigation. The fact that Magic Leap plans to ship its first developer kit with a handheld controller is an acknowledgment that, at least for now, hand tracking isn’t sufficient for the experience the company is trying to create with its platform. Magic Leap says Control includes six degrees of freedom (comparable to the best tethered VR setups today), as well as haptic feedback. Adding the controller to the mix increases the amount of user-interface data the setup can capture, so while holding a controller may seem initially counterintuitive to an immersive experience, it may well bring substantial benefits to the table. The one rub is in commercial use cases where the employee needs both hands to work, but that’s likely not the use case Magic Leap is targeting out of the gate with this product.

By utilizing three different devices, spread around the body, Magic Leap can disperse design challenges such as weight and heat while maximizing the ability to include all the necessary sensors, processors, and batteries. One hopes that the result is a singular cohesive experience.

Artisanal Spatial Computing
Magic Leap unveiled the One on its website and through an in-depth article by Brian Crecente on Rolling Stone’s gaming site Glixel that’s well worth the read. The company hasn’t announced a ship date or price but claims it will ship sometime in 2018. Pushed on price, Magic Leap’s founder Rony Abovitz said, “I would say we are more of a premium computing system. We are more of a premium artisanal computer.” I’m not sure what that means, but I’m guessing it’s not cheap.

In a follow-up piece, Abovitz had this to say about what to call Magic Leap’s technology: “Personally I don’t like the terms AR, VR or MR for us, so we’re calling our computers a spatial computer and our visualization a digital light field because that’s probably the most accurate description of what we do. The other terms are totally corrupted.” While I can appreciate Abovitz’s comments here, I find this line of reasoning problematic for the same reason that I continue to find Microsoft’s use of Mixed Reality instead of Augmented Reality frustrating. As an industry, at some point, we must agree on what to call things. Otherwise, it is very hard to measure a market, drive growth, and facilitate standards.

Pricing, ship dates, and naming discussions aside, what Magic Leap introduced with the One certainly looks promising. And, as noted, this is a developer kit, designed to get programmers working on content for the company's forthcoming platform. I eagerly await the opportunity to try out the hardware and look forward to seeing how this reveal impacts the decisions of other AR-focused companies.

Lenovo’s Mirage Shows Both the Promise and Challenge of Consumer AR

Since the launch of Pokémon Go in 2016, and Apple and Google’s rollout of their respective augmented reality SDKs in 2017, conventional wisdom has noted that most consumers will experience AR first through the screen of their smartphone. Lenovo turned that expectation on its head with the recent launch of its Jedi Challenges product, which includes a Mirage headset into which you insert your phone. I’ve been playing with the hardware, fresh off a viewing of The Last Jedi, and while it’s a little rough around the edges and the content is very limited, there’s no denying that the experience offers a little taste of our AR-enabled future.

Setup Challenges
Lenovo, along with partner Disney, wisely chose Star Wars as the launch vehicle for this experience, knowing full well that many of us would jump through quite a few hoops for the opportunity to wield a lightsaber. The $200 package includes the headset, a lightsaber controller, and a tracking beacon. The setup requires numerous steps, starting with the download of the smartphone app (I used an iPhone 8 Plus for my testing). The app walks you through the steps, which include calibrating the lightsaber and room-tracking beacon, placing the phone inside a tray, sliding it into the headset, and connecting it via a cable. Setup was largely painless until the final steps, where I kept running into issues that required me to access the phone. The problem is, at this point in the process you've locked the phone inside the headset, which necessitates taking the whole thing apart again. Pro tip: Make sure your sound, whether through the phone's speakers or Bluetooth, is set at the right level before you entomb the phone.

Once I finally got through the setup, however, things picked up. The headset is relatively comfortable, and the field of view is limited but sufficient. The opening animations looked good, and the presentation is polished. Finally, you are instructed to turn on your lightsaber. When that blade appears, you’d have to be a scruffy nerf herder not to feel a genuine pang of excitement.

I walked through the first set of training scenarios, battled a series of increasingly bold battle droids, and then faced my first “living opponent” in Darth Maul. Game mechanics are rudimentary but well-conceived, the game does a good job of keeping the motion in a well-defined space, the haptic feedback is excellent, and the sound seemed immersive and directionally accurate through my AirPods.

While playing Jedi Challenges is a hoot, from a technical standpoint the limitations of using a smartphone, a single controller, and a tracking beacon become apparent quickly to anyone who’s had the opportunity to use one of today’s high-dollar commercial AR headsets. Most notable, for me, was the fact that the lightsaber blade was often just a hair behind the movement of the hilt. Equally notable is the fact that nothing in this world is particularly tethered to the floor upon which you stand nor bounded by the physical walls of your room.

For gameplay experiences such as Jedi Challenges, these aren’t deal breakers. But for real-world AR devices must be able to detect surfaces if they’re going to inject digital objects into our physical world. It’s one of the things that Apple seems to have gotten remarkably right with the first iteration of ARKit, which manages to do the job using just one camera on the iPhone.

After about 20 minutes of gameplay, I shut down the system, took the phone out of the headset, and was astounded at just how warm it was running. While the Mirage headset only uses a portion of the smartphone screen, there is obviously a fair amount of graphics processing and battery usage occurring. I've experienced similar issues with smartphone-based virtual reality using Samsung's Gear VR and Google's Daydream VR, but the advantage of Mirage is that the phone isn't radiating that heat directly onto your face. I also found the Mirage experience superior in that I could still see what was going on in the room around me, versus the total immersion required of VR, which dramatically decreases the likelihood of a lightsaber going through the family flatscreen during battle.

A View of the Future
It’s important to point out the AR issues and limitations with this product, but it’s also important not to get too hung up on them. A year ago, I expected my only day-to-day AR interactions to happen by holding my screen out in front of me. But now I can have a very basic head-mounted experience at home without having to spend thousands of dollars for a cutting-edge headset. Today the platform is quite limited, but I look forward to seeing what else Lenovo and Disney might show us through the Mirage in 2018. And I also think the product will also help drive developers’ imaginations around what’s possible with consumer AR.

In the near term, commercial AR will continue to drive much of the headset development in the world, as companies are willing to pay the high prices necessary to acquire the hardware and software needed because they see a clear path to return on investment. But with Lenovo's Mirage and the likely stream of copycat products it will engender, it seems increasingly clear that consumer AR headsets will find their way to some mainstream users in the not-too-distant future.

The Commercial Opportunity for the Always-Connected PC

At the Always-Connected PC launch event earlier this week, Microsoft and Qualcomm seemed to focus a great deal of their attention on the consumer opportunity for these new Snapdragon-based Windows computers. While there is certainly a market for this technology among some percentage of consumers, I would argue that the larger near-term opportunity is in the commercial segment, where connectivity and long battery life drive real-world productivity gains and measurable cost benefits.

Connected Consumer?
Carolina Milanesi discussed the launch event in detail earlier this week, including some of the ongoing app issues Microsoft faces as well as the challenges associated with convincing consumers to pay for the carrier service required for an always-connected PC. Beyond these roadblocks, there's an additional fundamental issue. Many consumers, with students being the exception, tend to use their PCs in one place: inside their house. In other words, ultra-long battery life and LTE connectivity are both nice to have, but not critical to a large percentage of consumer PC users.

However, for highly mobile workers, those two features are the holy grail of productivity. I travel extensively for work, and while today’s modern PCs offer substantially more battery life than ever before, I still often find myself working around my PC’s battery limitations. Sometimes it’s a 13-hour trip to Asia, where I do the important work up front, constantly eyeing the battery life indicator as it slides toward zero. Other times it’s running from one presentation to another, invariably forced to plug in before the last meeting, so the PC doesn’t die mid-presentation. The idea of a notebook that runs for 20 hours between charges is a game changer for users like me. The prospect of going days at a time between charges sounds almost too good to be true.

Likewise, there’s the issue of connectivity. Invariably somebody will point out that you can always connect to your phone as a hotspot, and yes that is an option. But it’s a task that takes time and effort to do, which can be problematic in some back-to-back meeting scenarios. And when you’re connecting like this, in an ad hoc way, everything must update at once, which means a flood of emails, etc. And tethering invariably leads to a secondary issue: Running down your smartphone battery. After years of carrying an LTE-enabled iPad, the benefits of an integrated LTE connection are quite clear to me.

Another interesting feature of these new PCs is their instant-on capability. Today’s PCs boot up and resume from sleep much faster than ever before, but they’re still far from instantaneous. The idea of a PC that wakes at the speed of a smartphone has clear productivity benefits.

Cost Savings and Challenges
So it’s clear that a subset of commercial users would embrace the opportunity to use an Always-Connected PC. Convincing their companies these devices are a cost-effective idea is the next challenge. But that’s not difficult when you can articulate the productivity advantages of outfitting high-output mobile employees with these devices. And yes, there is a monthly cost associated with connecting them to the network, but that cost can be rather quickly justified when you consider the ongoing costs many employees accrue while traveling and connecting to fee-based WiFi networks in hotels and other locations. Plus, there are the real-world security issues associated with connecting to random WiFi networks in the wild. And an LTE notebook might also drive cost savings for companies who have full-time remote employees that currently expense their home office broadband connections.

Probably the bigger challenge here is convincing old-school IT departments to try a non-Intel, non-vPro-enabled Windows PC. These folks will also likely balk at the idea of Windows 10 S (the shipping OS on the initial launch devices, which is upgradeable to Windows 10 Pro). Some will also cringe when they hear that 32-bit x86 apps run via emulation (and 64-bit apps aren't compatible). Finally—and this is the most reasonable pushback—many will need to see real-world benchmarks that prove these systems are competitive with today's x86-based systems for the use cases in question.

While some of these IT departments will likely pilot some of these new consumer-focused products, others will undoubtedly wait until Microsoft, Qualcomm, and their hardware partners move to ship more commercial-focused products. Still others will wait to see how commercial LTE-enabled systems based on Intel's 8th-generation processors compare to Windows on Qualcomm. And that may well be the most exciting result of the news this week. With Qualcomm focused on the Windows PC segment, AMD resurgent in the space, and Intel working hard to sustain its position, all Windows PC users—consumer and commercial—will eventually benefit, and I can't wait to test the first systems. Likewise, it will be interesting to see the eventual response from competing platforms such as Google's Chrome OS and Apple's macOS.

Samsung’s Multi-Platform Virtual Reality Push

Why limit your opportunities by choosing just one possible winner when you can back them all? That seems to be Samsung’s current operating principle when it comes to throwing its weight behind the major players in the brewing virtual reality platform wars. At this point, the company has now announced hardware that supports the VR platforms of Oculus, Google, and Microsoft. It’s certainly not the first time Samsung has hedged its platform bets, but it may be the only time it has backed three different major platforms at once.

Place Your Bets
Samsung moved fast to be the first major hardware vendor to embrace virtual reality's move toward the mainstream. The company backed Facebook's Oculus VR platform early with the Gear VR screenless viewer product that works with its high-end Galaxy S and Note smartphones. It started shipping the Gear VR headset in late 2014, and through the first half of 2017 it had moved over 6 million units. Last year, it announced that its high-end phones would also support Google's Daydream VR platform, which also utilizes a screenless viewer headset (this one made by Google). And more recently, the company announced it would support Microsoft's Mixed Reality platform with the first OLED-based tethered VR headset on the platform, called the HMD Odyssey.

Of course, Samsung is no stranger to platform battles. It was one of the first major vendors to support Google's Chrome OS on notebooks, in addition to shipping Windows-based PCs. And while the company's own Tizen OS failed to gain traction versus Google Android on phones, it has established itself as a viable alternative to Android Wear on the company's wearable devices. Just as no large-scale manufacturer wants to become beholden to a single component manufacturer for its success, Samsung has often worked hard to ensure its fortunes never rest with a single platform provider.

That said, the company’s moves around VR have seemed even more calculated and savvy. After throwing in with Oculus early, Samsung has worked hard to incentivize developers to create apps for its own ecosystem on top of that platform. During its recent developers’ conference, Samsung released a press release noting that the Gear VR’s ecosystem includes more than 1,000 apps and 10,000 360-degree videos. The release also talked about Samsung’s own first-party apps and services. These include Samsung Internet VR (a Gear VR browser), Samsung PhoneCast VR (an app that translates 2D apps into 3D), VRB Foto (a social 360 photo sharing solution), and Gear VR Framework (an open source VR rendering engine with a Java interface for traditional Android Developers). What’s particularly notable about Samsung’s release is that while it talked at great length about Gear VR, it failed even to mention platform partner Oculus.

When Google announced its Daydream VR platform in 2016, it listed Samsung as one of the phone makers that would eventually support the platform. Eventually, Samsung did roll out updates enabling Daydream on its high-end phones. Here again, however, Samsung's motive seems less about supporting Google's VR ambitions and more about acquiring something it needs: a strong partnership with Google, which is proving important now as the company works to better leverage the new ARCore software developer kit to bring augmented reality to Samsung's phones. Most Android vendors are likely to support ARCore, but Samsung's interest runs deeper in that the company has made clear that it plans to utilize AR as part of Bixby Vision, a merging of its own smart assistant platform with its smartphone cameras to drive a new “hybrid deep learning system.”

Finally, at Microsoft’s recent Mixed Reality event in San Francisco, Samsung completed its VR triple play with the announcement that it would offer its own headset in support of the Windows 10-based platform. The Samsung product, which ships in November, is notable for its high-end design, integrated audio, and high-quality OLED displays. The result is a notably more premium product than the other Windows Mixed Reality devices. And it’s one that likely only Samsung—with its R&D budget and access to its own high-dollar display technology—could pull off. Samsung’s entry here has notably increased the level of interest in this platform, and some expect Samsun’s product to be in strong demand through the holidays.

Next: VR Content Capture
Despite now having hardware geared toward VR consumption on all three major VR content platforms, Samsung isn't stopping there. Earlier this year it shipped its latest consumer-focused standalone 360-degree camera, the $230 Gear 360. More recently the company announced it was entering the professional 360-degree camera space with the 360 Round. The 360 Round uses 17 lenses (eight stereo pairs positioned horizontally and one lens positioned vertically) to capture 4K 3D video and spatial audio, which it can stream live. The 360 Round will sell for an estimated $10,500. With these latest products, we see Samsung further flexing its design and manufacturing muscle, moving to enable yet another piece of the VR ecosystem.

Ultimately, if VR does take off in either the consumer or commercial spaces (or both), Samsung has positioned itself well to capture a significant amount of the hardware value generated by the technology. It now has hardware focused on both the creation and the consumption fronts. Samsung's efforts over the years to field its own platforms, or to seek out competing platforms to protect itself from its own partners, haven't always been successful. But the company has clearly learned a great deal from these experiences, and that makes it easier to understand why it is backing all the current VR platform players. Whoever ultimately wins, Samsung will be there to capture a portion of the revenues.

Virtual Reality’s Desktop Dalliance

The hardware landscape for virtual reality evolved dramatically in just the last few weeks, with new product announcements from Samsung, Google, and Facebook that span all the primary VR platforms. While the new hardware, and some lower pricing, should help drive consumer awareness around the technology, perhaps the most interesting development was both Microsoft and Facebook demonstrating desktop modes within their VR environments. These demonstrations showed both the promise of productivity in VR and the challenges it faces on the road to broader adoption.

Microsoft’s Mixed Reality Desktop Environment
For the last few months, Microsoft and its hardware partners have been slowly revealing more details about both the Mixed Reality headsets set to ship later this month and the upcoming Windows 10 Fall Creators Update that will roll out to users to enable support of the new hardware. At a recent event in San Francisco, Microsoft announced a new headset from Samsung that will ship in November, which joins products from HP, Lenovo, Dell, and Acer that will ship in October. During that event, Microsoft Fellow Alex Kipman gave attendees a tour of the Cliff House, the VR construct inside Windows 10 where users interact with the OS and their applications.

At the time, it seemed clear to me that one of the obvious advantages Microsoft brought to the table was the ownership of the OS. By having users move within the OS virtually, you decrease the number of times the user must jump between the 3D world of VR-based apps and the 2D world of today's PC desktop environment. More importantly, the Cliff House also offered a productivity-focused room where you could go and set up a virtual desktop where you utilize your real-world keyboard and mouse to use traditional desktop apps. Essentially a desktop space where your monitor is as wide and as tall as you desire to make it, providing the virtual real estate for a multitasking dream (or nightmare, depending on your perspective). Microsoft noted at the time that the number of apps running in such a scenario is limited primarily by the PC graphics card's ability to support them. I couldn't test the actual desktop environment at that event, but it certainly looked promising.

Facebook Announces Oculus Dash
At this week’s Oculus Connect conference Facebook offered its market response to Microsoft, announcing a permanent price cut to its existing Oculus Rift product ($399), a new standalone VR product called Oculus Go ($199), and additional details about its future Rift-caliber wireless headset code-named Santa Cruz. Just as important, though, was the company’s announcements about updates to its platform (Oculus Core 2) and its primary interface mechanism (Dash) that includes a desktop environment. With these announcements, Facebook rather effectively addressed Microsoft’s perceived advantage by introducing a VR environment that appears, at least from the on-stage demos, to bring many of the same interactive features as Microsoft’s to the Oculus Rift. I wasn’t at the Facebook event and haven’t tested its desktop environment yet, either, but it also looked promising. Whether the company will be able to drive the same level of desktop performance as Microsoft, which obviously has the advantage of controlling the underlying OS, remains to be seen.

The 2D VR Conundrum
One issue that both Microsoft and Facebook face as they push forward with their desktop environment plans is the simple-to-note but hard-to-address fact that pretty much 100% of today's productivity apps are two-dimensional. The result is that when you drop into these fancy virtual reality desktops, you're still going to be looking at a two-dimensional windowed application. And you're going to enter and manipulate data and objects using your real-world keyboard and mouse. What we're facing here is the mother of all chicken-and-egg problems: Today there are very few virtual-reality productivity apps because nobody is working in VR, but because nobody is working in VR few app developers will focus on creating such apps.

One of the primary reasons I've been bullish on the long-term prospects of virtual reality (and augmented reality) is that I envision a future where these technologies enable new ways of working. Up until now, humans have largely adapted to the digital tools on offer, from learning to use a QWERTY keyboard and mouse to tapping on a smartphone screen filled with icons. VR and AR offer the industry the opportunity to rethink this, to define new interface modes, to create an environment where we do a better job of adapting the tool to the human, acknowledging that one size doesn't fit all.

Facebook and Microsoft's continued reliance on the desktop metaphor at this early stage is both completely understandable and a little frustrating. These are the first stops on what will be a long journey. Ultimately, it will be up to us as end users to help guide the platform owners and app developers toward the future we desire. I expect it to be a very interesting ride.

Apple Watch and Shipping Early Vs. Late

I’ve spent the last week using the Apple Watch, Series 3 with cellular and my experience has been quite good. Now running WatchOS 4.0 with a long list of new features, and utilizing faster silicon, it is easily one of my favorite pieces of new technology this year. I’ve used every version of the product and watched as Apple iterated on the hardware and software, and sharpened the device’s very reason for being. Which leads to this question: Was Apple better off shipping an imperfect product back in April 2015, or should it have waited until 2017 to ship this more fully-realized product?

Series 0: Underpowered and Overtaxed
The first Apple Watch was a marvel of miniaturization, but a bit of a rough ride for early adopters with too-high expectations. The first-generation silicon inside often struggled to run some of Apple's first-party apps, let alone the third-party software that developers rushed to build in anticipation of Apple recreating the success of the original App Store. The watch was always going to be an accessory to the iPhone and not a replacement, but that first product was entirely too dependent on that wireless umbilical, and watch apps themselves literally had to run on the phone. That said, Apple consistently pointed to strong consumer satisfaction with the product, and in the end, the biggest detractors were likely tech reviewers and disgruntled app developers.

At WWDC in 2015, Apple announced WatchOS 2.0, which gave developers the ability to run native apps on the watch itself and to access more of the hardware (including the Taptic Engine). This would prove to be key to a better experience, but it also meant developers who had built for the first version of the OS had to port over those early apps to run on the new platform. This was easy for some, harder for others depending on the app's underlying architecture.

In September 2016 Apple rolled out the Series 2 and re-released the first watch as the newly christened Series 1 with upgraded silicon. The performance was demonstrably better, and as the company moved to WatchOS 3.0, the capabilities of the product continued to improve. Even as the user experience got better, though, many developers had all but abandoned their existing Apple Watch apps or plans to bring new apps to the platform. But Apple's first-party apps kept improving in both functionality and performance, and my experience with the Series 2 was almost entirely positive. Over time I came to find its fitness tracking capabilities and its messaging features hugely beneficial. So much so, in fact, that I put my favorite mechanical watch in a drawer–likely for good.

Series 3 with Cellular: Indispensable
Which brings us to the current version of the watch I'm testing, the Series 3 with cellular. Unlike many early reviewers, I had zero issues attaching the watch to my current data plan with Verizon. (And yes, $10 per month for data I've already purchased is too much.) I've also not encountered the captive WiFi issue, although it's likely to come up when I travel next week. Overall, I find the new features in WatchOS 4 to be as good or better than advertised, and I'm eagerly consuming the myriad new data points available through the updated heart-rate tracking features.

The LTE watch has 16GB of storage, which has allowed me to download numerous music albums for listening on the go, and that experience has worked without issue when using the AirPods. Siri on the Series 3 feels like a more fully formed digital assistant, both in terms of her upgraded ability to speak and the number of useful tasks she can now accomplish.

But without a doubt, the most compelling feature on this new watch is the cellular connectivity. Whether it's leaving the phone locked in the car at the gym, hitting the trail with the dog, or just walking around the neighborhood with my family, the ability to leave it behind while remaining connected results in a feeling of liberation that's downright addictive. I ran into no technical issues while sending texts and making calls directly on the watch.

When Apple's Jeff Williams introduced the Series 3 with cellular, he said, "This has been our vision from the very beginning. We believe built-in cellular makes Series 3 the ultimate expression of Apple Watch." So if that's true, should Apple have waited?

I don’t think so. While it was unusual to watch Apple struggle to find its footing with the first product, the company clearly learned a great deal by shipping. The product we have today likely wouldn’t exist if it didn’t ship back in 2015. As the current WiFi issue illustrates, there is only so much you can achieve behind closed doors, and eventually you must put the product into customers’ hands.

IDC estimates that Apple has shipped about 30 million watches to date. Apple itself noted during the keynote that it had moved past Rolex to become the highest-revenue watchmaker in the world. The watch isn't going to replace the iPhone, in terms of revenues for Apple or in terms of usage for customers. But it clearly represents the next chapter in Apple's constantly evolving story of bringing together hardware, software, and services. In fact, I'd written off the Apple Music service for my own use (I'm an audio geek who prefers the higher-bitrate Tidal). But now I plan to re-up with Apple Music when it becomes available for streaming to the watch. And I suspect I won't be the only one to do so.

The big question now is whether app developers will return to the WatchOS platform. I suspect that many will be skeptical of the opportunity, and rightly so. This is one of the challenges associated with shipping hardware early and asking people and companies to risk their time and money to build for a new platform. Apple’s big challenge going forward will be convincing these developers to give WatchOS another spin. As others have noted, I suspect many of these next-generation WatchOS apps will exist primarily to bring new services to consumers.

The Broader Implications of Apple Watch with LTE

The iPhone X rightly garnered most of the world's attention from Apple's launch event this week, but the company's announcement of a new Apple Watch Series 3 with LTE and new WatchOS 4 updates excited many of us who closely watch the wearables market. The new $400 product may not significantly change the trajectory of Apple's near-term smartwatch growth, but several of the technical features it contains are substantial. It demonstrates Apple's technical prowess, and some of these additions have the potential to reverberate through the tech industry and adjacent product categories.

More Tech, Same Form Factor
As Apple and other tech firms such as Samsung continue to push the boundaries of miniaturization, it is easy to take products that appear to be iterative in nature for granted. But the amount of next-generation technology that Apple crammed into the Apple Watch Series 3, expanding the form factor by a scant 0.25 mm, is quite impressive. In addition to the full LTE and UMTS cellular radio, Apple has also added a new dual-core S3 processor it claims is 70 percent faster than its predecessor, and a new wireless chip (W2) that offers notably faster WiFi and Bluetooth connectivity. The Watch smartly switches from Bluetooth to cellular when you separate it from the phone and switches back when you come back into range. Apple also added a barometric altimeter that measures relative elevation and moved the device's storage to 16GB (the non-cellular watch still has 8GB).

New Hardware, New OS
As always, Apple is launching the new hardware with a brand new operating system. WatchOS 4 has a long list of new features, but two of the most interesting to me and other health- and fitness-focused users are an updated heart-rate tracking feature and new activity options. Going forward, the Apple Watch will monitor your heart rate all day long, instead of just when you start a workout. By capturing heart rate across a range of activities, from resting to walking to running and more, the watch can over time build a more accurate view of your fitness and health. Once the watch establishes your baseline, it can provide more precise fitness targets during workouts, and can help you understand how your body recovers from workouts. It can also monitor you for issues during the day, so if your heart rate is acting abnormally during rest, the watch will alert you.

WatchOS 4 also continues Apple's tradition of bringing additional fitness options to the hardware. Among the most interesting is the new High-Intensity Interval Training workout option. During these types of exercise sessions, users participate in different physical activities to increase and then decrease their heart rate. The current watch has no facility for capturing this increasingly popular form of exercise. The new OS also brings to market a feature Apple announced at WWDC that will let the watch talk to future fitness machines enabled with the new GymKit.

Leaving the Phone Behind
Wearable skeptics have long suggested that adding LTE will do little but decrease a device's battery life. And frankly, anyone who thinks consumers en masse will drop their smartphones for an LTE-connected wearable is missing the point. The fact is, there are many times when leaving the house without the phone would be highly desirable. And Apple smartly made a point of enabling the ability to stream Apple Music from the watch right out of the box. That means you can connect your watch to a set of wireless AirPods and listen to tunes without the phone, too. The non-phone use cases are admittedly limited, but I'm interested in the idea that an always-connected Apple Watch might allow me and others to partially—but not entirely—disconnect from the world for parts of the day. With the phone left behind, the compulsion many of us have to constantly check email and news may diminish. But if an emergency text or call comes through, you'll still receive it. The idea of reclaiming parts of my day, and being more present and just slightly less connected, sounds quite appealing to me.

eSim’s Big Moment
Finally, there's what may well end up being Apple's most impressive technical feat: the purported seamless ability to add the watch to an existing carrier data contract and phone number. One of the biggest areas of friction for adding LTE to new devices is the fact that it typically involves a physical SIM card and a sometimes frustrating interaction with the carrier. The plan for Apple Watch is to utilize an eSIM that will negate the need for receiving a physical SIM in the mail or at a telco-provider location. There has been some pushback on pricing, as it looks like US carriers will charge customers $10 per month. This is the same pricing as adding an LTE tablet today, and at some point, the carriers need to stop asking customers to repeatedly pay extra to access the data they've already purchased.

But what's more important here is this: While the Apple Watch isn't the first product to support eSIM, it may prove to be the most successful one to date. If this turns out to be as easy to do as Apple promises, it will open up the possibility of consumers embracing this technology going forward. That could mean an easier ramp for future cellular-connected products. We know that ARM-based, LTE-focused Windows 10 systems should appear in the market early next year. And at some point in the future, Apple may decide to address the market demand for an LTE-enabled MacBook. If Apple has figured out how to make this process less painful, it may prove to be among the more notable achievements to come from this important product launch cycle.

Google’s Augmented Reality Course Correction

Earlier this week Google made a huge strategic shift in its plans around augmented reality, announcing ARCore, a software developer kit that will enable Android-based AR apps in much the same way that Apple’s ARKit will enable AR apps in iOS. The announcement reflects a needed course correction by Google and has the potential to dramatically increase the number of consumers with access to phone-based AR around the world. It may well signify the company’s changing expectations around virtual reality, too.

Tango’s Tortured Dance
Before this announcement, Google seemed to be sticking with its pioneering but problematic Tango AR technology. Tango, which began life as Project Tango before Google moved to make it official, requires very specific, high-performance smartphone hardware components. The cost of these components and the power requirements of Tango made it very difficult for hardware vendors to build supporting smartphones, and at this writing, there are still only two shipping products. Google says it has been working on ARCore for some time, but, assuming that's true, the company clearly wasn't ready to show it at Google I/O in May. At the time, the company was still talking about Tango on stage, although the presentation was pretty lackluster, as I noted at the time.

The key difference between Tango and ARCore is that the latter will work without any special hardware. The core technology of ARCore focuses on three key areas: motion tracking, environmental understanding, and light estimation. The first uses the phone's camera to understand where the smartphone is in the room and to measure changes as the phone moves within the space. Environmental understanding is the tracking of surfaces and objects within the space around the phone so that you can place AR objects into that space. And light estimation is how ARCore captures the light in an environment so that the AR objects have the right lighting, which is key to making them look more like real objects. I haven't seen the technology in person, but the videos on Google's site make it look pretty good.

Big Targets
Google says that ARCore will work on millions of existing smartphones, and it will roll out the technology first to the Google Pixel and Samsung S8 devices running 7.0 Nougat and above. The company says it is working with a wide range of Android smartphone vendors to enable it on their devices, and it expects to have ARCore running on hundreds of millions of phones by the end of the preview period. When Apple ships iOS 11 to its customers later this year, it will enable ARKit for 300-400 million existing iPhones and iPads. The impact of this one-two punch can't be overstated: AR is coming to the masses, and it is coming soon.

Ben Bajarin’s recent piece on this topic articulated the economics around Google and Apple’s ecosystems and the opportunity this represents for developers. What’s particularly interesting about this is that while Android obviously has a much larger installed base than Apple, ARCore will reach a significantly smaller percentage of that installed base then ARKit will within the iOS installed base. And, of course, iOS owners spend significantly more on apps than do their Android-owning counterparts.
Impact on VR?
Google has always been a company that has been willing to change direction, and even kill products when it’s clear the market isn’t responding to what’s on offer. Whether it happened before Apple announced ARKit or shortly after that, the fact that Google finally recognized that Tango wasn’t going to drive mainstream adoption of AR and moved to do something about it is commendable. The company says it learned a great deal from Tango that will make ARCore better, and I believe that. It will be interesting to see how the company leverages those learning in the market.
We should also pay close attention to how this shift impacts adjacent technologies at the company. At Google I/O, the company’s virtual reality platform Daydream received significantly more stage time than Tango. But the company’s progress in this area, especially around Daydream-enabled smartphone launches, has been pretty slow. At its developer conference, Google said it expected to see tens of millions of Daydream-enabled smartphones in the market by the end of 2017, a number much smaller than its new target for ARCore. Google is a big company, and it can drive more than one smartphone initiative at once, but it’s hard not to look at ARCore and see the potential for the type of scale Google really likes. It’s also worth noting that Google filed the blog post announcing ARCore under “Google VR” on its site.

However it all plays out, one thing is clear: having both Apple and Google focused on mobile AR is good for the industry, good for developers, and good for consumers. And while the early apps from both will focus on delivering augmented reality experiences on the phone screen, I continue to believe that eventually the best experiences will appear on a set of glasses wirelessly tethered to the phone. With Google and Android now more fully in the mix, that outcome seems even more assured, and when it happens, it will do so at a much wider range of price points.

Tech Disruption of Financial Services

Technology writers love to write about disruption: when a simpler, less costly product or service displaces an incumbent one that’s more complex and costly. One industry that’s ripe for disruption is financial services. Any non-expert who’s had to deal with a 401(k), IRA, or other investment vehicle knows that these products are often outrageously complicated. And the actively managed mutual funds that tend to make up the bulk of these offerings are quite expensive, with fees of 1-2% on the dollars invested, charged on top of the fees of your personal advisor. In recent years, a handful of companies, often given the unfortunate label of robo advisors, have introduced technology-based products that promise to simplify the process for investors at lower costs than traditional services. (NOTE: I’m not a financial advisor, so don’t construe the following as financial advice.)

Using Tech to Invest
Robo advisors use algorithms to handle a long list of things that have traditionally fallen to investors themselves, or that they’ve typically paid a human advisor to handle. Instead of using actively managed funds, which carry the aforementioned fees, they tend to use low-cost index funds or exchange-traded funds (ETFs). And instead of charging another 1-2% on top of this to manage your account, they tend to charge about 0.25% on average (some even offer to manage assets up to a certain dollar amount for free). For these companies, it is all about scale. Here are some of the things they can do:

  • Pick the initial investments. The most daunting task for any novice investor comes right up front, when they need to pick the investments for their money. Robo advisors do this for you by asking a series of questions to ascertain your risk tolerance and goals. Most also factor in your existing assets. While many existing financial services offer similar tools, in the end they typically still force you to choose your own investments based on the automated advice. Robo advisors, by contrast, handle it all, explaining what they plan to buy and why. As an investor, you can then choose how much you want to know about your portfolio, from a single top-line number to high-level asset-class descriptions to individual fund names.
  • Rebalance the portfolio. Most traditional financial services will rebalance a portfolio (moving funds from one bucket to another to stay within prescribed investment targets) on a quarterly basis at best. Because robo advisors are fully automated systems, most rebalance a portfolio whenever it drifts out of balance, not on a set date (see the sketch after this list).
  • Tax-loss harvesting. High-dollar investors have had access to tax-loss harvesting from traditional financial firms for years, but robo advisors make this complicated process available to the average investor. Essentially, it is selling a security that has declined in value to realize a loss that offsets taxes on gains elsewhere.
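
To make the mechanics concrete, here is a minimal Python sketch of the drift-based rebalancing and tax-loss harvesting described above. The thresholds, tickers, and dollar figures are hypothetical, and real services also account for wash-sale rules, lot selection, and trading costs.

```python
# Minimal sketches of threshold-based rebalancing and tax-loss harvesting.
# All thresholds, tickers, and dollar figures are hypothetical examples.

def rebalance_trades(holdings: dict, targets: dict, drift_limit: float = 0.05) -> dict:
    """Return dollar trades (+buy / -sell) whenever any asset class has
    drifted more than drift_limit from its target weight."""
    total = sum(holdings.values())
    drifted = any(abs(holdings[a] / total - w) > drift_limit for a, w in targets.items())
    if not drifted:
        return {}
    return {a: w * total - holdings[a] for a, w in targets.items()}

def harvest_candidates(lots: list, min_loss: float = 500.0) -> list:
    """Flag tax lots whose unrealized loss exceeds min_loss; selling them
    realizes a loss that can offset taxable gains elsewhere."""
    return [lot for lot in lots
            if (lot["price"] - lot["cost"]) * lot["shares"] <= -min_loss]

# Hypothetical portfolio: stocks have run up past their 60% target weight.
holdings = {"stock_etf": 70_000, "bond_etf": 30_000}
targets = {"stock_etf": 0.60, "bond_etf": 0.40}
print(rebalance_trades(holdings, targets))  # sell ~$10k stocks, buy ~$10k bonds
lots = [{"ticker": "INTL_ETF", "shares": 100, "cost": 52.0, "price": 45.0}]
print(harvest_candidates(lots))             # $700 unrealized loss -> candidate
```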

Not for Everyone
Different robo advisors offer additional features, but the underlying theme is the same: using technology to replace high-cost, high-touch human services with lower-cost, highly automated systems. The result is that more people gain access to good investment services, and they potentially take home more of their investment earnings over the long term.
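
A quick back-of-the-envelope calculation shows why the fee difference matters so much over time. The 7% gross return and the starting balance below are illustrative assumptions, not a forecast; the fee levels match the ranges cited above.

```python
# Back-of-the-envelope fee drag: identical gross returns, different annual fees.
def final_balance(principal: float, gross_return: float, fee: float, years: int) -> float:
    """Compound a starting balance at (gross_return - fee) per year."""
    return principal * (1 + gross_return - fee) ** years

principal, gross_return, years = 100_000, 0.07, 30    # illustrative assumptions
traditional = final_balance(principal, gross_return, 0.015, years)   # ~1.5% all-in
robo = final_balance(principal, gross_return, 0.0025, years)         # ~0.25% advisory
print(f"traditional: ${traditional:,.0f}")   # ~$498,000
print(f"robo:        ${robo:,.0f}")          # ~$710,000
```

On those assumptions, the lower-fee account ends up more than $200,000 ahead after 30 years.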

Obviously, not everyone will be comfortable turning over their money to a highly automated system. Many people will always want to know that there is a real, live person watching over their money, even if that means paying more. At present, robo advisors manage an exceedingly small percentage of the funds under management in the world. But many of the large, existing financial services firms have recently moved to create offerings similar to those from VC-backed startups. Online reviews of these me-too services have been largely lukewarm, as they tend to cost more and offer fewer features, which is what commonly happens when incumbents respond to the arrival of disruptors. It will be very interesting to monitor the growth of robo advisors over the next couple of years: how much of the market they capture, how they impact the broader financial markets, and how well their customers fare over time.

The Burgeoning Commercial VR Opportunity

Consumer uptake of virtual reality might be taking longer than some pundits expected, but the technology is finding robust traction on the commercial side of things. In fact, IDC recently published a Worldwide AR/VR Spending Guide report that predicts commercial spending on hardware, software, and services related to virtual reality will surpass consumer spending on the technology this year. What makes this particularly interesting is that this commercial growth is taking off despite the dearth of commercial-focused hardware in the market.

Strong Uptake Across Numerous Verticals
Many of the challenges facing VR in the consumer market, such as the high cost of hardware, the complexity of setup, and the lack of mainstream content, aren’t major issues when it comes to commercial deployments of the technology. And across many different verticals and use cases, the benefits are obvious and the potential return on investment is clear. IDC’s research on the topic has so far explored VR in 12 different industries and across 26 different use cases. And remember: it is still early days.

Some of the most compelling industry use cases include:

  • Retail: Perhaps the most-cited use case for VR is the high-end automobile showroom, where potential buyers can view a much wider range of car interiors and options in VR than any dealer could ever stock on the lot. In the future, you can imagine moving beyond simply kicking the tires to driving the car on a virtual track. Retail use cases will expand across all types of products and may well become one of the ways traditional brick-and-mortar retailers compete with online stores.
  • Education/Knowledge Transfer: From training firefighters and soldiers to educating engineers and school kids, VR is going to drive dramatic shifts in how people learn. In the first scenario, people receive training in situations too dangerous or expensive to simulate in the real world. In the second, students gain access to brand-new ways of interacting with and absorbing information that are less passive and more active.
  • Manufacturing: VR is already taking off in both process and discrete manufacturing. The use cases range from the collaborative, iterative process of creating products to the training of engineers and others on how to run massive, complex manufacturing lines. The potential for VR to disrupt age-old manufacturing processes, especially when combined with 3D printing, is massive.
  • Healthcare: VR will impact both the practitioners of medical care and those receiving it. On the practitioner side, VR will help new doctors learn and existing doctors see issues in new ways, pre-visualize complex procedures, and gather second opinions from remote colleagues. For patients, VR will offer the ability to better understand what’s going on in their own bodies, as well as a wide range of treatment options for mental health issues.
  • Construction: VR is already in use on major construction projects around the world. From initial designs to construction, and from project pitches to project management, VR is enabling companies to make better buildings and to make them faster. And by pre-visualizing a building’s exterior and interior, the construction company can cut down on costly mistakes while allowing the building’s owner to make tweaks during construction rather than after. Eliminating skylights, changing finishes, and moving doors are all far less costly before installation than after.

Growth Despite Key Challenges
IDC has forecast robust growth in all of the areas above, as well as a long list of others. And this growth is occurring even though a great deal of the early work is happening on the consumer-grade hardware available in the market today. Suffice it to say, products designed for use by consumers aren’t rugged enough for long-term deployments in commercial settings. This lack of commercial-focused VR hardware is a clear market need the industry has yet to address.
Later this year I expect the launch of standalone VR products (based on reference designs from Intel and Qualcomm) to gain more traction in commercial settings than in consumer ones. While most consumers have limited need for a VR-only device, companies looking to deploy VR will find the simplicity quite appealing, especially once vendors start building robust, commercial-grade versions.

As the number of hardware options increases and more commercial-centric designs hit the market, the software and services associated with the technology will improve, too. We should also start to see the emergence of more VR standards, which will be key for long-term growth. And in the span of a few short years, VR will be quite well entrenched in many of these vertical markets. This represents a large opportunity for the technology companies that serve these markets, and an outsized threat to those industry verticals that fail to embrace the technology in a timely manner.

Wearables Still Have an Important Role to Play

News this week that Intel has eliminated its wearables team, which was part of its New Technologies Group, is just the latest in what seems like an ongoing drumbeat of bad news about the category. In recent times pioneering brands such as Pebble and Jawbone have departed the market, and the brand synonymous with the category, Fitbit, has faced a string of challenging quarters. But while broad interest in the category may be in a lull, I’m convinced the technology still has an important role to play going forward, across a wide range of use cases. Companies that throw in the towel now may regret doing so down the road.

Outsized Expectations
When wearables first started to gain attention in the market, too many people, industry analysts included, attached huge expectations to the category. Major brands such as Samsung entered the market with great fanfare. Google rolled out Android Wear with numerous partners. And Apple launched the Apple Watch. The hype reached a fever pitch as people and companies, looking for the next big thing after smartphones, pinned their hopes on wearables. The problem: too many were unwilling to accept that we’re not likely to ever again see a tech product that ships as many units, or impacts the world as greatly, as the smartphone.

And so, for many, the wearable market seems to be a bust, unable to live up to the unreasonable expectations placed upon it. However, while it is certainly true that many of the early brands that entered the market have fallen on hard times, the market itself has actually continued to grow at a reasonable clip. In 2016, IDC estimates the total wearable market, which includes everything from fitness trackers to smartwatches, smart ear-worn devices to smart clothing, grew to nearly 105M units for the year. That’s a year-over-year increase of 27%, with revenues that totaled about $16.3B. And the market will continue to grow for the foreseeable future. By 2021, IDC predicts the market will hit 240M units with revenues well north of $37B.
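
For context, simple arithmetic on those two IDC unit figures implies a compound annual growth rate of roughly 18% between 2016 and 2021:

```python
# Implied compound annual growth rate from the cited IDC unit forecasts.
units_2016, units_2021 = 105e6, 240e6
cagr = (units_2021 / units_2016) ** (1 / 5) - 1
print(f"implied unit CAGR, 2016-2021: {cagr:.1%}")  # -> 18.0%
```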

New Expectations, New Opportunities
While many in the industry have moved on from wearables, Apple is clearly still focused on the category. Although watchOS didn’t receive much stage time at WWDC, the company continues to build out the platform, and there is no doubt we’ll see more hardware down the road. And the watch isn’t the only body-worn product Apple has in the market. Its AirPods, on back order since launching at the end of 2016, have quietly become many Apple fans’ favorite new product. What makes the AirPods special isn’t just the elimination of wires but the custom Apple silicon on board, which brings a level of interaction capability you can’t find in just any Bluetooth headset. The number of new features and capabilities that Apple could eventually tie to AirPods is sizable. And it points to one of the luxuries of being Apple: it can play the long game. It doesn’t have to make outsized profits from Apple Watch or AirPods to keep the products in the market and evolving.

I’m convinced that Apple believes what I do: That there are still numerous opportunities for wearables in the markets where they currently play (Jan Dawson’s recent column illustrates this well). And perhaps more importantly, the category will play a key role in enabling other new technologies and capabilities down the road.

Near term, in addition to the evolving story around health and fitness, I expect wearables to play an increasingly important role in biometrics, security, and digital payments. Longer term, however, is where things get even more interesting, as wearables are likely to play an important role in the evolution of human-to-machine interfaces. I’m especially excited about the opportunities that will present themselves as augmented reality technologies come to fruition. In the future, we’re likely to interact with technology in a wide range of ways, using our eyes, our ears, our voice, and our hands. Wearables may prove to be the single best way to capture much of this input.

In other words, if you don’t believe tapping on glass is the best we can do in this regard, then you should continue to watch the wearables space. Vendors may come and go, and the volumes won’t come close to those of the smartphone, but the category may well be an important predictor of the future.

Has the Tablet’s Window of Opportunity Closed?

I’ve been testing Apple’s latest iPad Pro 10.5-inch tablet. It’s a very good piece of hardware, and when iOS 11 moves from beta to full release later this year, the software will represent a significant leap forward, too. With the launch of this product, Apple jumpstarted the debate about whether an iPad can replace a Mac or PC. But as good as the iPad Pro is, I can’t help but think that all the hand-wringing about tablets versus notebooks is just misplaced angst. Today’s users have already chosen their platform, and future generations will likely choose neither, opting instead for increasingly powerful smartphones that will usher in brand new ways of computing.

iPad Pro: Accelerated Iteration
Apple launched the first iPad in 2010, and just seven years later this new iPad Pro represents a stunning amount of product evolution. The A10X Fusion chip offers processing power on par with some PC CPUs. The 10.5-inch screen includes new True Tone and ProMotion technologies. The first calibrates the screen’s colors based on ambient light conditions. The second ramps the screen’s refresh rate up or down based on the content, and it also makes the optional Apple Pencil feel even more natural than before. The optional Smart Keyboard case makes it possible to bang through typing chores much faster than tapping on glass. As with previous iterations, the iPad Pro continues to offer plenty of battery life, and none of these new features diminishes that. My unit includes LTE, which means that, unlike every PC or Mac I’ve ever owned, the iPad Pro is always connected.

The new features coming in iOS 11 are too numerous to list, but many of them are focused on making the iPad more productive. I’m running the public beta, and capabilities such as a viewable file system, support for drag and drop, and improved multitasking mean that I can accomplish more things than ever before on the iPad. But for me, it still can’t replace my notebook.

I’m a long-time tablet fan, and I use an iPad every day, usually after work, to consume content. I’m hooked, and will likely use a tablet in some capacity for the rest of my days. There are clearly many others like me, but I do wonder if the confluence of events back in 2010 and 2011 that caused many of us to pick up a tablet (namely, a PC market where innovation had slowed to a crawl and a smartphone market made up of products with sub-4-inch screens) has now passed. With both of those challenges addressed, where does that leave the tablet in terms of attracting new users?

What’s Next?
Apple CEO Tim Cook has long argued that an iPad is the best computer for most people because it is less complex, and therefore easier to use, than even a Mac. It’s a compelling idea, but one whose window of opportunity may have already closed. Many long-time PC and Mac users may love their iPads for consuming content, but ultimately even the new iOS 11 imposes too many restrictions for people who have lived with the freedom of a full desktop OS. For these folks, the tablet is additive at best. And in emerging markets where the PC isn’t well entrenched, people have already chosen the smartphone as their primary computing device. It has a large screen, plenty of compute power, and it’s always connected.

Ultimately, I expect the smartphone, or some future iteration of it, to replace both the PC and the tablet. In the near term, that means phones will likely take on more desktop-like capabilities when needed, but longer term it means more fundamental changes. Eventually, augmented reality technologies will mean we’re no longer tapping on glass or staring at 5-, 10-, or 15-inch screens. Some think standalone AR devices will replace smartphones, but I tend to think the smartphone will still power most of these experiences. In fact, I’d argue that at some point smartphone screens may even begin to shrink, and eventually disappear altogether, replaced by accessories worn on the wrist, ears, and eyes that serve up a wide range of augmented operating environments and experiences. Apple will likely lead this charge with iterations of today’s Apple Watch, AirPods, and its long-rumored glasses.

Of course, not everyone will be interested in embracing the smartphone as the one device to rule them all. Which means there will still be a place in the market for notebooks and tablets for years to come. And so Apple’s focus on making the iPad more capable today certainly isn’t a wasted effort. However, it is hard not to see the smartphone as the ultimate computing platform of the future.