At OC6 Facebook Pushes Virtual Reality Forward

The buzz around Virtual Reality has faded for many, but this hasn’t dampened Facebook’s enthusiasm for the technology. After spending two days at the company’s Oculus Connect 6 conference, it’s easy to see why Mark Zuckerberg and the Facebook team still see a bright future for VR. They talked about the recent success of the Oculus Quest headset and made a handful of interesting announcements around better content availability, upgrades to existing hardware, and the path to increased commercial adoption. The company also announced its next attempt at a VR-based social app, called Horizon. I found Facebook’s story around three of those four topics compelling.

Bringing Go and Rift Content to Quest
During his opening keynote, Zuckerberg said that sales of the Oculus Quest—Facebook’s newest standalone VR headset—have exceeded the company’s expectations. That’s not a terribly meaningful metric for the outside world, but I can tell you that in IDC’s 2Q19 numbers the Quest helped Facebook capture more than 38% of a worldwide market of roughly 1.5 million headsets, with astounding year-over-year growth.

Perhaps more importantly, he and other executives went on to note that people are using the new headsets more often, and for longer periods of time, than any of the other Oculus headsets. This includes the lower-cost standalone Oculus Go and the PC-tethered Oculus Rift and new Rift S. Sales of software on the Quest are also taking off: Facebook announced that Quest titles already account for about 20% of the more than $100M in Oculus app sales to date.

To feed that growth, Facebook is taking steps to make sure that existing content for both the Go and the Rift will run on Quest in the future. When I discussed my initial experience with the Quest, I lamented a good, but not great, app selection. In addition to trying to bring more developers into the ecosystem, the company is also working to bring more existing content to the Quest.

The first step—bringing Go content, much of which began life as content for the Samsung-made Gear VR—is a straightforward process, as the earlier hardware is less capable than today’s Quest. As of September 26, Facebook has made many of the most popular Go apps available on the Quest.

Even more interesting: Facebook is working to bring popular apps that run on the Oculus Rift and Rift S to the Quest. Those apps rely on the additional horsepower of a connected PC, while the Quest runs on a less powerful mobile processor. To address this, Facebook announced a new technology called Oculus Link, which will let Quest owners connect their headset to a PC using a USB Type-C cable to tap the PC’s processing and graphics.

Frankly, it’s a brilliant move. I’d argue that today’s Quest is among the best VR experiences on the market, but its mobile chipset means it can’t drive the same level of resolution as a tethered system. The upside is that the Quest feels more immersive because you’re not constantly dealing with a PC tether. With Oculus Link, Quest owners get the best of both worlds: the ability to play in standalone or tethered mode. Better still, the upgrade will be free. Quest owners will need a high-quality USB Type-C cable to make it work; while many of us have such cables, few will have one long enough for the job. Facebook will sell a premium cable when the upgrade rolls out in 2020.

VR Without Controllers
Perhaps the most notable technical announcement at the show was that in 2020 Oculus will bring hand-tracking technology to the existing Quest headset utilizing the existing integrated cameras. That’s right, no additional add-on cameras or room sensors required. During the keynote, and in later track sessions, various Facebook executives lauded the numerous advantages of hand-tracking over today’s touch controllers. Chief among them: a more frictionless input modality, improved social presence, and enhanced self-presence. Yes, when you can see your own hands—sans controllers—the entire experience is more immersive.

In addition to enabling interesting new gameplay options and improved social interactions, hand-tracking offers advantages for soft-skills enterprise training, something I heard about from several large companies. For example, when learning how to handle challenging HR situations in VR, the training is more lifelike when the trainee isn’t holding touch controllers during the exercise.
There will still be plenty of VR situations where handheld controllers are necessary, or even preferable, but bringing hand-tracking to the Quest has the potential to drive a big shift in how people use the headset. And I’m heartened to see Oculus leveraging the existing hardware and bringing the feature to existing customers without requiring new accessories. The company’s willingness to continue iterating on the platform will make both consumer and commercial buyers more willing to invest in the Quest.

Focus on Commercial Use
Until now, I’d say that the folks at HTC’s Vive team have done a better job of telling a strong commercial story, especially as it iterated on the Vive Pro hardware to bring more training-centric capabilities. At Connect, however, Oculus talked a great deal about Oculus for Business, its commercial platform currently in beta and set to launch in November. The company is putting together a very compelling commercial story that will take advantage of the strong capabilities of the Quest hardware.

I sat in on a session where Oculus walked attendees through the steps necessary to take a company from proof of concept to pilot to deployment. I’ll write more about this in a future column, but for now, I’ll say Oculus is asking the right questions and thinking about all the right things. Perhaps the biggest challenge it faces is convincing enterprise organizations that it knows how to help deploy and manage commercial hardware and software at scale.

To that end, Facebook announced that it would have a trusted channel partner network that includes a shortlist of big companies in that space, including Synnex and Tech Data. Perhaps just as importantly, it is also currently working on standing up an ISV (independent software vendor) program with existing companies that have experience in deploying commercial VR solutions, have handled pilots or deployments with Global 2000 customers, and have experience with VR training, simulation, or collaboration use cases. In other words, Facebook has realized that it doesn’t have all the answers when it comes to commercial VR, and it’s wisely looking for the best partners to help it address the areas where it still needs to learn.

On the Horizon
Facebook made a lot of interesting announcements this week at Oculus Connect. And the 90-minute session by John Carmack, which included an unvarnished postmortem on what went wrong with the Gear VR, is a must-listen for anyone who follows this market. But the one announcement that failed to land with me is the one where Facebook is likely spending the most money and effort: the launch of Facebook Horizon.

Facebook describes Horizon as “an interconnected and ever-expanding social VR world where people can explore new places, play games, and build communities.” It sounds interesting, on paper. But even watching the video, which shows increasingly lifelike (although still legless) avatars interacting in a perfectly rendered digital world, I kept thinking, “but what are you going to do in there?” It’s not clear to me that Facebook knows the answer to that.

Facebook Horizon will launch into beta in 2020, and the company hasn’t said when it would roll out to a larger audience. I’ll withhold judgment on Horizon until I’ve had a chance to experience it myself. Perhaps the company will surprise me. I did, however, think it was telling that during his talk even Carmack acknowledged Facebook continues to struggle to figure out the right way to do social in VR.

In the meantime, I’m very much looking forward to seeing Facebook roll out all the other updates to Oculus Quest. More apps, support for PC tethering, and integrated hand tracking should all help drive the market forward in meaningful ways. And I’ll be watching as the company ramps up to launch Oculus for Business later this year.

Apple’s Aggressive Pricing on New Services Reflect Their Strategic Importance

Apple led off its September keynote this week by offering a closer look at its two new upcoming services, Arcade and TV+. The company invited developers on stage to show off some of the new games coming to Arcade, which launches September 19th, and showed previews of some of the shows coming to TV+ when it launches November 1st. The company also announced that each service will cost $4.99 a month (after a week-long free trial) and that people who buy a new iPhone, iPad, iPod Touch, Apple TV, or Mac will get a free year of TV+. This aggressive pricing, especially around TV+, reflects just how important the success of these new services is to Apple.

Apple Arcade
Apple’s new gaming service will let players access games on their iPhone, iPad, iPod Touch, Mac, and Apple TV. It includes more than 100 new games exclusive to Apple Arcade, and it will first roll out September 19th with iOS 13. It will come to iPadOS and tvOS on September 30th, and to macOS Catalina in October. Apple has also announced support for third-party controllers, including Xbox Wireless Controllers with Bluetooth, PlayStation DualShock 4, and MFi game controllers.

Apple Arcade is interesting in that, unlike other gaming services on the market, Apple has focused on serving casual gamers rather than hardcore players. This has the potential to cut both ways, as it represents a much larger total available market, but those players are—by definition—less invested in their gameplay. That said, I’ve always questioned the validity of the label “casual gamer.” Most people who play a game do it to win, and that can mean coming back to a game over and over until they’ve completed it. Just because it’s a “casual” game doesn’t mean people approach it casually.

That’s clearly one of Apple’s goals with Apple Arcade: to offer a wide range of top-shelf games in the hopes that a few catch on with each player. This makes the subscription cost a no-brainer for anyone who wants access to those few games to kill time while waiting in line at the DMV, stuck waiting for their kid’s school to let out, or sitting on a plane waiting to take off. By pricing the service at less than $5, I imagine a significant percentage of players will set it and forget it.

Apple TV+
Perhaps more surprising was the $4.99 price point for TV+. Unlike Apple Arcade, there are plenty of services comparable to Apple TV+, from Netflix to Hulu to Disney+ and more, and they all cost more than five bucks. Plus, as noted, when you buy new hardware from Apple, you get a year-long subscription for free. As others have noted, the free-year-with-purchase means TV+ will scale to huge numbers nearly automatically over its first year, an advantage that can’t be overstated. And by pricing the service very aggressively versus its competition, Apple is hoping to bring in non-Apple hardware buyers to take a look, too.

And what they will find is a large and growing slate of all original programming served up without advertising. Apple’s early track record with original video content is spotty at best, but the company has new shows on tap with big-time Hollywood stars such as Jennifer Aniston, Steve Carell, Reese Witherspoon, Jason Momoa, Snoopy, and even Oprah Winfrey. What it doesn’t have is a back catalog of old shows, a key aspect of the success of other streaming services.

Just like Arcade, Apple knows that subscribers to TV+ don’t have to like every show on offer, but if they love one or two, then they’re going to stick around. That said, unlike games that can be consumed in small, bite-sized bits when you have a few minutes here or there, video content requires a commitment. People will only make that commitment if the quality of the content is good, and so a lot will be riding on Apple’s first slate of shows.

Show Me the Bundles
One thing that Apple didn’t announce at this week’s event was any bundled package of services. Now that we’ve seen the pricing for all of its offerings, and added Arcade and TV+ to the roster, I’d very much like to see Apple offer customers a package inclusive of other services such as News+ and Apple Music. I’d even roll in iCloud storage. Make people a great deal on this broader set of services, and many will take the leap. This has the added benefit of getting people to use Apple services they might not have seriously considered buying individually in the past.
And the ultimate package: Apple’s full suite of services, plus hardware as a service rolled in. This could include a new iPhone, new iPad, and/or a new Mac on a regular one, two, or three-year cadence, inclusive of Apple Care support. This isn’t the type of package for everyone, but I’m convinced there is a market of “all in” Apple users that would welcome the ability to pay a single monthly fee to have everything just taken care of for them.

Bundles or not, Apple’s current announcements show just how seriously the company is taking its entrance into these new service categories. Apple and Wall Street expect these services to drive real growth for the company over time.

If things go well, I expect at next year’s September event Tim Cook will spend some time talking about how many subscribers Apple’s new services have acquired. Two key things to watch for specific to TV+: How long does Apple keep the subscription at $4.99, and will it eventually turn off the free year to Apple hardware buyers? When and if those both happen, you’ll know Apple considers the service a success that can stand on its own.

Lenovo Continues Its Intelligent Transformation

When most consumers or IT buyers think about Lenovo, they likely think of a company best known for its iconic ThinkPad line of PCs. Lenovo is still very much a personal computing device company—it owned the top worldwide market share in the category in 2Q19—but over the past few years, it has moved aggressively to become more than a vendor that ships lots of PCs. It hasn’t always been a smooth ride, but the company’s most recent earnings call shows that its move to diversify its hardware lineup, its willingness to jump into new categories, and its focus on software and services are all paying off.

Shifting from High Volume to High Profitability
Everyone knows the PC market has experienced a long-term decline. Today the consumer segment of the market continues to contract, while the commercial segment has largely stabilized (thanks at least in part to the upcoming end of life for Windows 7). For many years, Lenovo focused on shipping as many PCs as it could produce in a quarter, including low-priced boxes with low profit margins that helped drive up its market share.

Over the last few years, Lenovo has aggressively shifted from a focus on pure volume to profitability. It moved to embrace the growth in consumer PC gaming by launching its Legion gaming brand to compete with industry stalwarts such as Dell’s Alienware and ASUS’s Republic of Gamers. It refocused its attention on the small-volume-but-high-priced workstation market, with good results. And it redoubled its focus on creating thin-and-light notebooks for both consumer and business users.

The result: Over the past few years the company has, on occasion, ceded the top market-share spot to HP. However, this past quarter it not only regained the top spot, but it did so while growing revenue by 14.3% year over year. Also notable: The company has managed to spread the revenue from its PC and Smart Devices group into four nearly equal buckets across geographies, with about one quarter each in China, Asia Pacific, Americas, and EMEA, with each geography delivering more than $2B in revenue per quarter. That level of balance is hard to achieve and helps insulate the company from volatility in any one region.

Charging into New Categories
While Lenovo has smartly rebalanced its PC portfolio to focus on profitability, it’s also worked hard to fix its deficiencies in key categories and has moved aggressively into new market segments. It famously purchased Motorola from Google in hopes of becoming a major smartphone player. While that acquisition hasn’t yielded the results the company expected, its mobile business has finally reached a low level of profitability. Motorola likely won’t ever be a top-tier vendor in terms of volumes, but Lenovo has focused its attention on key markets such as Latin America and North America with mid-priced phones it can profitably produce. It’s a strategy that Google echoed with its recent Pixel 3A launch, and it’s working for both companies.

To address the growing data center opportunity, Lenovo acquired IBM’s x86 server business. That acquisition has also taken time to pay off, and often rocky market conditions have made that even more difficult. However, Lenovo has made a series of bold moves—including bringing more manufacturing in-house—that have led to strong growth over consecutive quarters. Moreover, those moves have positioned the company for success in the long term.

In addition to mobile and data center, Lenovo has moved aggressively into new categories. Three of note: smart home, virtual reality, and augmented reality. In smart home, Lenovo has launched everything from smart speakers to light bulbs. I have the company’s Google Assistant-based Smart Display, and it’s a well-designed, useful product. Lenovo also jumped into the VR pool with both a Windows Mixed Reality tethered product as well as a Google Daydream-based standalone headset called the Mirage Solo. The company also leveraged the consumer-focused Solo into an education product.

Perhaps the most exciting new area of focus for Lenovo is augmented reality. The company has already shipped a consumer-focused AR product (Star Wars: Jedi Challenges), but the more interesting play is its upcoming ThinkReality product line. The latter includes both an enterprise headset as well as a software platform that lets companies deploy and manage headsets and more easily create purpose-built augmented reality apps. The ThinkReality software platform—due out later this year—is important because it represents a fundamental shift in Lenovo’s thinking: the software will even run on other companies’ AR hardware (such as Microsoft’s HoloLens).

Focus on Device as a Service
During the recent earnings call, Lenovo executives noted that software and services now make up about 5% of the PCSD group’s revenue. One of the driving factors for that growth has been a strong focus on Device as a Service. I’ve written extensively about DaaS here and here, and I continue to see this as a strong opportunity in the market. As such, it was good to hear Lenovo’s corporate president and COO Gianfranco Lanci talk about the company’s focus here. Lenovo is uniquely positioned to play in this space as it has a full stack of hardware, from phones, tablets, and PCs, to servers and AR and VR headsets.

Lanci noted that in the quarter just completed DaaS contributed to the company’s software and services revenue number. More importantly, however, he said the company has a big pipeline of DaaS deals in the works that should drive growth for many quarters to come. DaaS represents a great opportunity, as it shifts the business relationship from simply selling a device to one where Lenovo provides ongoing services in exchange for recurring revenue.

The result of all of this work was a record-breaking revenue year for Lenovo. As we head into the back half of this calendar year, I will be watching carefully as the company moves to fully launch its AR platform, navigates the last official months of Windows 7 support, and addresses the ongoing challenges of the U.S.-China trade war. Other tech companies’ transformations may get more press, but I’d suggest that Lenovo’s may prove to be equally instructive over time.

The Increasing Importance of Apple’s iPhone Trade-in Program

One of the many advantages Apple has versus its competitors in the smartphone market is the fact that its iPhones hold value longer than most other products in the market. As the overall market has matured, shipment growth has slowed, and prices of top-tier smartphones have increased, this advantage has become increasingly important. Apple even discussed it during its most recent quarterly earnings calls. Despite this attention, however, I believe competitors and investors still underestimate just how important the iPhone Trade-in Program is to Apple’s ability to drive new-phone purchases, secondary-market revenues, and continued installed base growth.

Driving Sales of New iPhones
During the recent earnings call, CEO Tim Cook answered a question about the effectiveness of the trade-in program in retail this way: “In retail, it was quite successful…And trade-in as a percentage of their total sales is significant, and financing is a key element of it.” A quick visit to Apple.com’s iPhone page or a carrier site such as Verizonwireless.com quickly demonstrates how crucial the trade-in programs have become: It’s the first item you’ll often see on the page.

On Apple.com, if you click through to the trade-in offer page, you’ll find that if you trade in an iPhone 8 you can get an iPhone XR for $479 (down from $749) or $20 per month with financing. Alternatively, you can get an iPhone XS for $729 (down from $999), or $31 per month. And it’s not just the latest iPhones that hold residual value. For example, the iPhone 6 will still net you $100 off those same phones. You can get similar deals at Verizon (the carrier also offers trade-ins for a handful of Samsung, Google, and LG phones). Of course, the other major U.S. carriers are also offering comparable deals on their sites and in their stores.
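The pricing above implies a consistent credit for a given trade-in phone. A quick sketch, using only the prices quoted in this column (the helper function name is mine, purely illustrative), shows that an iPhone 8 nets the same $270 credit whichever new model you choose:

```python
# Effective trade-in credit implied by Apple's quoted pricing.
# Prices are the figures cited above; the function is illustrative.

def trade_in_credit(list_price, price_after_trade_in):
    """Credit applied when an old phone is handed over at purchase."""
    return list_price - price_after_trade_in

# iPhone 8 traded toward an iPhone XR: $749 list, $479 after trade-in
credit_xr = trade_in_credit(749, 479)

# iPhone 8 traded toward an iPhone XS: $999 list, $729 after trade-in
credit_xs = trade_in_credit(999, 729)

print(credit_xr, credit_xs)  # → 270 270
```

In other words, the credit tracks the old phone's residual value, not the new model chosen, which is exactly how a resale-driven program should behave.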

These trade-in options, combined with financing, are how the industry accommodated consumers when the subsidy model of mobile-phone buying went away. Years ago, to capture the residual value of a phone, consumers had to resell the device themselves or trade it in to a third party such as Gazelle. Today, it’s pretty painless, and that removal of friction has helped make trade-ins a key element of most people’s phone-buying experience.

Driving Secondary-Market Revenues
Back in 2016, I talked about the fact that Apple was selling refurbished phones on Apple.com. At the time, the vast majority of phones Apple was selling itself were likely coming in via the iPhone Upgrade Plan that let customers turn in their iPhone for a new one every year. That program is still likely a source for refurbished phones on Apple’s site, but the dramatic increase in regular trade-ins is undoubtedly driving an increasing percentage of the volumes.

A quick spin through the refurbished phones available on Apple’s Web site is instructive. The company currently has a good selection of phones, including iPhone 7 Plus, 8, 8 Plus, and X in a range of colors. Alongside the refurbished price, Apple lists the new price, which makes it easy for buyers to see their savings. For example, a refurbished iPhone 7 Plus with 128GB of storage in Rose Gold sells for $569, or $100 less than the current new price. At the other end of the spectrum, a refurbished iPhone X with 256GB of storage in Space Gray sells for $899, or $150 off the new price.

Apple’s refurbished stock is collected, cleaned, and inspected, and each iPhone gets a new battery and shell, which makes Apple’s offerings notably better than some others out there. These expenses, combined with the acquisition cost (the trade-in value to the consumer), represent the cost to Apple to bring these refurbished products back to the market. The difference between these costs and Apple’s selling price is profit. Let’s use the iPhone X as an example. Apple offers $450 in trade-in value for this phone, pays the cost to refurbish the unit, and then resells it for $899. The company is making good money on its refurbished iPhone sales. And remember, this is the second time Apple has profited from the sale of this phone.
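The second-sale arithmetic in that iPhone X example can be sketched as follows. Apple does not disclose its per-unit refurbishment cost, so it is left as a parameter here (an assumption, not a published figure):

```python
# Rough second-sale economics for a refurbished iPhone X, using the
# figures cited above ($899 resale, $450 trade-in value). Apple's actual
# refurbishment cost is not public, so it's a placeholder parameter.

def refurb_gross_profit(resale_price, trade_in_value, refurb_cost):
    """Profit on the second sale: resale minus acquisition and refurb costs."""
    return resale_price - trade_in_value - refurb_cost

# With refurb cost set to zero, this is the ceiling on the second-sale profit.
ceiling = refurb_gross_profit(899, 450, 0)
print(ceiling)  # → 449 (before refurbishment costs)
```

Even with a generous allowance for the new battery, shell, and inspection, there is clearly meaningful margin left in that $449 spread.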

Positive Impact on Installed Base
The final positive aspect of Apple’s iPhone Trade-In Program and the sales of refurbished phones is that it helps grow Apple’s installed base. As services become an increasingly large percentage of Apple’s revenue, I can’t overstate the importance of a strong and growing installed base for Apple. Tim Cook mentioned this on the earnings call, too.

“Installed base is a function of upgrades and the time between those,” Cook said. “It’s a function of the number of switchers coming into iOS, macOS—and so forth—tents. It’s a function of the robustness of the secondary market, which we think overwhelmingly hits an incremental customer.” He went on to note, “The secondary market is very key, and we’re doing programs et cetera to try to increase that because we think we wind up hitting a customer that we don’t hit in another way.”

In other words, every time Apple reclaims an iPhone from an existing customer who trades it in for a new one, the company not only retains that customer in the installed base, but it potentially adds a new one when it resells that refurbished phone. And because the refurbished phone costs less than a new phone, Apple is reaching more cost-constrained customers than in the past. This means there are more people in the iOS ecosystem to buy and use Apple’s current services such as iCloud storage, Apple Pay, Apple Music, and News+, as well as upcoming services such as Apple TV+. These customers may not be as likely to spend freely as Apple’s traditional customers, but every new user is a potential source of additional revenue.

Bottom line: with the iPhone Trade-in Program, Apple has rather masterfully addressed the inevitable challenge of a slowing smartphone market. It makes the high cost of acquiring a new iPhone more tenable, allows Apple to capture a good chunk of the old iPhone’s residual value, and helps the company continue to build out the iOS installed base. That’s a win-win-win, and I expect to hear Apple talk even more about this going forward.

A Positive 2Q for PCs, Driven Partially by Tariff Fears

According to IDC’s preliminary results, the traditional PC market did better than expected during the second quarter of 2019. Shipments of notebooks, desktops, and workstations into the market grew about 4.7% year over year, to hit nearly 65 million units during the quarter. Typically, this would be good news for a market that has seen more than its fair share of struggles in recent years. Unfortunately, this unexpected growth wasn’t driven purely by market demand. Some of it was the result of vendors operating with a high level of fear, uncertainty, and doubt about the status of the U.S. trade war with China and the potential impact of an escalation that could lead to tariffs on finished PCs.
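For context, the figures above imply a prior-year base of roughly 62 million units. A quick back-of-the-envelope check, using only the approximate numbers quoted (so the result is itself approximate):

```python
# Back-of-the-envelope: infer the prior-year quarter's shipments from
# IDC's preliminary 2Q19 figures cited above (~65M units, ~4.7% growth).

shipments_2q19 = 65.0   # million units, approximate
yoy_growth = 0.047      # ~4.7% year over year

shipments_2q18 = shipments_2q19 / (1 + yoy_growth)
print(round(shipments_2q18, 1))  # → 62.1 (million units, roughly)
```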

The Good News
The preliminary results showed growth across a wide swath of countries and regions. Of note: Canada continued its 12-quarter growth streak with a whopping 11% year-over-year gain, Japan continued to grow its commercial segment, and India grew thanks in large part to a huge education tender that drove more than 1 million units during the quarter. The U.S. also returned to growth after a slow start to the year.

There were three primary positive drivers during the quarter: shipments for the back-to-school season, an easing of supply constraints from Intel on processors, and the upcoming Windows 7 end of life. On January 14, 2020, Microsoft will no longer provide security updates or support for Windows 7 PCs. Unlike the Windows XP EOL, which drove massive shipment gains back in 2014, companies are much further down the road to moving off the old operating system. So while this transition drove some volumes in the second quarter, and will positively impact the second half of the year, we don’t expect to see the huge increases we saw around XP.

The Bad News
Unfortunately, at least part of the shipment volume increases in the second quarter was the direct result of the ongoing trade tensions between the U.S. and China. While this primarily impacted those two countries, the issue permeated the entire industry. In the U.S., it appears some vendors shipped higher-than-needed volumes into the channel because they feared the U.S. would implement its List 4 tariffs, which would directly impact finished PCs.

At present, we don’t have a great sense of just how much oversupply is in the system, and it certainly varies by vendor. Of course, we now know the U.S. administration didn’t implement this escalation during the quarter. However, the threat remains, and this could lead vendors to continue to overstuff channels to beat the system in the second half of the year. So we could see more artificial growth in the months to come.

In China, the impact of the tariffs has caused the opposite result: Commercial organizations are holding off on needed PC purchases due to the resulting slowdown in the Chinese economy. A large number of companies there are likely to continue to hold off on purchases while they wait to see how the rest of the trade war plays out.

Looking Ahead
Tim Bajarin recently wrote about tech manufacturing moving out of China, and that’s an ongoing topic of discussion throughout the PC market. PC vendors and their ODM partners are looking for options to move some or all of their manufacturing out of China, and some have already begun the process. These companies are looking at both new countries as well as places that were previously major producers (such as Taiwan). While many companies had been looking at moving some manufacturing out of China due to rising costs there, the trade war has forced them to divert resources toward speeding up this process. If a trade war escalation causes these companies to make drastic moves, the result could be a disruption in supply.

The other underlying challenge for manufacturers looking to move out of China is the fact that there’s a chance that the U.S. administration could, in turn, levy tariffs on additional countries. For example, some vendors have looked at moving some manufacturing to Mexico, only to have the administration threaten (and then shelve) tariffs on that country.

Ultimately, all of this uncertainty results in companies having to divert resources, which results in a negative drag on a PC market that’s been trying to return to sustainable growth for years. While the threat of tariffs may drive short-term shipment growth as companies try to “beat the clock” in a given quarter, oversupplying channels isn’t a sustainable business model. With no clear end to tensions between the U.S. and China in sight, expect these issues to continue to impact the PC market for the foreseeable future.

Oculus Quest: Amazing Hardware, But Still Missing Mainstream Killer App

I’ve been testing the new Oculus Quest virtual reality (VR) headset from Facebook, and I’m quite impressed by this piece of hardware. Facebook took what it learned from shipping its earlier products—the high-powered, PC-tethered Oculus Rift and the less-expensive, standalone Oculus Go—and put together a $399 standalone headset that offers a very good VR experience. It’s not perfect, but from a hardware perspective, it ticks off many of the boxes needed to help move VR toward a mainstream audience. Unfortunately, the Quest is still missing a vital piece of the puzzle needed to bring in the average consumer: mainstream content or a must-have app.

Masterful Hardware Combo

I can’t overstate this: The Quest is a truly impressive piece of hardware. Facebook has pulled together some of the best technologies available, within the limitations of its price point, and produced a very solid product that never feels like a compromise.

At the heart of the headset is Qualcomm’s Snapdragon 835 processor, which brings impressive mobile-computing power within a reasonable power envelope. Facebook leverages the chip to drive an OLED display that offers 1440 by 1600 resolution per eye, with a 72Hz refresh rate. The Quest utilizes inside-out tracking, which means you don’t have to set up external sensors to track the headset and the handheld touch controllers.

Setup is straightforward: Using the Oculus smartphone app, you connect the headset to your home WiFi, download any updates, connect the controllers, and begin. The process is fast, easy, and clever, teaching you what you need to know about the system while you map your space and have a little fun. I was consistently impressed by the Quest’s tracking capabilities (Facebook calls the system Oculus Insight), which held up across a wide range of uses. I also like the integrated audio, which channels sound toward your ears without forcing you to wear headphones (although people nearby may not appreciate hearing what you hear).

Compared to the previous generation Oculus products, the Quest feels more polished and complete. Yes, the Rift (and now Rift S) offer higher processing and graphics performance thanks to the connected PC, but there’s just no getting around the limitations that the physical tether there represents. And, yes, the Go is lighter and a bit more comfortable to wear, but the limitations of that $199 product’s hardware and tracking quickly become evident. The Quest represents an impressive merging of next-generation technology and smart design that leads to the desired outcome: You stop thinking about the hardware and just embrace the VR experience.

Good, Not Great, App Selection

The Oculus Quest represents the best VR technology 2019 has to offer. However, once setup is complete, and it’s time to get down to the business of using VR, things aren’t as rosy. When the Quest shipped in late May, Facebook said there were about 50 apps, notably fewer than what’s available on the long-shipping Rift. It’s hard to get a precise count inside the Quest, but there don’t appear to be many more today. And, unfortunately, hardware limitations mean that many of the titles currently available on the Rift will never make their way to the Quest. Facebook tries to address this scarcity by adding apps such as Oculus TV and Oculus Gallery (which I was pleased to see found my networked Plex app, bringing my stored videos and photos to the Quest). Of course, volume isn’t everything, but after you spend a few days inside the Quest, you can’t help but feel like there’s just not that much content to explore.

Plus, most of that content has a price tag attached to it. App developers deserve to be paid, of course, but on a new platform such as this, where users are casting about for content they want to try, the biggest points of friction are discovery and the upfront cost of the software. Facebook offers what appears to be a fairly generous return policy on some content (although the terms and conditions document is daunting), but more of this content should be free to try. HTC has cleverly addressed the challenge of making content more discoverable, while making sure to compensate developers for their work, with its Viveport Infinity service, which lets consumers pay a monthly or annual fee to access VR apps, games, and videos. That service is available on the Oculus Rift, but not the Quest. Facebook needs a comparable service.

The content, or lack thereof, is really where the Quest falls down. While I’m confident that there’s more on the way, I’m less confident that there’s an inbound app that will shift the Quest from a product that early adopters and some gamers will embrace, to one that mainstream buyers need to have. That’s because, despite the hardware advances, and all the talk from Facebook and others about next-generation experiences, on the consumer side of things, VR remains in a gaming and video playback rut. To date, we’ve not seen the types of apps that move these products from something a few people are willing to buy and use to one that many people are excited to try. Most expected this to be some form of social app, but the current environment around that category may delay that vision.

That said, it’s very hard to get developers creating exciting new types of apps when the hardware installed base for VR remains small. And, of course, the installed base for a standalone product such as the Quest is even lower. One hopes that this product, and those that follow, will help to address this challenge. We are coming ever-closer to the point where the VR hardware is “good enough.” We just can’t say the same about the breadth of VR experiences. Yet.

Looking beyond consumers, I do expect commercial VR buyers to embrace the Quest. As I’ve noted in the past, enterprise buyers are increasingly leveraging VR for a wide range of use cases, led by training. The Quest’s hardware capabilities, combined with Facebook’s ongoing efforts to address commercial users’ pain points, should make this product quite attractive to IT buyers. And success in the commercial space may buy Facebook, and the broader VR category, the time it needs to figure out just what type of app is needed to eventually drive a compelling mainstream VR story.

Apple Shows Pro Content Creators Some Love with New Mac Pro

I attended Apple’s WWDC keynote this week and to say it was overstuffed with important announcements would be an understatement. From key updates to all of its operating systems (including the launch of iPadOS), to new developer tools such as SwiftUI and ARKit 3, to new privacy-focused rollouts including Sign In with Apple, the vibe was one of a company firing on all cylinders with a real sense of confidence and even a bit of swagger. Nowhere was this more evident than in the long-awaited and symbolically important announcement of the new Mac Pro.

A Long Overdue Release
Apple’s Mac Pro has long been a favorite of professional content creators, especially those working in the fields of video editing and computer-generated imagery (CGI). However, the last major Mac Pro launch happened back in 2013, when the company rolled out the current cylindrical version of the product with a starting price of $4,000. It was a bold design unveiled at WWDC that year, and Apple filled the product with unique technology designed to prove the company was still head of the class when it came to innovation. Unfortunately, Apple made some technology bets inside that design that failed to come to fruition, which put it in a difficult position when it came time to refresh the product. And so, instead of major refreshes that would keep it relevant, the product saw minor speed bumps and fell further behind the competition. The product languished for years, leaving many with the impression that Apple had abandoned some of its most ardent users.

Things got so bad that Apple took the unusual step of sitting down with a small group of technology journalists way back in April 2017 to announce that a “completely rethought” version of the Mac Pro was in the works. Apple said it would ship…sometime in the future. More than two years later, Apple has finally announced the new Mac Pro, along with a new high-end monitor called the Pro Display XDR, both of which will ship this fall.

A True Mac Powerhouse
Apple executives left out any cheeky comments about the company’s ability to innovate and let the new Mac Pro, which starts at $5,999, speak for itself. The design returns to a more familiar tower form factor, but a highly modular one focused on accessibility, upgradeability, and—importantly—air flow. Cooling is key here, as the system can support workstation-class Intel Xeon processors with up to 28 cores, up to 1.5TB of memory (via 12 DIMM slots), graphics options including the Radeon Pro Vega II Duo (with two GPU chips and 64GB of memory), and a 1,400W power supply.

Beyond its top-shelf industry parts, the new Mac Pro also includes a custom-designed accelerator card, Apple Afterburner, that ramps up performance for video editors. Afterburner has a programmable ASIC designed to speed the conversion of native file formats, capable of decoding up to 6.3 billion pixels per second. Apple says the card can decode up to three streams of 8K ProRes RAW video or 12 streams of 4K ProRes RAW video in real time.
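Apple didn’t publish the math behind that 6.3-billion figure, but it lines up if you assume DCI frame sizes at 60 frames per second (my assumptions, not Apple’s stated ones). A rough back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the Afterburner throughput claim.
# Assumed (not stated by Apple): DCI 4K/8K frame sizes and 60 fps playback.

def pixels_per_second(width: int, height: int, fps: int, streams: int) -> int:
    """Total decoded pixel throughput across identical streams."""
    return width * height * fps * streams

# 12 streams of DCI 4K (4096 x 2160) at 60 fps
four_k_total = pixels_per_second(4096, 2160, 60, 12)
# 3 streams of DCI 8K (8192 x 4320) at 60 fps
eight_k_total = pixels_per_second(8192, 4320, 60, 3)

print(f"12 x 4K60: {four_k_total / 1e9:.2f}B pixels/sec")
print(f" 3 x 8K60: {eight_k_total / 1e9:.2f}B pixels/sec")
```

Under those assumptions, both scenarios work out to roughly 6.37 billion pixels per second, consistent with the stated ceiling.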

The system has a steel frame and is encased in an aluminum housing that lifts completely away to give full access to the internals. The aluminum housing features unique air-flow channels on the front and back that give the unit a bit of a “super cheese grater” look. Apple carries that look over to the rear of its Pro Display XDR, and it’s not merely a design choice, as the monitor itself has serious cooling requirements.

That’s because inside the 32-inch, 6K display, Apple is using a large array of LEDs to drive an astounding 1,000 nits of full-screen brightness and 1,600 nits of peak brightness. The display supports a P3 wide color gamut and 10-bit color for over 1 billion colors. This is a true professional-caliber monitor that Apple says can compete with other industry products that cost upwards of $40K. The base model of the display starts at $4,999; one with a special matte option will cost $5,999, each without a stand.

Apple spent a fair amount of time talking up the monitor’s new adjustable stand, but when execs revealed that the stand would cost $999, there was an audible negative reaction from the WWDC crowd. This, to me, was among the only Apple missteps of the entire keynote, and it’s really one more of perception than anything else. Apple knows that many professional content creators already have a high-dollar stand, and so the company is wisely offering the display sans stand. I’m certain that if Apple had said the display started at $5,999, or that you could buy it without the stand for less, nobody would have batted an eye. That said, I do find it absurd that to use the display with an industry-standard VESA mount, Apple forces the purchase of a $199 VESA adapter.

Setting the Future Stage
A Mac Pro that starts at $5,999, with a display that starts at $4,999 (minus a stand), is clearly not a product for the average consumer. And that’s the point. With these new products, Apple is showing professional content creators some serious love. Back in 2017, when Apple announced plans for a new Mac Pro, many of us saw that as a good sign, but as time wore on, it became concerning that it was taking so long. How hard, we wondered, is it to build a tower workstation? Apple rewarded that long wait with a true purpose-built system that should deliver world-class performance. Plus, Apple has created a design here that should allow for the type and cadence of hardware refreshes required by this segment of the market.

The other important thing Apple accomplishes with the new Mac Pro is establishing a clear distinction between this product and the rest of its Mac lineup. Why is this important? Because most of us expect Apple to shift the rest of its Mac lineup over to its own A-Series, ARM-based processors at some point in the future. When that happens, and Apple talks up all the benefits of the switch, it’s conceivable that many would point back to this 2019 launch and suggest that, once again, Apple had launched a Mac Pro that was out of step. However, this design—and especially the inclusion of the Apple Afterburner accelerator technology—firmly establishes that no matter what comes next from Apple, the pro-centered end of its lineup will continue to offer a high-powered x86 option. For pro content creators, this is a very good thing.

Shipments and Market Share Matter, Even if Companies Say Otherwise

Apple recently stopped reporting quarterly hardware volumes in its earnings calls. Amazon has, famously, never reported its hardware numbers. Nor has Microsoft (for Surface). In fact, many companies don’t publicly state their hardware shipments, and more than a few suggest the unit number is less important than the revenue number. Obviously, revenue is hugely important, but the world still wants to know: How many did you ship?
Unit volumes are important because they help drive an industry-wide scorecard. We sum them up, and it tells us if the market is growing, flat, or declining. And it gives us important information about the status of the players inside that market and their relative position against the competition. Companies use the numbers to plan their businesses, their marketing, and even their employee bonuses.

Market research companies capture shipment volumes through different methods. At IDC, we use a very resource-intensive one that involves dozens of people across the world. It’s not a perfect system, and we occasionally make mistakes (when we do, we work to correct them). There’s been a fair amount of chatter about our numbers lately, and I thought it might be instructive to talk about our process.

Top-Down Process
IDC tracks new product shipments into the channel. Most of IDC’s tracker products publish quarterly, but the process of collection is a year-round job that we approach from the top down and the bottom up. Let’s start with the top down. Each quarter IDC reaches out to the companies we cover, and we ask for worldwide/regional/country guidance. Our worldwide team collects these numbers and distributes them to the dozens of regional and country analysts around the world. A remarkably large number of companies participate in this process, as they see the value in a third party collecting and disseminating these numbers. We look at these numbers as the starting point, not the finished product. As they say: Trust, but verify.

The process we use to verify is also the one we use to capture shipments for vendors that don’t guide us or report their numbers through earnings calls. This is a multi-pronged approach that includes our world-class supply-side team, our worldwide tracker team, and communication with IDC’s various analysts tracking component shipments.

IDC’s supply-side team resides in Taiwan, but they spend a great deal of time in China. They are in constant contact with component vendors and ODMs that are building the devices for the major vendors. Their relationships here have taken years to build and require frequent face-to-face meetings. The top-line numbers they collect, which include details such as which ODMs are building for which OEMs, deliver a critical fact-checking data point for our trackers, and help us move closer to a market total that includes smaller players (Others) that we don’t track individually.
Meanwhile, the worldwide tracker team is acquiring numerous import/export records from countries around the world. These files are expensive, big, and messy, and our team spends weeks cleaning them to get at their valuable data, which can include details such as SKU-level data and even carrier-destination for smartphones. This data is then passed along to the local analysts.

Finally, IDC’s various component-tracking analysts are collecting their information about processors, storage, memory, and more. These inputs—which obviously lag shipments of finished products—represent a third top-down data point that we use to triangulate on an industry total.

Bottom-Up Process
While the top-down processes are in motion, our regional- and country-level analysts are conducting a bottom-up approach. One of the key steps is to reach out to the regional contacts of the vendors to ask for guidance. These calls help both IDC and the vendors track down possible internal errors in shipment distribution.

In parallel, dozens of local analysts are also accessing localized distribution data. Access to this data varies widely by country. In some places it’s a deep well of important information, in others it’s very basic, and in some places it’s simply not available.

Concurrently, the local analysts are having ongoing discussions with the channel. Like distribution data, the level of inputs here can vary widely. In some places, channel relationships drive a great deal of very detailed information. In other places, the channel plays it close to the vest, and the analyst is forced to do more basic checks. In the end, the channel piece is an important part of the overall process.

Bringing It All Together
The various top-down and bottom-up processes culminate with a mad dash to input data, cross-check that data across inputs, fix mistakes, make new ones, fix those, and then QA the finished product. All to publish, typically, about eight weeks after the end of the quarter. Two weeks later, the same teams update their market forecasts. Another monumental effort, driven by a whole different set of processes.

Is the process perfect? Far from it. Do we make mistakes? Yes, but we try to acknowledge them and correct them. Different firms use different methods, but we feel ours is a good one. Sometimes that means we diverge from the pack in terms of a company’s shipments in a given quarter. If you see us doing so, it’s because we feel our process—and the information we’ve collected—has led us to a different conclusion. I should note that this process is becoming increasingly important as the secondary market for products such as high-end smartphones heats up, and a few companies drive real revenue through the sales of refurbished phones. IDC attempts to track these units in our installed base, but we work to keep secondary phone shipments out of our shipment numbers.

If a company says revenues or margin matter more than shipments, that’s not an unreasonable position to take. Especially in a slowing or declining market. However, you can bet that behind the scenes that company is still closely looking at shipment volumes and market share. In the end, markets need shipment data to track the health of their industry and the relative position of the players inside of it.

Google Tries Again With Pixel 3A

What a long, strange trip it’s been for Google’s branded smartphone, the Pixel. After three bites at the apple with premium-priced products, achieving decidedly mixed product results (and outright poor market results), the company is trying again with new, mid-priced phones. The new $400 Pixel 3A and $479 Pixel 3A XL look compelling, but can they change Google’s fortunes in the market?

Google’s Hardware Journey
For many years, Google worked with vendor partners to bring to market Nexus phones that highlighted what it saw as the best Android experiences. While die-hard Android fans loved the various Nexus products over the years, the phones never shipped in large volumes. In 2011, Google bought Motorola for $12.5B but continued to work with other vendors to ship Nexus phones. Three years later, in 2014, Google sold Moto to Lenovo for $2.9B. In 2016, Google launched the first Pixel phone as a premium-priced offering, shipping the 5-inch Pixel ($649) and Pixel XL ($769).

According to IDC’s Mobile Phone Tracker, Google shipped about 1.8 million units that first year (about 0.1% of the 1.5 billion-unit smartphone market). For reference, first-place Samsung shipped 311.4 million smartphone units that year; Apple shipped 215.4 million. Of course, it’s not entirely fair to compare Google’s volumes to those of Samsung and Apple, two well-established players in the space. More to the point, Google launched Pixel in a limited number of countries. However, when Google executives talked about the Pixel, they clearly saw it as a competitor to the flagship phones of Samsung, Apple, and others. Yet while the company talked a good game about the Pixel, it didn’t do any of the things necessary for real success: distribution, marketing, and carrier outreach were all limited. Most industry watchers assumed this was because the company was taking a slow, measured approach to the market.
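The share figures above are simple division against the worldwide total; a quick sanity check (using the approximate 1.5-billion-unit market size cited here):

```python
# Rough share math behind the 2016 shipment figures cited above.
# The worldwide market size is the approximate total referenced in the text.
MARKET_2016 = 1_500_000_000

shipments = {
    "Google (Pixel)": 1_800_000,
    "Samsung": 311_400_000,
    "Apple": 215_400_000,
}

# Share of worldwide smartphone shipments, as a percentage
shares = {vendor: 100 * units / MARKET_2016 for vendor, units in shipments.items()}
for vendor, share in shares.items():
    print(f"{vendor}: {share:.1f}% of worldwide shipments")
```

That puts Pixel at roughly 0.1% of the market, against about 20.8% for Samsung and 14.4% for Apple.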

In late 2017, Google launched the Pixel 2 ($649) and Pixel 2 XL ($749). The new phones met with a similar market reaction, and the company also ran into issues with screen quality on the XL. While many reviewers began to credit Google for the innovative work it was doing with the Pixel camera, the company continued to limit the countries where Pixel shipped. It also largely stuck with its direct-sales model; for example, it worked with just one carrier in the U.S. (Verizon). Total worldwide sales for the Pixel, Pixel 2, and Pixel 2 XL reached just 3.5M units in 2017, representing 0.2% of the worldwide market.

In late 2018, Google launched the Pixel 3 ($799) and Pixel 3 XL ($899), doubling down on the camera features and bringing to market the amazing Night Sight feature for low-light photography. The new phones once again shipped at premium prices, and Google continued to limit distribution and marketing. Despite largely positive reviews, the company’s volumes in 2018 grew to just 4.6M units, or 0.3% of the worldwide market. In the first quarter of 2019, a tough market all around, Pixel volumes declined year over year.

Once More, With Feeling
This week, Google announced the Pixel 3A and 3A XL. To reach the new, lower prices, Google made some pretty dramatic hardware changes, including lower-end Qualcomm processors and plastic bodies. However, as most reviewers have noted, the new phones still offer Google’s top-of-the-line camera features, which are driven largely by software and not hardware. From a device perspective, the new Pixels look compelling at their price point.

Perhaps more importantly, Google seems intent on actually moving some units this time out. It’s working with more carriers—all four majors in the U.S.—and I’m already seeing some marketing around the phones, which is good. It’s an interesting strategy, if not a completely original one. The Lenovo-owned Motorola has been pushing this angle for several years, with mixed to positive results. The challenge is that the price band where the new products land—between $400 and $500—represents just over 5.5% of the worldwide market, and it’s not growing. Moreover, the volumes are even lower in the U.S. Speaking broadly, the high end and the low end are where all of the smartphone action has been. We’ll soon know if Google’s move to the middle is a smart one, or simply another misfire.

I continue to be somewhat perplexed by Google’s hardware ambitions. Many have suggested that the company continues to play in the hardware market to drive best-of-breed devices, to experiment with the intersection of hardware and software, and to keep its OEM partners on their toes. However, in 2019, with the broader smartphone market slowing or declining in many regions, this seems like folly.

Through the launch of the Pixel 3, it seemed the company might be following a similar playbook to Microsoft’s Surface portfolio, shooting to grab a sizeable chunk of the premium market. However, its marketing and distribution have guaranteed this won’t happen. And now the company is entering the shrinking mid-range space, and it’s hard to see why it would do this unless it were shooting for increased unit volumes. While it is working with more carriers in countries such as the U.S., it doesn’t appear Google is expanding distribution into more countries (at least yet).

In the end, it will be interesting to see how the new Pixel 3A phones do in the market. Regardless of the unit volumes through the rest of 2019, however, I can’t help but continue to wonder what Google’s ultimate goal is with the Pixel. Sometimes I wonder whether it knows the answer itself.

Challenging Smartphone Market Forces Tough Choices

The smartphone market had a tough 2018, and my early discussions with IDC’s supply-side team suggest that in the first quarter of 2019 things went from bad to worse, especially for some of the market leaders. This data point adds some additional context as to why we’ve seen some industry-shaking decisions made by the two market leaders—Samsung and Apple—in the space of the last few weeks.

More Shipment Declines
At IDC we’re still in the early stages of gathering shipment data for the first quarter, and we won’t announce the preliminary numbers for another week. That said, our initial discussions suggest that the market saw some rather dramatic volume declines year over year. It appears market leaders Samsung and Apple each took a major hit, while the Chinese vendors—led by Huawei—fared notably better. I talked about the success of Huawei, OPPO, Vivo, and Xiaomi earlier this year.

Despite its own set of well-documented challenges, Huawei continues its aggressive push to grab market share around the world. If the company’s momentum holds, it should easily move into the number two spot worldwide ahead of Apple this year, and there’s a very strong chance it could challenge Samsung at number one. In addition to increasing its overall number of smartphone shipments, Huawei has also methodically increased its volume of higher-priced, more profitable smartphones.

While Huawei remains locked out of the all-important U.S. carrier market, protecting market leaders Apple and Samsung from its incursion here, its ascension everywhere else should be keeping those companies’ executives up at night. I think this threat, combined with the rapid deceleration of the smartphone market due to extending lifecycles, forced both Samsung and Apple into making some tough choices that could significantly impact the long-term prospects for both.

Samsung’s Galaxy Fold Debacle
I’ve not had the opportunity to test the new Samsung Galaxy Fold, so I’ll withhold any comments on the viability of the product for now (although people whose opinions I trust came away impressed by it). However, the fact that a few tech reviewers ran into serious hardware issues with early versions of the device (some self-inflicted, others not) makes it hard to see this as anything but a serious misstep for Samsung.

The question is, did the company rush the product into production? Did it do so simply to be first to market, or was it something else? There have been smart, pragmatic takes on how Samsung ended up where it did with the Fold. My take: Even though the company spent many years developing the display technology that makes the Fold possible, it did rush the final product out. If it wasn’t rushed, the implications are even more dire for the company’s product creation process. After all, it had already experienced the public relations disaster associated with exploding batteries in the Galaxy Note7, and one would expect it not to repeat a similar mistake. In the end, I believe Samsung was desperate to get the Fold into the market to bolster its reputation as an innovative company and to create a strong counter-narrative to its slumping smartphone shipments.

Unfortunately, these screen issues have likely damaged the company’s ability to sell this first-generation product. It has been suggested the launch delay won’t be a long one, but if that’s the case, any fix would seem to be stop-gap at best. A design that was years in the making, that is fundamentally flawed, can’t be corrected in weeks. Samsung’s saving grace may turn out to be that it never intended to ship high volumes of the product. Samsung has shown in the past the ability to bounce back from issues such as this, but one wonders how many strikes the company will get. The bigger question is whether these issues will have a long-lasting impact on the broader category of foldables.

Apple and Qualcomm
There’s already been a great deal written about Apple and Qualcomm settling their dispute over royalties and licensing and the end of litigation between the companies. This move has already had industry-shaking ramifications, not the least of which was Intel’s decision to exit the 5G smartphone modem space (although it is up for debate which decision came first). I’d suggest, however, that Apple’s ultimate decision to settle with Qualcomm was also driven by the challenging place it finds itself in with regards to 2019 iPhone volumes. Early indications suggest Apple had a bad first quarter, and the outlook for the rest of the year isn’t great, either.

Apple won’t have a 5G iPhone ready in 2019, as up until a few weeks ago it was depending exclusively on Intel’s planned 2020 launch of a 5G modem. Worse yet, it seemed Intel was in real danger of failing to bring that product to market on time. Looking down the barrel of a very tough 2019, and faced with the real possibility that it wouldn’t be able to field a 5G product on time next year, Apple made the only decision it could and settled with Qualcomm. It is hard to imagine Apple making that deal if iPhone shipments were still growing (or even if they were flat). Difficult times require tough decisions.

Bottom line, both Samsung and Apple’s decisions will have wide-ranging impacts on not just the companies themselves, but the broader smartphone industry as well. Has Samsung mortally wounded the foldable category, or only delayed its inevitability? Has Apple’s Qualcomm settlement cemented the latter’s dominant position in the industry, at least for the next few years? It will be interesting to see how both decisions, and their ultimate ramifications, play out over the course of the next few years.

Competition in PC Processor Market Heats Up

IDC’s preliminary PC shipment results for the first quarter of the year showed total volumes down about 3% year over year, which was better than we forecast for the quarter. Strong overall commercial shipments driven by the ongoing shift to Windows 10, especially in desktops, helped offset sluggish consumer volumes. One notable drag on shipments has been the ongoing shortage of Intel processors, especially at the lower end of the market. While Intel is closing in on a fix to its supply-side issues, we’ve seen a resurgent AMD grab share from the market leader with highly competitive new products. Moreover, Qualcomm continues to iterate on ARM-based Snapdragon processors targeting the PC market. The result is a PC processor market that’s more competitive than it has been in years.

AMD Grows Its Share
AMD’s market share in PCs has risen and fallen over the years based largely on the strength of its current generation of silicon and its ability to get PC vendors to leverage the company as a second source to Intel. Today’s AMD is firing on all cylinders with competitive products up and down its product stack, and Intel’s well-documented issues around its shift to 10nm—and the resulting supply constraints—gave the company the opening it needed. PC vendors have responded by utilizing AMD chips in an ever-widening number of systems, including commercial-focused ones. This helped AMD increase its market share in 2018, and I expect it to grab even more share in 2019.

At CES, AMD leaned into the opportunity, announcing a range of new mobile chips including Ryzen 3000 processors for ultrathin and gaming notebooks, AMD Athlon 300 Series for mainstream notebooks, and AMD A-Series for Chromebooks. The company’s move to get into Chromebooks, which has already yielded new products from HP and Acer, is of notable importance. Years ago, the first Chromebooks shipped with ARM-based processors. Intel saw the opportunity around Chromebooks and wisely moved to capture that market with its low-end processors.

The result: As Chromebook volumes have surged in the United States, primarily in K-12 Education but more recently in consumer as well, Intel has captured those significant volumes. In 2017 Chrome OS represented 32% of U.S. commercial notebook shipments and in 2018 that grew to 36%. Its share of consumer notebook shipments grew from 11.7% to 12.4% during that same time. It’s notable that despite its supply-side challenges—which specifically hit the low end of its line—Intel made sure the supply of processors for Chromebooks never faltered. Now, as Google and its partners look to expand Chromebooks into other regions, finally dedicating the resources and marketing needed to drive the same type of success they’ve had in the U.S., the opportunity will become even larger. And AMD is there to challenge Intel for a piece of the action.

Qualcomm’s Snapdragon Efforts
I’ve written about Qualcomm’s foray into PCs in the past, and more recently I’ve been using Lenovo’s Snapdragon 850-based Yoga C630 laptop and, frankly, it’s amazing. I travel frequently, and for certain users like myself the ability to be on the move using my PC all day (and frequently for days) between charges is revelatory. I dropped a Verizon SIM into the unit, and I never have to worry about dealing with connecting to my smartphone or a janky WiFi connection point to get my email. Performance is good, not great, but more than adequate for what I need from a mobile system, and Windows compatibility issues have largely disappeared.

There are plenty of people, including many participating in this market, who are highly skeptical of Qualcomm’s ability to compete against Intel and AMD in the PC space. It will continue to be an uphill battle, for sure. Up until now, there’s been very limited PC vendor support. As a result, there aren’t many options in the market for buyers, and from a market-share perspective, Qualcomm hasn’t really registered, yet. In fact, I’d argue that the vast majority of current users are likely tech analysts like myself, tech journalists testing out the systems for review, and die-hard road warriors who are specifically looking for mobile-focused systems.

That may change later this year, however, as Qualcomm launches its next-generation PC-focused chip, the Snapdragon 8cx. Qualcomm says this 7nm chip will drive notably better performance than its current chips, and I’m looking forward to testing out systems that utilize it. If I have one concern, it’s that Qualcomm and its partners will over-index on performance at the cost of power efficiency, which is what makes these systems so special.

I do expect to see more PC vendor support for this chip, resulting in more choices in the market. At which point it will be up to Qualcomm, its hardware partners, and Microsoft to tell a more compelling marketing story. Part of that story will depend upon their ability to get carriers to make it easier and potentially less costly to utilize LTE (and eventually 5G) on these systems.

Intel Isn’t Standing Still
I’ve been in the tech industry a very long time, and I’ve seen Intel struggle in the past. It’s usually during these times when the company does its best work. The supply issues should abate soon; the company says its 10nm process is back on track, and it recently named interim CEO Bob Swan to the role permanently. With regards to PCs, Intel seems to be focusing much of its efforts on the high-performance area, even hiring industry analysts and tech journalists to help tell its story. That said, of late the company also seems to be shifting at least some of its focus toward telling a stronger story about using Intel products in the data center. It’s a logical choice, and Intel would argue it can do both things well. However, it is hard not to see the data center push as a hedge against possible continued share erosion on the PC side of things.

Ultimately, as we often like to say in the tech industry, all this competition is good for the market and PC buyers. Pay close attention to the product mix when we reach the back-to-school season and keep an eye on new products as we head into the holiday season. Things are about to get even more interesting, and by this time next year, we could see a substantially different competitive mix in the market.

Cloud-Streamed Gaming: More Questions Than Answers

Gaming is an ultra-hot topic in the world of tech right now, made clear by recent big announcements from both Google (Stadia) and Apple (Apple Arcade). Gaming has been one of the bright spots for a challenged PC market, with PC gamers willing to refresh hardware more often than other consumers and to spend more money every time they do it. Gamers spend money in online gaming marketplaces, in a still-thriving console gaming market, and—of course—increasingly on mobile platforms such as Android and iOS. All told, gaming drives huge profits across a wide range of companies, and that’s before we start adding up the dollars associated with eSports. One issue that’s becoming increasingly clear, however, is that in the near term the gaming market is likely to experience some growing pains as new technologies become available and long-standing business models face disruption.

Cloud Gaming, or Cloud-Streamed Gaming?
At IDC we’re about to embark on a very ambitious gaming survey and forecast project. We plan to run surveys in five countries (U.S., China, Brazil, Germany, and Russia), capturing responses from more than 12,000 respondents, including hardcore and casual gamers as well as dabblers and nongamers. Our goal: to better understand consumer sentiment around everything from hardware and brand loyalty to device refresh rates to spending on software, services, and accessories. We’re also devoting an entire section to cloud-streamed gaming. One of the most challenging things to do in any consumer survey is to ask respondents about technologies or services that are not yet widely shipping (or understood). As we’ve been building our survey, we’ve had some spirited internal debates about the nature of cloud gaming versus cloud-streamed gaming and more.

What’s the distinction? As my colleague Lewis Ward notes, most games today have some cloud element to them. Certainly, every multiplayer console, mobile, or PC game that lets us play against friends, family, and strangers all over the world falls into this camp. Apple made a point of saying that Apple Arcade will let you play offline, a dig at Google’s online-only Stadia. However, in the next breath, Apple points out that you can jump from iPhone to iPad, Mac, and Apple TV, clearly utilizing a cloud component. And I’m guessing some of those new iOS-only games will have multiplayer modes.

So, essentially, we already live in a cloud gaming world. Stadia, however, is clearly cloud-streamed gaming, as subscribers will be able to play games across a wide range of devices (as long as they’re online and support a Chrome browser). As Ben Bajarin notes, other cloud-streamed services from companies such as Sony and NVIDIA are even more cross-platform friendly. We’re still waiting to see what other big players, such as Microsoft, will offer. Bottom line, however, is that the experience on a cloud-streamed gaming service will depend heavily upon that network connection.

And it’s here where Google left us with more questions than answers, at least for now. The company didn’t talk in its announcement about specific home broadband requirements or address network latency, which is what will dictate the quality of the experience on Stadia. It also didn’t talk about pricing tiers (or pricing at all). Will games streamed at 4K cost more than 1080P games? Will everyone get 60 frames per second?
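To see why those bandwidth and latency questions matter so much, it helps to run the raw numbers. The sketch below is simple arithmetic on uncompressed video, not anything Google has published about Stadia; real streams are heavily compressed, but the starting point shows how much work the codec and the network have to do:

```python
# Uncompressed bandwidth of a video stream, in megabits per second.
# 24 bits per pixel assumes standard 8-bit RGB color.
def raw_mbps(width, height, fps, bits_per_pixel=24):
    return width * height * fps * bits_per_pixel / 1e6

for label, (w, h) in {"1080p60": (1920, 1080), "4K60": (3840, 2160)}.items():
    print(f"{label}: {raw_mbps(w, h, 60):,.0f} Mbps uncompressed")
```

Uncompressed 4K at 60 frames per second works out to roughly 12 Gbps, hundreds of times a typical home broadband connection, which is why the compression ratio, and the latency that encoding and the network add, will largely define the Stadia experience.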

At least one promise of cloud-streamed gaming is that with the CPU and GPU in the cloud, gamers can meet on an even playing field, regardless of the device upon which they are playing. Moreover, it means players can play games across their devices, instead of having games they can only play on PC, on console, or on their mobile device. In theory, it’s a very compelling offering, but as with most things in technology, the real value won’t be apparent until we see the execution.

Impact on PC and Console Gaming Markets
If cloud-streamed gaming delivers on its promise (or perhaps the right question is not if, but when) and it removes the local computing power from the gaming equation, what will the impact be on the hardware vendors that sell high-end gaming PCs, CPUs, GPUs, and more? I know at least one prominent PC gaming executive who thinks cloud-streamed gaming will never impact his business, arguing that some people will always want a high-powered gaming rig to play on. To date, this has certainly been true. Hardcore gamers will spend big bucks to gain frame rate advantages that can mean the difference between life and death in a game. Frankly, that element of PC gaming has always bothered me a bit, as it effectively means that those with deeper pockets often enjoy significant in-game advantages.

However, it’s this desire to have the best that makes the PC gaming hardware market so appealing to vendors. Moreover, it’s this faster refresh cadence that also enables game developers to embrace next-generation technologies before any other consumer categories (ray tracing-capable graphics cards being a good current example). The PC gaming hardware market thrives in this cycle, so what happens if cloud-streamed gaming disrupts this? Where does that booming market go if in the future a person using a three-year-old $300 Chromebook has the same experience and gaming capabilities as somebody on a brand new $4,000 gaming rig?

In the end, that is the biggest question nobody can answer right now. Is cloud-streamed gaming disruptive, additive, or something else? The cost of access, the available games, the buy-in from eSports athletes, and the quality of experience will all play a role in the final answer. Regardless, it’s going to be very interesting watching it all unfold over the next few years.

HoloLens 2: Microsoft Brings Mixed Reality as a Service to Market

My fellow columnists have already written extensively about HoloLens 2 here and here, covering the product’s notably better field of view, the addition of eye and gesture tracking, its connection to Azure, and its pole position in the race to the next era of computing. It is a testament to the product that there’s still more to say. Specifically, I’ll cover the leap forward it represents in comfort (and why this is so important), the new hardware customization program, and—perhaps most importantly—the rollout of HoloLens 2 as a service.

Comfort Is Key
When Microsoft launched the first HoloLens, company executives talked a great deal about how they had scanned many, many human heads so they could design and build a headset that was comfortable to wear. After demoing that first headset numerous times, I think it’s safe to say they never scanned a noggin like mine. Regardless of how careful I was in sizing and placing the original headset on my head, within a few minutes, it was sliding forward on my face, making for an uncomfortable fit and imperfect mixed reality experience. HoloLens 2 fixes all that.

In addition to the amazing auto-calibration for pupil distance and iris log-in features, HoloLens 2 is simple to put on and go. Microsoft has shifted the center of gravity on the device, and while it’s still far from light when you pick it up, it really does seem to float on your head once you’ve put it on. The result is a much more comfortable fit, and one that I didn’t need to constantly readjust once I put it on. The importance of this can’t be overstated. Companies are going to be asking employees to put this device on and be productive. The ability to do so quickly, and the fact that it’s much more comfortable over longer periods of time, will go a long way toward convincing those workers, even the most skeptical ones, that the effort is worthwhile.

Another notable upgrade to HoloLens 2 is the ability to flip up the front of the headset while it’s on your head. This lets the wearer have a conversation with another person without having to a) look through the lenses or b) remove the headset entirely. Microsoft took to heart feedback it received from users of the first product to implement it in the second. This is the type of iterative improvement that delights end users, and it’s the type of thing a vendor can only learn by shipping that first product.

Purpose-Built Products
One of the key elements of the HoloLens 2 launch that hasn’t been widely discussed is the rollout of the customization program. While Microsoft designed the HoloLens 2 to suit a wide variety of use cases, the company realizes that one size doesn’t fit all when it comes to mixed reality environments. The new customization program allows Microsoft’s customers and partners to create HoloLens 2 products specifically designed for their individual needs. The first product announced as part of the program was the Trimble XR10.

The XR10 integrates the HoloLens headset into a hard hat form factor, geared toward workers in construction, oil and gas, manufacturing, and mining. Trimble was a launch partner with Microsoft for the original HoloLens, and it has a range of applications including Tekla, SketchUp, Revit, and SysQue designed to work on the platform. Trimble says the XR10 will carry an MSRP of $4,750. That pricing, which is notably higher than the HoloLens 2 base price of $3,500, helps drive home the fact that while some pundits are still complaining about hardware cost, companies that have done the ROI work quickly come to realize the value of these products. I expect we will see additional customized HoloLens products from Microsoft customers and partners going forward.

MR as a Service
Those who read me regularly know I’m big on the concept of Device as a Service, which lets companies bundle hardware, software, and services into a multi-year contract tied to a monthly fee that eliminates the challenges associated with a huge up-front capital outlay. One of the areas where as-a-service has real legs is in the offering of VR, AR, and MR. That’s because most companies are just beginning to explore these areas, and they don’t have existing infrastructure, hardware, software, or, frankly, expertise in the area. As such, I was thrilled when Microsoft announced that in addition to the option to buy HoloLens 2 outright, it would offer the headset as part of an ongoing service.

The offering is called HoloLens 2 with Dynamics 365 Remote Assist, and it costs $125 per user, per month, for a three-year contract. Included in that fee is access to Microsoft’s see-what-I-see Remote Assist application, regular updates, plus enterprise-grade security. I hope to see Microsoft roll out additional as-a-service offerings around HoloLens in the future. In addition to making it easier for customers to get started with mixed reality, it also helps drive the important narrative that the HoloLens is ultimately more powerful when you connect it to Microsoft’s Azure.
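The math on that bundle is easy to run; a quick sketch using the prices cited above:

```python
# Total cost of the HoloLens 2 with Dynamics 365 Remote Assist bundle
# over its three-year term, versus buying the headset outright.
monthly_fee = 125        # USD per user, per month
term_months = 3 * 12     # three-year contract
outright_price = 3_500   # HoloLens 2 base price, USD

total_service = monthly_fee * term_months     # 4,500 USD over the term
premium = total_service - outright_price      # 1,000 USD

print(f"Service total: ${total_service:,}")
print(f"Premium over outright purchase: ${premium:,}")
```

In other words, the service costs roughly $1,000 more than the hardware alone over three years, which, given that it bundles the Remote Assist application, updates, and enterprise security, looks like an easy trade against a large up-front capital outlay.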
And that’s one of the many reasons why the HoloLens 2 is such an exciting product. With the first version of the product, Microsoft jumped out ahead of much of the industry. As a result, it took its fair share of criticism around things such as field of view, comfort, and cost, despite the fact it clearly labeled V1 a developer product. The company listened to that feedback and took its time in bringing to market the HoloLens 2, and this one is ready for prime time. I look forward to seeing all the new ways companies find to utilize this product, and the interesting new applications developers will create utilizing all the new features and capabilities.

The Battle for the Smart Home Intensifies as Amazon Buys eero

This week Amazon announced it has signed an agreement to acquire home mesh networking company eero. It’s the latest shot fired in the growing battle for dominance in the smart home and represents a savvy move by Amazon to catapult itself into a leadership position in home networking. Some of the company’s biggest competitors in the smart home market, including Google and Samsung, already have competing products in the space. Meanwhile, Apple recently exited the home networking space, in a move that’s looking ever-more questionable over time.

Least Sexy; Most Important

Home routers may well be the least exciting piece of hardware anybody buys for home use. However, as the WiFi connection to your home broadband pipe, they perform a critical job that ultimately dictates the quality of experience you receive on every connected device in the house. Moreover, anyone who’s struggled to set up a router knows that it can be a challenging job that forces the consumer to deal with strange naming conventions and esoteric hardware settings.

Eero is a pioneering brand in the mesh network market. Instead of using a single router to cover your entire house, mesh networks use multiple boxes spread out around the house to ensure more robust coverage and faster speeds. In their eero piece on IDC.com, my colleagues Adam Wright and Brandon Butler note that eero owns about 15% of the total mesh WiFi market, which is expected to grow well beyond a billion dollars in the coming years.

Products like eero not only provide better WiFi coverage inside the home, they also make the setup process dramatically easier than traditional routers, using apps that walk you through the process. Perhaps the most interesting piece of this acquisition is the eero services component. Today, the company offers an eeroPlus service that includes a range of security features and add-in apps for an annual fee of $99. I can see Amazon adding more service options and features to this bundle over time, including divergent offerings such as its music and video services.

Amazon’s eero Play

Amazon has said it won’t make changes to eero’s branding or operating structure after the acquisition goes through. However, you can be sure of one thing: eero is going to become the top result for all home-networking searches on Amazon.com. You’re also likely to see the company begin to push eero bundles with its long and growing list of smart home products, which includes everything from Echo speakers to smart plugs to connected cameras and Alexa-enabled microwaves. Over time, I would expect to see the eero setup app evolve to make it even easier to add these types of devices to the network. Eventually, I think it’s likely we’ll see eero hot spots integrated directly into other Amazon products, including its Echo line of smart speakers.

With eero, Amazon will now have access to information about all the devices on a home network. This insight will not only help it drive new experiences on its own-branded connected devices, but it will also give it leverage to ensure a higher quality of service for those devices and the services it offers. It’s this access that has some criticizing this deal, and it will be incumbent upon Amazon to make sure that customer privacy and security remain intact.

It’s a challenge facing the other players in the home networking space, especially those who also have smart home aspirations and network-based services. Google, perhaps Amazon’s strongest competitor in the smart home space to date, has its own mesh networking product called Google WiFi. Samsung, another player in the smart home market, offers a product called SmartThings WiFi. Traditional networking company Netgear also has a very competitive product in the space called Orbi.

Apple’s Missed Opportunity

While it also has smart home aspirations, Apple is notably missing from the home networking category, and it’s hard to see this as anything but a missed opportunity. Especially when you consider that Apple was a player in the router market for years with its AirPort, AirPort Extreme, and Time Capsule WiFi routers. The company stopped refreshing its products years ago and officially exited the market as supplies dwindled in 2018. Arguably, a significant part of eero’s success came from offering an easy-to-setup experience that many reviewers said was Apple-like in its simplicity.

In addition to being able to help drive a better home-networking experience, Apple could also drive home its continuing story about privacy and security, offering a clear narrative about what it is and isn’t doing with the data collected from a home network. And this all becomes even more relevant as Apple moves aggressively toward offering more services including the widely expected video service this year.

While Apple sits this market out, Amazon, Google, Samsung and others will continue to press hard into the space, leveraging their early advantage. Expect to see eero prominently featured on Amazon’s search pages, and its services bundle to expand over time. Watch for 2019 to be an exciting year for the world’s least sexy but vitally important device category.

The Smartphone Market’s Tough Year

IDC’s preliminary data on fourth-quarter shipments prove out what we already knew to be true: The smartphone market had a down year, with total annual volumes declining 4.1% year over year to 1.4 billion units. Looking ahead, our conversations with the supply side—combined with what we know about device lifetimes continuing to extend—suggest that at a worldwide level things are likely to get worse before they get better. That said, there are still some bright spots in the market that are important to discuss, including continued growth in a few key markets.

Top Five Vendors
A look at the top five smartphone vendors shows two distinct trends: Declines from the top two vendors, and continued growth from the remaining three. IDC estimates that for the full year Samsung shipped 292.3 million units for a market-leading share of 20.8%. That’s down 8% year over year. In second place, Apple shipped 208.8 million units for a 14.9% share and a decline of 3.2% YoY. Meanwhile, Huawei shipped 206 million units (growing 33.6% year over year), Xiaomi reached 122.6 million units (up 32.2%), and OPPO topped out at 113.1 million (up 1.3%). The rest of the market combined saw volumes decrease by 19.4% to 462 million units.

As others have noted, as the smartphone market reaches maturity it’s instructive to look back at what happened in the PC market when it peaked and then declined years ago before finally (hopefully) stabilizing today. One of the key things that happened there (and continues to happen) is market consolidation among a handful of top vendors. As you can see from the numbers, that’s happening even more rapidly in the smartphone market, where the top five commanded greater than 67% of the worldwide volumes in 2018, up from 63% a year ago. We expect this consolidation to continue in 2019.
Based on the 2019 unit volume targets we’ve seen from the supply side, we expect the bottom three vendors (or four, including number six Vivo) to continue to aggressively fight to capture more share this year. Watch for Huawei to be especially bold as it nears its goal to become the number two volume player in the world.
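The consolidation figures are easy to verify from the unit estimates above; a quick recomputation:

```python
# IDC full-year 2018 smartphone shipment estimates, in millions of units.
shipments = {
    "Samsung": 292.3,
    "Apple": 208.8,
    "Huawei": 206.0,
    "Xiaomi": 122.6,
    "OPPO": 113.1,
    "Others": 462.0,
}

total = sum(shipments.values())   # ~1,404.8M units, i.e. the ~1.4B cited
shares = {vendor: units / total * 100 for vendor, units in shipments.items()}
top5_share = 100 - shares["Others"]

print(f"Total market: {total:.1f}M units")
print(f"Samsung share: {shares['Samsung']:.1f}%")
print(f"Top-five combined share: {top5_share:.1f}%")
```

The top five land at just over 67% of worldwide volume, matching the consolidation figure cited above.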

The China Problem and Emerging-Market Opportunities
As has been noted in numerous earnings calls to date, the slowdown in China had a dramatically negative impact on worldwide smartphone volumes. A slowing economy, complicated by the trade war with the U.S., has only heightened the existing challenges in the smartphone market. The result was a 10% year-over-year decline in China’s total smartphone volumes. Moreover, there’s little reason to believe this trend will dramatically improve in 2019, so the country is likely to be a drag on the worldwide market for the foreseeable future.

While the China market has been challenging, the top Chinese vendors have weathered the storm by increasing their domestic volumes (Huawei, OPPO, Vivo, and Xiaomi represented 78% of 2018 China volumes) and by aggressively moving into emerging markets. Many in the industry were extremely skeptical of these Chinese companies’ ability to compete and succeed against local vendors as well as an entrenched Samsung in markets such as India, Indonesia, and Vietnam. However, many have managed to thrive through brute force marketing and fast adaptation to channel and market requirements in these countries.

And there is still plenty of smartphone upside in many of these countries, as a large percentage of their installed base today continues to be feature phones. While these markets don’t support the higher ASPs of mature markets, they continue to represent a strong source of smartphone shipment volume going forward for those vendors willing to put in the time, effort, and resources needed to compete there.

A Challenging Outlook
While there are clearly some bright spots around the world, it’s not unreasonable to expect that 2019 will follow a similarly challenging trajectory for the smartphone industry. While there are some interesting new technologies inbound, including foldable phones and the first round of 5G-enabled devices, these devices are likely to carry high selling prices that will suppress their mainstream adoption for the near term. I view both technologies with a bit of apprehension. I’m concerned that carriers will confuse consumers with over-the-top 5G marketing that will lead to some early-adopter dissatisfaction with 5G in the near term, although clearly it will be a positive force in the market long term. And while I’m excited to see foldable display technology ship into the market, I’m not convinced the platforms or app ecosystems are ready to support these new products out of the gate. This too could lead to some early user frustration that could slow more mainstream adoption, although longer-term I’m excited to see what developers cook up for these form factors.

Overall, I expect the near-term narrative around the smartphone market to stay fairly negative. However, it’s important to note that this was always going to happen at some point, and with volumes topping 1.4 BILLION units in 2018, there’s no need to feel too bad for the top players in the market. The question now is how will they react? It will be very interesting to see how these vendors adjust their product portfolios, shift their marketing plans, and fine-tune their ASPs to better compete in what promises to be a much more challenging market going forward.

HTC Continues Smart, Deliberate March Forward with VR

Virtual Reality wasn’t a blockbuster attraction at this year’s CES, but there were a handful of announcements, and some of the most important of those came from HTC. While the “next big thing” buzz around VR has worn off for most, this industry pioneer keeps putting one foot in front of the other, building out its hardware, software, and services platforms and bringing to market real-world technical advances that position it well for long-term success in this market.

Eye Tracking Comes to Vive
At CES 2018, HTC announced the Vive Pro headset. The product, which started shipping later that year, addressed one of the key pain points that both consumer early adopters and commercial users faced with the first-generation Vive: A need for more resolution. The Vive Pro’s higher-resolution OLED displays drive a notably better VR experience all around, but one of the key improvements was the ability to read text. This was a critical requirement for many companies looking to utilize the Vive Pro for training scenarios. At this year’s show, HTC once again announced plans for a much-requested feature: Eye tracking via the Vive Pro Eye.

There are numerous reasons why eye tracking in VR can help drive a better experience; the most straightforward is also the easiest to explain. By tracking where a person is looking, the system need not constantly render and re-render the full virtual reality setting at top quality; it can focus on driving the most realistic view precisely where the user is looking (a process called foveated rendering). VR apps can also use eye-tracking technology to drive next-generation interfaces, where instead of selecting menu items using the click of a button on a handheld controller, the wearer makes selections with their eyes. Yet another use for the technology is to capture real-time data about what a person is looking at inside of an immersive experience.
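As an illustration of the foveated rendering idea, here is a toy sketch: pick a shading-quality tier for each pixel based on its distance from the gaze point. The tier names and pixel thresholds are purely illustrative assumptions, not anything HTC or its eye-tracking partners have published:

```python
import math

def shading_tier(px, py, gaze_x, gaze_y, inner=200.0, outer=600.0):
    """Choose a render-quality tier for a pixel based on its distance
    (in pixels) from the user's gaze point: full resolution near the
    fovea, progressively coarser shading toward the periphery."""
    distance = math.hypot(px - gaze_x, py - gaze_y)
    if distance <= inner:
        return "full"     # native resolution where the user is looking
    if distance <= outer:
        return "half"     # reduced shading in the near periphery
    return "quarter"      # cheapest shading in the far periphery

# Example: gaze fixed at the center of a 2880x1600 panel
print(shading_tier(1440, 800, 1440, 800))   # "full"
print(shading_tier(1900, 800, 1440, 800))   # "half"
print(shading_tier(200, 200, 1440, 800))    # "quarter"
```

In a real engine the GPU applies this kind of falloff continuously (for example, via variable-rate shading), but the payoff is the same: most of the rendering budget goes to the few degrees of the image the eye can actually resolve.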

It’s that last use case that I found particularly interesting, especially from a commercial perspective. At the show, I had the opportunity to try a software demonstration from the folks at Ovation that utilizes a VR setting to help people become better public speakers. The app works on prior Vive headsets, but the eye-tracking capabilities of the Vive Pro Eye significantly improve the value of the app by collecting precise data about where you’re looking during a practice presentation. The developers scanned in real people to serve as your audience, and they are constantly shifting and moving in their seats, all staring right at you. As you work your way through the presentation, the app captures how often you look around the room, how often you make eye contact with audience members, how long you stare at the teleprompter, how often (and how randomly) you move your hands, and more.

I’ve spent quite a bit of time testing out various VR apps, and I’ve seen and tested some very clever training-based products. I found the Ovation app to be stunning in the level of detail it captured, and I could see it becoming an invaluable tool in training people to become stronger public speakers. And it’s a great demonstration of the potential for eye-tracking technologies in VR. Once again, I think HTC has listened to business VR buyers and moved to bring to market something that many will embrace.

I look forward to seeing the Vive Pro Eye ship in 2019, and all the new use cases it will enable in the form of new apps in the future.

Other Key Announcements
In addition to Vive Pro Eye, HTC also announced plans for a new headset, called the Vive Cosmos, support for Mozilla’s Firefox Reality browser, and the launch of a new service called Viveport Infinity. HTC wasn’t ready to talk much about Cosmos, other than to say it will be another tethered headset, so the Mozilla and Infinity announcements are more interesting to me.

The Firefox Reality browser is a new browser for headsets, and HTC has announced it will support the browser across its entire hardware lineup. Why put a Web browser in virtual reality? It’s largely about letting users access 2D Web sites without having to drop out of VR, but it’s also possible to launch into full VR Web experiences. It’s good to see Vive throwing its support behind WebVR, as it could end up being one of the key ways consumers eventually embrace VR.

HTC currently offers Viveport subscriptions in 3-, 6-, and 12-month terms, each giving players access to a limited number of apps per month. The service is a good one, as it allows consumers to try out new VR apps without risking the full purchase price of a VR game or experience they may not like. With Infinity, HTC will remove the limitations, allowing players to access as many titles as they like for an as-yet-undisclosed fee.

Infinity is a smart idea and one that ultimately seems to be a win/win for both consumers and developers. And it is representative of the good work HTC continues to do to build out a sustainable VR ecosystem for itself and its partners. VR didn’t capture too many headlines at this year’s show, but many in the industry continue to put in the work necessary to make it a viable business over the long haul.

Here Come the Consumer Robots

One category of products I’ll be watching closely next week in Las Vegas at the Consumer Electronics Show (CES) is consumer robots. While the commercial/industrial robot category has been growing fast (and is a category IDC is watching closely), the consumer market has been slower to take off. I expect things to pick up over the next few years, and we’re likely to see some interesting new products unveiled for 2019 at the show. The big question is this: Are mainstream consumers ready to bring a robot into their homes?

Single-Purpose Robots
To date, the biggest category of consumer robots has been single-purpose devices. The most popular: Vacuuming robots. This category has evolved from strictly high-end products from category pioneer iRobot years ago to a wide range of lower-priced (but often less “smart”) products from a long and growing list of lesser-known vendors. Today, robot vacuums make up an ever-increasing percentage of the vacuums sold in the world, but I’d argue they are still not a truly mainstream product. What the vacuuming robot category has done is drive interest in other single-purpose robots. In recent years the category has expanded to include a wide range of other types of products, including robots that clean pools, empty gutters, mow lawns, and more.

Two of the key new features of some of the more advanced vacuuming robots are the addition of smartphone apps and integration with today’s smart assistants such as Amazon Alexa and Google Assistant. The introduction of apps brought a long list of new capabilities, from scheduling jobs to viewing reports. And smart assistant integration has made it possible to initiate jobs with your voice. It’s this last piece, voice-enabled robots, that really begins to make this category interesting.

Educational Bots
Voice integration is one of the key attributes of another key consumer robot category: Educational Robots. This segment includes a wide range of products, from robots that truly aim to educate children, to devices focused on entertaining people young and old. One of the more interesting companies in this space is Anki, which has shipped several small robots into the consumer market. The company’s latest is called Vector, and he’s an engaging little fellow that responds to voice commands and can do rudimentary tasks such as tell you the weather, set a timer, take a photo, and even play Blackjack. Perhaps more importantly, Vector is a small, relatively inexpensive ($250) product that utilizes Artificial Intelligence to get smarter—and to better understand its humans—the longer it is in your house. And Anki recently enabled Alexa support, too.

AI is currently an overused buzzword in tech, but when it comes to consumer robots, it’s going to be key to the evolution of the category. One of the pioneers of consumer robots, Sony, recently re-introduced its family-focused consumer robot dog Aibo. The first Aibo launched back in 1999 and shipped through 2006. The relaunched version for the United States costs an eye-watering $2,900 but includes a three-year subscription to Sony’s AI Cloud. Using the cloud, Aibo uploads its day-to-day experiences and, over time, builds a database of them, which leads to each dog having its own unique personality and traits.

What companies such as Anki and Sony are doing is utilizing AI to create robots that move beyond educational or entertaining to something more. They’re working to create highly personalized robots that know and understand their owners, and that can provide some level of real companionship over time. Some will find this endearing, others will find it creepy, but this is clearly the direction today’s consumer robots are headed.

Consumer Interest In Robots
To date, the market has proven there is an appetite for vacuuming robots. Smart robots such as Vector and Aibo haven’t been in market long enough to prove there’s a sustainable category here, yet, but the growth of smart assistants in the home—primarily in the form of smart speakers—seems to indicate that consumers broadly are warming to the idea of voice-enabled devices in the home. As others have noted, it’s not a stretch to assume that a company such as Amazon, which has both a consumer smart assistant in Alexa as well as an existing robot division (it bought Kiva Systems in 2012), will eventually bring to market a consumer robot. The question is, when? Based on my research, the company would be wise not to rush a product into market before consumers are ready.

At IDC we recently surveyed U.S. consumers about their interest in a robot for the home. The results were mixed at best. While more than a quarter of the 1,932 respondents said they were interested in a single-purpose robot, nearly 45% of respondents said they had no interest in buying a robot for the home. Interest in smart-assistant-enabled robots, security robots, and child- and elder-care robots was all in the single digits. We didn’t specifically ask about privacy, but it’s likely that this is one of the key blockers for many consumers after a tumultuous year in which trust around privacy has taken a beating.

We went on to ask respondents who expressed an interest in consumer robots which brands they would like to buy from if they offered a robot in the future. Amazon and Apple topped the list, followed by Google, Samsung, and iRobot. At present, only the last two offer robots.

Obviously, consumer sentiment around robots for the home is far from fully formed. That said, interest is likely to grow as more of these types of products enter the market. This year will be an important one for the category in this regard, and we’re likely to see some very interesting products announced next week. However, it’s still very early days in the consumer robot market. It’s going to take years for the category to grow into a mainstream product segment, but it should be a very interesting ride along the way.

VR Begins Transition From (Failed) Next Big Thing to Sustainable Business

The ongoing theme in the media for much of the last 12 months has been that Virtual Reality (VR) as a technology is a bust. And from a pure headset shipment number perspective, it’s been hard to argue against that narrative. But as with most things in this world, the reality is a bit more nuanced than that. Further, I would argue that VR is now poised to move from a technology burdened with unrealistic expectations to one that will enable smart vendors engaged in the space to build out more modest, but profitable and sustainable businesses going forward.

The Headset Decline
At IDC we track three categories of VR headsets: screenless viewers (such as Samsung’s Gear VR), tethered (such as the HTC Vive), and standalone (such as the Oculus Go). (We exclude Google Cardboard-based products from our numbers.) If we look back two years, to 3Q16, we see that the entire market shipped 2.4M units, and that screenless viewers constituted over 2M of those units. And more than half of those units came from Samsung. As it often does, Samsung was moving to establish itself in the new category by leveraging its strong position in smartphones, often bundling its Gear VR screenless viewer at low or no cost with its high-end Galaxy phones. The company continued this practice for some time, but that 3Q16 number was the high point, and by 2018 it had all but given up on the Gear VR. In 3Q18 Samsung shipped just 125K of its screenless viewers.

Samsung’s early push and later shift away from the screenless viewer category caused the overall VR market to appear to grow fast (from a small base) and then fall off a cliff. But inside the bigger numbers, the other two headset categories were continuing to evolve. HTC and Facebook lowered the price of their headsets, and later HTC launched a Pro version of the Vive. Sony launched PlayStation VR. A number of vendors launched products using Microsoft’s Mixed Reality platform. And Lenovo, Vive and Oculus launched standalone products. All of this led to some notable ups and downs along the way, but here’s the bottom line. In 3Q16 tethered shipments totaled 372K; in 3Q18 they hit 1.1M. In 3Q16 standalone headsets were at 30K; in 3Q18 they grew to 392K. So, yes, the total market in 3Q18 was down versus the same quarter in 2016 (1.9M versus 2.4M), but the product mix shifted dramatically and revenues grew substantially.
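The mix-shift argument can be made concrete with a quick back-of-the-envelope calculation. This sketch is illustrative only, using the rounded shipment figures quoted above rather than IDC’s full dataset or methodology:

```python
# VR headset shipments (units), using the rounded figures quoted above.
shipments = {
    "tethered":   {"3Q16": 372_000, "3Q18": 1_100_000},
    "standalone": {"3Q16": 30_000,  "3Q18": 392_000},
}
totals = {"3Q16": 2_400_000, "3Q18": 1_900_000}

for cat, q in shipments.items():
    growth = (q["3Q18"] - q["3Q16"]) / q["3Q16"] * 100
    share16 = q["3Q16"] / totals["3Q16"] * 100
    share18 = q["3Q18"] / totals["3Q18"] * 100
    print(f"{cat}: {growth:+.0f}% growth, share {share16:.0f}% -> {share18:.0f}%")

# tethered:   +196% growth, share 16% -> 58%
# standalone: +1207% growth, share 1% -> 21%
```

Even as the total market shrank, the two higher-priced categories went from under a fifth of shipments to nearly four-fifths, which is why revenue could grow while unit volume fell.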

Early Adopters Are Happy
So the headset market itself has been a wild ride over the past two years. During that time, we’ve seen a substantial build-out of the existing platforms and the content available on them. The challenge, when it comes to pleasing consumers, is that the type and quality of content out there seems to please current owners, but it doesn’t excite non-VR headset owners enough to buy. We recently surveyed over 2,000 U.S. consumers, and among the small subset of that group who currently owned VR headsets, most are happy with both the hardware itself and the content (especially those who own tethered and standalone products). However, when we asked non-owners about their interest in VR, the response was tepid at best. The clear challenge here is that to date there’s been no specific application or type of content compelling enough to drive more mainstream users to deal with the cost and hassle of acquiring the VR hardware. This obviously creates a challenge for the market: How do you incentivize developers and content producers to create better experiences without a large enough installed base? How do you grow the installed base without better content?

While the industry ponders the consumer challenge, many vendors in the space have moved to embrace a near-term opportunity upon which they can build a business in the meantime: Business users.

I have talked about the opportunities for VR in the commercial segment in a previous column, so I won’t repeat the argument here except to say that since I wrote that back in April, interest from commercial has only grown. And vendors are moving to embrace this interest.

HTC’s Vive Pro is a great example of a company listening to what business users said they needed. The hardware addressed business requirements, including a higher resolution screen, and HTC’s Vive Business Edition package rolled in a professional use license, commercial warranty, deployment options, and dedicated phone support.

Facebook has also been paying close attention to the commercial side of things and has built out business-specific bundles for both its Oculus Rift and Oculus Go products. Likewise, Lenovo is now offering its Mirage Solo VR headset as part of a bundle targeting education deployments.

As the business use case for VR continues to solidify, the biggest hurdle won’t be the hardware itself, but the need for more developers—and the tools they need—to build out both broad business VR applications as well as company-specific, proprietary ones. This will be a challenge, but there’s money to be made here, and it involves significantly less risk than trying to create consumer content that requires dramatically more scale to drive profitability.

Looking ahead, there’s reason for cautious optimism in the VR market. Early next year, Facebook will ship its Oculus Quest headset, a standalone product that offers significant performance gains for the category. I expect Facebook to tell a strong story about the use case for Quest in business. And we’ll see Facebook, HTC, Lenovo, and others continue to build out more interesting use cases for both consumer and commercial users. VR may not have lived up to the early hype, but the technology still has a role to play in our world. Companies who stay the course, and play it smart, should find a profitable, sustainable way forward.

Despite the Rise of Voice, Screens Still Matter

Over the last few years, a great deal of discussion has focused on the importance of voice as the next big interface. The explosive growth of smart assistants, in everything from speakers to thermostats to kitchen appliances, bolsters the argument that voice is going to play a big role in how we interact with technology going forward. But throughout the last few weeks, I’ve been struck by just how much attention good old-fashioned screens have been getting in new product announcements. From Apple’s recent launch of new, larger-screened iPhones, iPads, and Apple Watches, to new smart assistants with screens, to Samsung’s teasing debut of its first foldable smartphone, the fact is screens still matter a great deal.

Bigger is Better
I’ve been living with Apple’s new Series 4 watch and the iPhone XS Max and have been struck by just how impactful the new, larger screens have been to how I use the devices. On the watch, I’m using the Infograph watch face that puts a huge amount of information on screen. While I look forward to Apple adding more customizable complications to that watch face (I’d like to have access to messages there), the information density is amazing, making it possible for me to easily access most of the apps I use daily right on the home screen. And the larger screen has led to another usage change: I find myself consuming more content on the watch. No, I’m not going to read a 2,000-word news story there, but the larger size makes it comfortable enough to read messages, short emails, and news alerts right on the watch instead of shifting over to the phone.

The huge screen on the iPhone XS Max has led to an evolution in my usage of that device, too. It’s large enough that I find myself perfectly content to consume more content on it than on any phone before it. Traditionally, when I get home, I move from the phone to the tablet, but with this phone, I will often make it well into the evening before I think to make the switch. Which raised the question: With the iPhone getting bigger, what happens to the iPad?

Apple answered that question last week with the launch of its new 11-inch and 12.9-inch iPad Pros. The 11-inch product effectively offers extra screen real estate in a form factor that’s roughly the same size as the previous 10.5-inch product. The real showstopper, however, is the 12.9-inch product. The previous version of this larger iPad was so big it felt a bit unwieldy in hand, but this new product feels dramatically more comfortable to hold and use. While the starting price of $999 will be tough for many to swallow, I expect that big, beautiful screen to drive many to buy this product.

Voice Plus Display is Smarter
One of the most interesting product trends of late has been the move by vendors to add screens to smart assistant products. Amazon did it first with the Echo Show, and the company has continued to iterate in the space with a second generation of that product, as well as the smaller Echo Spot. Google and its partners have recently entered the space, too. I’ve been using Lenovo’s Smart Display with Google Assistant, and I’ve been massively impressed with the product. I’ve written before about how Amazon Echos have taken over our house (six units and counting). But after I installed the Lenovo product in my office, I find myself going to it to ask questions more and more often. I am consistently struck by how much more useful a smart assistant can be with a screen attached. From showing photos, time, and temperature in default mode, to displaying the results of questions asked (in addition to announcing them), to letting me initiate a task by voice but complete it via the touch screen, the simple fact is the whole experience with the smart assistant is smarter with a display.

Dual Screens and Foldable Screens
So, the bottom line here is that even as voice becomes more prevalent, for now, and likely well into the future, screens will continue to play an important role in how we interact with technology. Which is why so many companies are feverishly working to bring to market new dual-screen notebook and foldable screen smartphone products.

I’ve been skeptical about the use cases for products like these. And when Samsung showed off its prototype foldable phone this week, my first response was: What problem is this solving? Followed by: Who wants a double-wide phone screen? While I’m not convinced that the first generation of these products will drive a ton of utility for most people, and the software challenges around the impact to user interface will be harder to address than most people realize, the fact is they could solve a problem: People’s desire for ever bigger screens.

The first generation of these products represents an important and necessary step the industry must undertake. I’ve been following the display industry for many years, and while there have been many, many prototypes, the real learning happens when real products ship. So while I likely won’t be standing in any lines to buy the first dual-screen or foldable device, I am very interested in the next generation of form factors and use cases that will spring from what vendors learn.

At the end of the day, it’s all about offering even more screen real estate in existing or smaller form factors. While voice continues to improve as a mode of interfacing with technology, it still has a long way to go. In the near term, that means companies will continue to find ways to bring larger physical displays to products. Eventually, we’ll get to a point where the only way to offer a bigger screen will be in a pair of augmented reality glasses, where the entire world in front of you is the screen.

Magic Leap Is Asking the Right Questions

Magic Leap this week held its first-ever developer conference called LEAPCon, where the company made a wide range of interesting and promising development and product announcements. But the company, its executives, and its partners also did something unique during the opening keynote: They asked developers not to merely rush into the creation of new content for the platform they call spatial computing, but to stop and think about why they are creating it, and what they can do to make it more inclusive.

New Platform, New Possibilities
Over the years, I’ve been publicly skeptical of Magic Leap’s ability to deliver upon some of the outsized promises it seemed to make around its technology. When the company’s first developer hardware kit, the Magic Leap One, shipped in August, many saw it as proof it had radically overpromised what it could do. I was a bit more forgiving, seeing it as the first hardware step on a long journey. At LEAPCon, Magic Leap CEO Rony Abovitz and company did something equally important to shipping that first piece of hardware: They talked about the future they hope people will build with this new technology.

Abovitz kicked off the keynote by acknowledging something few CEOs would: a troubled world. “Today our world feels divided. It feels broken. Our new medium of spatial computing is fresh; it doesn’t carry the baggage and negative headlines that are dominating the news today,” he said. “As a creative collective, we can refuse to perpetuate the baggage that weighs down traditional mediums that came before like radio, television, and film. They are great, but there is all this baggage. Spatial computing can be a safe haven, and a creative space to include all who respect each other.”

Abovitz then ceded the stage to his Chief Marketing Officer, Brenda Freeman, and CEO of Funomena, Robin Hunicke. Freeman noted that at present, the “magic verse” is a blank slate that affords creatives the opportunity to do things differently. To create an ecosystem that’s “vibrant, future forward, and culturally relevant.”

Hunicke acknowledged that the world today often feels overwhelming and uncertain and that people don’t always feel empowered to make change. “We are here today to talk and think and dream about adding a new dimension to our reality,” she said. “The possibilities are infinite, and the landscape of this work is fresh and new. It is relatively unexplored. This means the power we have to shape the future is incredible.” She went on to ask a question that not enough people in technology ask themselves: “What does that power mean, and how are we going to use it?”

“When platforms like radio, television, and film were first developed, diversity and inclusion weren’t part of the Zeitgeist,” she said. “When the internet, game consoles, and cell phone technology were first invented we were not yet quite honest with ourselves about how our unconscious biases would shape these new forms of communication, perpetuating stereotypes that alienate people from one another and sometimes from themselves. It may have been harder then to predict what such unbalanced representation in our creative, financing, marketing, and promotion structures would do to the industries that sprung up around these powerful technologies. Or how this unbalanced representation would tax our society long term. But this is no longer the case. We have a precious opportunity to ask ourselves some very important and difficult questions about what we build, why we build it, and who we build it for.”

“We can and should hold ourselves accountable to a higher standard,” she said. “To discuss and debate not just what is possible for the technology, or who will be in the marketplace, but what is ethical, important, and universal about this new dimension of reality.”

Thinking Before We Leap
It’s easy to be cynical about aspirational comments like these, especially when they come from a company that hasn’t always lived up to some of the ideals being discussed. And we don’t have to agree with all the backward-looking causes and effects discussed to acknowledge that the technology industry often builds new tools, platforms, and mediums without fully contemplating the impact they may have on the world. Too often, we build it because we can.

I’ve been talking about the possibilities of augmented reality for a few years now, and I continue to think this technology will have a world-changing impact. I’m an optimist, and I think that change can be for good, but there is no doubt that when we bring together the real world and the digital world this way, there is risk involved. This week there was a heartbreaking story about bullying on Instagram. Now imagine that happening on a platform where digital and physical worlds are merged together. And these are just kids; imagine what’s possible in the broader world with truly bad actors at work.

But that doesn’t mean we shouldn’t create these technologies. It means we have to do so with open eyes. I find it heartening that Magic Leap devoted time at the beginning of its keynote to discuss these important topics. Will the company or the broader augmented reality industry always get it right? Probably not. But at least they’re thinking about these questions now, at the beginning, when there is still the opportunity to build it right the first time.

Microsoft’s WVD Demonstrates A Smart, Evolved View of Windows

There was quite a bit of news out of Microsoft’s Ignite conference in Orlando this week, but its announcement of Windows Virtual Desktop (WVD) was one of the most monumental to my mind. WVD represents not just a strong product offering in an increasingly competitive space, but it also reflects the much smarter, more evolved view of Windows that Microsoft has embraced under current CEO Satya Nadella.

Virtual Desktop Basics
Virtual desktops aren’t new, and Microsoft’s partners–and competitors–have been offering access to Windows in this manner for years. In short: A virtual desktop is one that runs on a server or in the cloud, accessed via a client app or browser on a client endpoint. In the past, these endpoints tended to be low-cost thin client hardware, which companies used to replace more costly PCs. Today, pretty much any device with a browser can act as a virtual desktop endpoint, from phones and tablets to non-Windows PCs running Google’s Chrome OS and Apple’s macOS.

Virtual desktops are one of those technologies that has always looked great on paper, but often disappointed in practice due to their high reliance on a stable, fast network connection. As network speeds and quality of service have improved over time, and as LTE has become more prevalent, virtual desktops have become increasingly viable. And the pending rollout of 5G should drive even better performance over mobile networks. Over the years, Microsoft partners have rolled out increasingly capable Windows-based offerings, and so have direct Microsoft cloud competitors. Microsoft executives were quick to note at Ignite that existing partners will be able to leverage (and sell) WVD and that its goal with this announcement was to offer a differentiated product that better positions it against competitors such as Amazon and Google.

WVD’s Special Sauce
Microsoft has put together a compelling package with WVD, which will launch as a preview later this year. One of the most notable features is the ease with which current customers can spin up virtual machines and the flexibility around licensing and cost. Existing customers with Microsoft 365 Enterprise and Education E3 and E5 subscriptions can access WVD at no extra charge, paying Microsoft for the Azure storage and compute utilized by the virtual desktops.

Microsoft says WVD is the only licensed provider of multi-user Windows virtual desktops. Multi-user means that a company can provision a high-performance virtual desktop and then assign more than one user to it, spreading the performance and storage across multiple employees. Microsoft also says that WVD users will get access to Office 365 ProPlus optimizations for a smoother virtualized Office experience.

Finally, Microsoft announced that WVD users will have the ability to run Windows 7 desktops well beyond the January 2020 End-of-Life date. This will allow companies that are behind in their Windows 10 transition, or that are struggling to move proprietary apps to the new OS, more time to make the move. For many, this feature alone may represent a strong reason to try WVD.

An Evolved View of Windows
Microsoft’s WVD looks to be a compelling product, and I look forward to testing it out when it becomes available. But beyond the announcement itself, I’m most impressed by what it signifies about the company’s evolution in thinking around Windows. Microsoft under Bill Gates or Steve Ballmer could have offered a version of WVD, but it never did. And if it had, you can be sure the company would have charged a licensing fee for every single virtual desktop it served up.

Under Nadella, the company has moved away from Windows as the product that it must sell, and protect, at all costs. Today, it’s willing to offer an easy-to-deploy virtual desktop to existing licensees to drive more customers toward Azure and its Microsoft and Office 365 offerings. Perhaps just as important is the underlying (and unspoken) acknowledgment that the installed base of traditional endpoint devices running Windows natively has likely peaked, while the number of primarily mobile devices running other OSes will continue to grow. By offering a best-in-breed experience that lets companies and employees run a Windows desktop on these devices when needed using nothing more than a browser, Microsoft helps ensure that Windows remains an important business platform well into the future.

Series 4 Watch Could Grow Apple’s TAM and its ASP

Apple’s launch of the Series 4 Watch this week will likely have a very positive impact on the company’s total available market (TAM) for wearables as well as its average selling price (ASP). The watch’s new, bigger display and faster custom silicon should help drive a refresh among current Apple Watch owners. More importantly, the addition of new health-focused features—including two FDA-cleared apps—should drive a dramatic increase in interest from patients, doctors, insurers, and many others. And Apple took the opportunity with this strong hardware update to also announce an increase to the starting price of the new watch to $399, while keeping the Series 3 in the market at a new lower price ($279) that’s still a $30 premium over the value-focused Series 1 it was shipping prior to the announcement.

In other words, Apple is now likely to sell more watches to more people–at a higher average selling price–than in the recent past. That’s quite a move, especially when you consider that it wasn’t long ago that many skeptics were still calling the Apple Watch a flop.

Bigger Display, Faster Chip
The Series 4 represents the first time the company has made a big change to the size of the Apple Watch display, increasing its two sizes from 38 and 42 mm to 40 and 44 mm. In addition to the larger size, Apple has narrowed the bezels, resulting in a roughly 30% increase in viewable area, which is very noticeable on the wrist. Apple leverages this new display with reworked watch faces, including many with notably more complications, which drives up the information density on the display dramatically.

To my eye, Apple does this so well the display doesn’t look overcrowded or cluttered. And one of the advantages to having all these complications on the screen is that the wearer has easier access to the underlying apps, which has always been one of the biggest interface challenges for wearables.

In addition to the larger screen, Apple also made the Series 4 ever-so-slightly thinner, added new haptic feedback to the crown, and maintained backward compatibility with existing watchbands. As with previous years, some of the biggest changes happened inside the device. Apple says the new S4 chip is twice as fast as the previous generation chip while maintaining a comparable level of battery life performance. The first generation Apple Watch was painfully slow, but each iteration has seen dramatic performance gains, and with the Series 3 I’ve found performance to be more than adequate for the vast majority of tasks. It will be interesting to see how Apple—and potentially third-party developers—will leverage the improved performance of the Series 4 going forward.

Heart-Rate Apps
The new display and faster internals are great for current Apple Watch owners who like their current device and who are looking for a reason to upgrade. But the addition of some very specific health-related features is what may well drive a sizeable increase in Apple’s total available market. First, the company improved the gyroscope and the accelerometer so the watch can detect if the wearer has suffered a fall. Next, Apple added new heart-rate technologies that can detect a low heart rate and screen for Atrial Fibrillation (AFib). Also, the company added sensors in the back of the watch and in the crown that let the wearer run an electrocardiogram (ECG).

In addition, the Watch will store the results of these tests in the Health app for sharing with your doctor via PDF. It’s very notable that the FDA cleared the ECG and AFib apps (the FDA “clears” Class II devices such as the Apple Watch, while it “approves” higher-risk Class III devices such as pacemakers). This FDA clearance could make the Apple Watch much more attractive to a wider range of potential buyers and could drive meaningful volumes as doctors, insurance companies, and caregivers think about buying these devices for people who would never buy them for themselves. Apple says the ECG app will arrive later this year, after the product launches later this month.

Driving iPhone Stickiness
Even before the Series 4 announcements, I’d been thinking about the positive benefits Apple Watch has been driving for the company. Obviously, there are the revenues of the product itself, which Apple claims is now the number one selling watch in the world, surpassing all other wearable vendors AND traditional watch vendors. Perhaps just as important, however, is the stickiness that the Watch drives for iPhone. I know many Apple Watch users who are arguably more attached to their Apple Watch than to their iPhone. The symbiotic relationship between the phone and watch means few of these users will leave the iPhone, even if they wanted to. And then consider the potential new Apple Watch customers who might currently be using an Android phone or even a feature phone. To fully utilize the watch, they too will need to buy an iPhone. That’s a position most smartphone vendors would love to find themselves in.

And all of this is before we consider the real possibility that the Series 4’s higher price—alongside the retention of the Series 3 at a higher entry-level price—will drive a higher average selling price for Apple Watch in the all-important fourth quarter of the calendar year. So in the space of just over three years, Apple launched the product, later lowered prices to drive adoption, and now feels its market position and new hardware are strong enough to warrant a higher starting price. It’s unlikely Apple will change its policy of not reporting Apple Watch shipments and ASPs in its earnings calls. But if this new hardware does what I expect it to do, Tim Cook and his team may decide they do, in fact, want to share a few more specifics around Apple Watch’s market performance over the holidays.

Windows on ARM: Good Today, Better Tomorrow

I’ve spent the last few weeks using Lenovo’s Miix 630 detachable product that utilizes Qualcomm’s Snapdragon 835 processor running Windows 10 Pro (upgraded from Windows 10 S). It hasn’t been an entirely smooth experience, and there is still work to be done on this new platform (especially regarding a few key apps). But this combination of Windows and ARM is undeniably powerful for a frequent business traveler such as me. Early challenges aside, it’s hard not to see Qualcomm, and eventually the broader ARM ecosystem, playing a key role in the PC market down the road.

The Good
As I type this, I’m finishing up a New York City trip where I attended ten meetings in two days. I needed access to—and the ability to quickly manipulate—Web-based data during these meetings, a task that I’ve never been able to accomplish well on my LTE-enabled iPad Pro. So I typically bring my PC and a mobile hotspot so I can stay connected in Manhattan throughout the day. I carry my computer bag, too, because I need the power brick: I invariably have to plug in my notebook at some point or risk running out of power before the end of the day. This time out, I left the mobile hotspot, power cord, and computer bag behind, carrying just the Miix. I used it throughout the day, both during meetings and in the times in between. The LTE connection was strong throughout, and I didn’t experience any performance issues. When I returned to the hotel room after 6 pm, after close to 11 hours of pretty much constant use, I checked the battery: 52%.

That’s a game changer, folks. It’s actually a bit hard to describe just how freeing it is to spend the day using a PC without worrying about connectivity or battery life. With battery-saver mode enabled, I could well have accomplished two days of meetings without needing a charge. Does everybody care about these things? Obviously not. Would I swap this device for my standard PC, where I perform heavier workloads? No, not today.

But I’m beginning to think that day may be closer than many expect.

The Bad
I’ve come to realize that my preferred form factor for work-related tasks is a notebook (which is why I’m excited to see Lenovo has already announced plans for the Snapdragon-powered Yoga C630). That said, the Miix 630 is a solid detachable with a good display, somewhat oversized bezels, and a reasonably good keyboard. However, at $899 list, it is quite expensive for a device most people would use as a secondary computer. And it doesn’t help that Qualcomm announced the follow-on Snapdragon 850 before Lenovo had even begun shipping this product to customers.

And at present, this product—and other Windows on Snapdragon products—must remain a secondary device, because some limitations prevent it from serving as a primary PC for many users. Performance is one, although honestly I didn’t find performance that limiting on this machine for the tasks I described (Lenovo seems to have done a good job of tuning the system). The main reason these products will have to serve as secondary devices is that there are still some deal-breaking app challenges. For me, the primary one was that I couldn’t install and use Skype for Business, which is the primary way I communicate with my work colleagues and how my company conducts meetings. I was able to work around the meeting problem by joining meetings via the Web-based version of Skype for Business, but there’s no way to do that for instant-messaging communication. I had a similar problem with Microsoft Teams, though there’s a Web-based workaround for that program, too.

I understand the challenges Microsoft faces with making its ever-broadening portfolio of apps work on this new version of Windows, but the fact that I couldn’t use this important first-party app is pretty frustrating.

The Future
Microsoft still has some work to do in terms of app compatibility, but I’m hopeful the company will sort much of this out in the coming months. In the meantime, we now know that not only does Qualcomm have strong plans for future PC-centric chips, but ARM itself has now announced a roadmap that it promises will usher in next-generation chips from other licensees that should offer desktop-caliber performance with smartphone-level power requirements.

Of course, there are still plenty of other hurdles to address. Many IT organizations will push back on the idea of ARM-based PCs, with Intel understandably helping to lead that charge. There’s the ongoing issue of cost and complexity when it comes to carrier plans. Finally, there’s a great deal of education that will need to happen inside the industry itself around the benefits of this platform.

In the end, I’m confident that Windows on Snapdragon (and Windows on ARM more broadly) will eventually coalesce into an important part of the PC market, especially as 5G becomes pervasive in the next few years. I fully expect many long-time PC users to question its necessity, but I also expect a small but growing percentage of users to have the same types of “a-ha” moments I did when testing these systems. And, perhaps most importantly, I believe future iterations of these devices will appeal a great deal to the next generation of users, who expect their PCs to act more like the smartphones and tablets they grew up using.

Cortana and Alexa: The Next Step Forward for Voice

This week Amazon and Microsoft announced the rollout of Alexa and Cortana integration. First discussed publicly one year ago, the collaboration represents an important step forward for smart assistants today and voice as an interface in the future. I’ve been using Alexa to connect to Cortana, and Cortana to connect to Alexa, and while it’s clearly still in the earliest stages of development, it generally works pretty well. The fact that these two companies are working together—and other notables in the category are not—could offer crucial clues about the ways this all plays out over time.

Cortana, Meet Alexa
Enabling the two assistants to talk to each other is straightforward, assuming you’re already using both individually. You enable the Cortana skill in the Alexa app and sign into your Microsoft account. Next, you enable Alexa on Cortana and sign into your Amazon account. To engage the “visiting” assistant, you ask the resident one to open the other: you ask Alexa to “open Cortana” and Cortana to “open Alexa.” In my limited time using the two, I found that accessing Cortana via Alexa on my Echo speaker seemed to work better than accessing Alexa via Cortana on my notebook. Your mileage may vary.

One of the biggest issues right now is that it gets quite cumbersome asking one assistant to open the other so that you can then ask that assistant to do something for you. One of the reasons Alexa has gained such a strong following—and is the dominant smart assistant in our home (four Dots, two Echos, and two Fire tablets and counting)—is that it typically just works. The reason it just works is that Amazon has done a fantastic job of training us Echo users to engage Alexa the right way. It has done this by sending out weekly emails that detail updates to existing skills and introduce new ones. Alexa hasn’t so much learned how we humans want to interact with her; instead, we’ve adapted to the way she needs us to interact with her.

The issue with accessing Alexa through Cortana is that we lose that simplicity. I found myself trying to remember how I needed to engage Alexa while talking to the microphone on my notebook (Cortana). The muscle memory I’ve built around using Alexa kept getting short-circuited when I tried to access it through Cortana. I suspect this will self-correct with increased usage, but it’s obviously an issue today.

That said, even at this early stage, the potential around this collaboration is clear and powerful.

Blurring of Work and Home
We all know that the lines between our work lives and home lives are less clear than ever before. Most of us use a combination of personal and work devices throughout the day, accessing both commercial and consumer apps and services. But when it comes to smart assistants, the lines between home and work have remained largely unblurred. As a result, today Amazon has a strong grip on the things I do at home, from setting timers to listening to music to accessing smart-home devices such as connected lightbulbs, thermostats, and security systems. But Alexa knows very little about my work life. Here, I’d argue, Microsoft rules, as my company uses Office 365, and Cortana can tap into my Outlook email and calendar, Skype, and LinkedIn, among other things.

During my testing, I did things such as ask Alexa to open Cortana and check my most recent Outlook emails, or to access my calendar and read off the meetings scheduled for the next day. Conversely, I asked Cortana to open Alexa and check the setting of my Ecobee smart thermostat and to turn on my Philips Hue lights.

Probably the biggest challenge around this collaboration, once we get past the speed bump of asking one assistant to open another, is the need to discern individual users and then address their privacy and security requirements when working across assistants. Now that I’ve personally linked Alexa and Cortana, anyone in my house can ask Alexa to open Cortana and read off the work emails that previously were accessible only through Cortana (on a password-secured notebook). That’s a security hole they need to fill, and soon. The most obvious way to do this is for each of these assistants to recognize when I am asking for something versus when other members of my household (or visitors) are doing it.

Will Apple, Google, and Samsung Follow?
It makes abundant sense for Amazon and Microsoft to be first into the pool on this level of collaboration. While the two companies obviously compete in many markets, Cortana and Alexa represent an area where I’d argue both sides win by working together. I look forward to seeing where the two take this integration over the next few years.

But what about the other big players? Among the other three serving primarily English-speaking markets, I could imagine Samsung seeing a strong reason to cooperate with others. Its Bixby trails the others in terms of capabilities, but the company’s hardware installed base is substantial. At present, however, it seems less likely that either Apple with Siri or Google with Google Assistant would be interested in joining forces with others. With a strong position on the devices most people have with them day and night (smartphones), both undoubtedly see little reason to extend an olive branch to the competition. Near term, this might be the right decision from a business perspective. But longer term, I’m concerned it will slow progress in the space and lead to high levels of frustration among users who would like to see all of these smart assistants working together.

Rethinking Conventional Wisdom Around Hardware ASPs

Apple’s recent earnings call, where the company revealed its iPhone average selling price (ASP) for the second quarter grew to $724, stunned many industry watchers. And while it’s true that no other smartphone vendor is selling phones at near that price in the same volumes as Apple, the reality is that four of the top five smartphone vendors worldwide have seen their ASPs increase over the course of the last year. In addition, that top five is also consolidating share, reflecting the maturity of the market.

Samsung is the Exception
Looking at 2016 and 2017 smartphone data from IDC’s Mobile Phone Tracker, four of the top five vendors in 2017 saw their ASPs increase: Apple, Huawei, OPPO, and Xiaomi. Only Samsung’s ASPs declined during this period, and based on the company’s most recent earnings call, this trend seems to be continuing. As noted, nobody among this group is operating at the same level as Apple, but it’s very interesting to see the Chinese vendors successfully shifting their mix toward higher selling prices. For example, Huawei saw its ASP increase from $231 in 2016 to $255 in 2017. Both OPPO and Xiaomi increased ASPs as well, although both started at lower price points. Meanwhile, Samsung’s dropped from $319 to $318, down from $344 in 2015. (Why Samsung’s ASPs are trending down is a subject for another column, but suffice to say there’s a wide range of reasons, from marketing to product mix to stiff competition.)

We only have one quarter of 2018 data, so it’s unclear whether these trends will continue, but the fact that many of these vendors are seeing their ASPs increase over time does fly in the face of conventional wisdom around the presumed eventuality of hardware commoditization. The other thing that’s important to note is that during this same period, the share the top five owned, relative to the rest of the market, increased from 56% in 2016 to 60% in 2017. In other words, the top five is gobbling up more of the market, and it’s doing so at higher ASPs year over year. A quick look at the next five vendors down (numbers 6-10) shows that about half of this group has also managed to grow its ASPs in the last year. In fact, if we look at the entire category, the average selling price of a smartphone increased by about $30 between 2016 and 2017. And the trend looks to be continuing in 2018.
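
The arithmetic here is worth spelling out: a category-wide ASP is a shipment-weighted average, so it can rise even while an individual vendor’s ASP falls, as long as the overall mix shifts toward higher-priced units. A minimal sketch of that math (the vendor figures below are hypothetical for illustration, not IDC data):

```python
def blended_asp(vendors):
    """Shipment-weighted average selling price across a set of vendors.

    vendors: list of (units_shipped, asp) pairs.
    """
    total_units = sum(units for units, _ in vendors)
    total_revenue = sum(units * asp for units, asp in vendors)
    return total_revenue / total_units

# Hypothetical vendor mixes (units in millions, ASP in USD) -- illustrative
# only. Vendor A's ASP falls year over year, yet the blended category ASP
# still rises because the mix shifts toward pricier units.
market_2016 = [(300, 320), (215, 645), (139, 231)]
market_2017 = [(310, 318), (216, 652), (153, 255)]

print(round(blended_asp(market_2016)))  # 408
print(round(blended_asp(market_2017)))  # 410
```

This is the same effect the tracker data shows at the category level: Samsung’s ASP can slip a dollar while the market’s overall ASP still climbs.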

It’s also important to note that in addition to its sky-high ASPs, Apple continues to grow the adjacent businesses that help increase lock-in for the iPhone. This includes its wearables products—including Apple Watch, AirPods, and Beats—as well as services, including Apple Music, iCloud, Apple Pay, and more. These accessories and services make the high-priced iPhones even stickier, helping to ensure a high percentage of current buyers return when it’s time for a new phone. Others in the top five have attempted similar strategies, with varying degrees of success. Perhaps the most interesting in this regard is Xiaomi, which offers a wide range of products and services. The company says it has an installed base of about 190 million active users, with a small but growing group that owns multiple products from the company.

Desktop and Notebook ASPs
Smartphones aren’t the only major hardware category that is defying the drumbeat of conventional thought around commoditization. When I look at IDC’s Personal Computing Devices Tracker, I see that ASPs for both notebooks and desktops are also growing among the top five vendors. Across the board, HP Inc, Lenovo, Dell, Apple, and ASUS all saw their ASPs for notebooks increase from 2016 to 2017. The top five’s share of the market stayed about the same at 80% (up from 77% in 2015). Across the market, the increase from 2016 to 2017 was about $47. Desktops also saw ASPs increase from 2016 to 2017, with Lenovo, HP Inc, Dell, Acer, and Apple all seeing positive shifts. Here the top five grew its share to 61% in 2017, up from 59% in 2016 and 57% in 2015. Category-wide, the ASP increase was about $32. The still-small detachable category also grew its worldwide ASP during this period, by $16.

Only the slate tablet category has bucked this increasing-ASP trend. This market, which has struggled overall, continues to see ASPs decline. This is true even for Apple, and I believe it reflects the unsettled nature of this market overall.

Time to Reconsider Conventional Wisdom?
So, except for slate tablets, all these hardware categories seem to be defying conventional wisdom about hardware commoditization. And that’s before you factor in all the dollars associated with accessories and services, an area where Apple has clearly succeeded with consumers, and where other players—such as HP Inc, Lenovo, and Dell—are finding increasing success with commercial buyers. (I’d suggest that Device as a Service will play an increasingly important role here going forward.)

So here I’ll echo the sentiment of one of my fellow columnists in suggesting that perhaps it is time to reconsider the conventional wisdom around device markets. Clearly, consumers and commercial users know what they want and need, and many are willing and able to pay a bit more than they have in the past for the devices that they use every day. Quality design, new features, and integrated services matter. So perhaps it is time the world stops assuming every hardware category must eventually end in a highly commoditized state where ASPs are in perpetual decline.