This was a week of contrasts here in Boston. On the one hand, we saw the launch of Uber Eats, where well-heeled Bostonians can now have everything from donuts to sushi delivered to their doorstep within 30 minutes. Then yesterday, a Boston Globe article, “Want Healthy Food? In Much of Mass. It’s Hard To Get”, pointed out that, in Springfield, the state’s third-largest city, “It’s not hard to find a McDonald’s in the Mason Square section of Springfield. Liz O’Gilvie has counted 10 within a mile and three-quarters of her home. But the nearest full-service grocery store, with plump apples and curly kale? That’s 2 miles away, and going that distance on public transit requires a two-hour trek on three buses.”
Which got me thinking, here’s a possible ‘win-win’ opportunity for Uber, which has come under criticism for some of its practices and the poor behavior of some of its executives. A large number of low-income Americans don’t have a car and/or live in so-called “food deserts”, relying on fast food or overpriced packaged food from local convenience stores. There are now initiatives in Massachusetts and several other states to develop creative financing mechanisms to help fund the development of grocery stores and other means of ‘food access’ in low-income areas. There are also several government options, from food stamps to vouchers, and ‘food trust’ programs that provide reduced prices for groceries, if you can get there.
Perhaps there is a way to put some of this funding into helping people get to places where they can buy healthier food at reasonable prices. Many ‘food deserts’ are in areas where there is inadequate public transportation and taxis either don’t exist or are very expensive. Ride-sharing services such as Uber and Lyft could provide a better option.
This wouldn’t be all that hard to implement. In certain geographies, for a trip to a grocery store that’s more than a mile away, Uber or Lyft could add a discount code or some other option, such as a pop-up “Groceries” icon, to enable a free or reduced-price trip. The app could be smart enough to work only for trips to a specified set of grocery stores in an area. I am sure Uber could work with the federal government and local agencies to subsidize some of these programs in return for a tax break or other incentives. This might end up being cheaper for the government and local transportation agencies than some of the programs in place today that seem to be perpetually on the chopping block. Plus, it’s likely a healthy percentage of the drivers participating in this proposed program would come from the local community, so there’s a benefit there as well.
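As a rough illustration of how simple such an eligibility check could be, here is a minimal sketch. Everything in it is an assumption for illustration: the store names, the one-mile threshold, and the 50% discount are hypothetical, not anything Uber or Lyft actually implements.

```python
# Hypothetical sketch: decide whether a ride qualifies for a grocery-trip discount.
# Store list, distance threshold, and discount rate are illustrative assumptions.
PARTICIPATING_STORES = {"Main St Market", "Springfield Grocery Co-op"}
MIN_DISTANCE_MILES = 1.0
DISCOUNT_RATE = 0.5  # assumed 50% subsidized fare

def grocery_fare(destination: str, distance_miles: float, base_fare: float) -> float:
    """Return the fare, discounted if the trip qualifies as a grocery trip."""
    if destination in PARTICIPATING_STORES and distance_miles >= MIN_DISTANCE_MILES:
        return round(base_fare * DISCOUNT_RATE, 2)
    return base_fare

# A 2-mile trip to a participating store gets the reduced fare.
print(grocery_fare("Main St Market", 2.0, 12.00))  # 6.0
```

The point is only that the rider-facing logic is trivial; the real work would be in the partnerships and funding behind it.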
Perhaps we could get some of the larger grocery retailers or big box chains such as Costco or Walmart to participate as well. Let’s say a roundtrip to the local grocery store costs $15. Perhaps the user kicks in $5, with the remaining 2/3 covered by a combination of the ride-sharing company, public funding, and the retailer. With the apps, data, and proliferation of payment options/services, implementing such a program would, logistically, be far easier to accomplish than even ten years ago.
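The back-of-envelope math above can be sketched in a few lines. The even three-way split among the ride-sharing company, public funding, and the retailer is purely an assumption for illustration; any split that covers the remaining $10 would work.

```python
# Illustrative cost split for a $15 round trip: the rider pays $5,
# and the remaining $10 (2/3 of the fare) is shared by three sponsors.
TRIP_COST = 15.00
RIDER_SHARE = 5.00

subsidy = TRIP_COST - RIDER_SHARE  # $10.00, i.e. 2/3 of the fare
per_sponsor = subsidy / 3          # assumed even split, about $3.33 each

print(f"Subsidy: ${subsidy:.2f} ({subsidy / TRIP_COST:.0%} of fare)")
print(f"Each sponsor covers about ${per_sponsor:.2f}")
```

At roughly $3.33 per trip per sponsor, the per-ride cost to any single party is modest, which is the argument for why such a program could pencil out.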
Doing some good wouldn’t hurt Uber’s image, either. Imagine if Uber, using data gained from these rides, could say, “In 2017, we enabled one million food shopping trips for low-income Americans who lacked good transportation options”. While I’m all for Instacart, Uber Eats, and other services that deliver groceries and meals to your office or home, let’s face it, these services are urban-centric, priced at a premium, and are generally for the well-heeled and/or super-busy. If we put half the energy into helping people living in ‘food deserts’ get to food as we have into apps that get food to affluent folks living in ‘food oases’, we could enable healthier eating and cost savings to millions of people.
Perhaps the most surprising part of Apple’s earnings for many was the news around Apple Watch, with Apple stating sales had doubled year-over-year. More interestingly, Tim Cook explained that Apple Watch sales doubled in six of the top ten markets where it is sold.
This week saw the launch of yet another streaming pay TV service, this time from Hulu. Hulu is the fifth major company to enter this market over the last couple of years, following Sling, Sony, DirecTV, and YouTube. Each offering has its strengths and weaknesses, and each makes different trade-offs in trying to achieve the mythical sweet spot for the cord cutter. Local channels continue to be the biggest challenge, but another is trying to create bundles consumers will go for, and each company has taken a different approach.
The Mythical $35 Price Point
I’ve often noted these over-the-top pay TV providers seem to believe there’s a mythical price point around $35 at which cord cutters will leap to buy their service. Each provider seems to aim at that target with at least one of its offerings, though we’ve seen those strategies evolve over time. Hulu is the only provider not to offer at least one package at or below $35 and that’s at least in part because it packages its $8 or $12 video-on-demand service into its $40 standard package.
But to hit that $35 price point, these companies have to ditch many of the channels that have driven the average traditional pay TV spend per month to around $100. Which ones to ditch? The sports channels are among the most expensive, so that seems an obvious place to start, but they’re also one of the few things keeping live TV alive and a key requirement for many cord cutters. Only one company – Sling – offers any base package without ESPN, while all the others include at least one ESPN channel in every package and several in the more expensive ones. YouTube solved the problem by dealing almost exclusively with the owners of the four major broadcast networks, so it includes their channels but excludes Turner, Viacom, Scripps, and a number of other key channels. Given how much sports is either on regional sports networks or Turner channels (particularly basketball), it’s not offering a comprehensive lineup. Viacom has been hardest hit by these OTT packages, with only DirecTV and Sling carrying its channels and the others taking a pass.
The reality is even though these companies, for the most part, seem to be aiming at that sweet spot of $35 or under, it’s quite possible to spend an amount monthly that’s much closer to the traditional TV package. Playstation Vue’s top package costs $65 before add-ons, while Hulu’s offering can get up to $65 with extra features and channels. DirecTV’s base packages top out at $70 before add-ons. Consumers have to be really committed to ditching the cable company to go for these packages which offer few savings at these higher price points, especially given the holes in some of the lineups.
One of the great possibilities that should come with OTT pay TV services is flexibility. After all, people don’t just want to pay less but also to have more control over which channels they get and pay for, ideally moving towards an a la carte approach. And yet Hulu and YouTube TV offer minimal flexibility at this point, each offering a single base package with only the option to add Showtime. The other three, however, offer more choices, with two to four base packages each and, in Sling’s case, a great number of add-on channel packages to suit topical interests or even channels from other countries. Sling has gone so far as to call what it’s offering today “a la carte TV”. Even though that’s a bit of a stretch, it’s certainly closer to realizing that ambition than any of the others. Meanwhile, the standard packages often bundle channels in ways that make little sense to the consumer, mixing sports and news, lifestyle and movies in seemingly random combinations that likely reflect deals with content owners far more than true consumer interests.
Some of these players have piled on features in the hopes these will entice customers looking for more than just a screaming deal on a smaller set of channels. DVR functionality is deemed to be a major draw. Most of the offerings have a DVR component, though it’s often a poor substitute for a real DVR, with limitations on skipping ads being the biggest bugbear. Some make up for it with VoD services but those also often show ads, reflecting just how much power the traditional content owners still have and how much TV business models still need ads to survive. User interfaces are another potential differentiator, with each company having its own take on how to reinvent the electronic programming guide. Some favor familiarity and a more traditional approach while others, including Hulu, focus on recommendations and a completely new (and unfamiliar) user interface. None of those I’ve tried (and I’ve tried all but Playstation Vue) have cracked it and several have either awful user interfaces altogether or significant issues.
Part of the promise of future TV services is the ability to watch what you want, when you want, where you want, including on the device of your choosing. With that in mind, these services certainly give you options that go far beyond a traditional set-top box but they don’t all do equally well in supporting a wide range of devices. Interestingly, Playstation Vue, which was very limited in its device support at first, now leads in this department but new offerings like YouTube and Hulu are still lacking. Not all offer web interfaces either, requiring users to either download native apps on their computers or stick to other devices.
Local Channels Still the Biggest Issue
As I wrote several years ago, local channels were always likely to be the biggest challenge facing streaming pay TV providers because of the structure of the US market and its affiliate system for TV stations. Because many local stations aren’t owned by the broadcasters, the latter have little control over getting those stations on board as part of a national rollout. As such, each of the services has made its own decision about how to roll out local channels: some offer all the major channels in theory but, in practice, only in limited geographic areas, while YouTube TV is only available at all in areas where it has good local channel support, as befits its strong ties to the broadcasters. Playstation seems to be doing better on the CBS side, leveraging the work CBS has done for its own All Access service, and is the only one of these services to offer any local channels (and even then only one) where I live in Utah.
A Growing but Frustrating Set of Options
What we’re left with, then, is a growing but ultimately frustrating set of options for those wanting to ditch their traditional pay TV provider and find a cheaper, more flexible, more modern alternative. Each of these services has its pros and cons, with some leading on content flexibility but lacking on the user interface, while others major in features but force users into narrow channel packages. The table below summarizes the current situation as well as I can – one other thing I’ve found in researching these services is how hard they make it to easily see the channel lineups, pricing, and features. None is great at this.
For now, would-be cord cutters are often left choosing the best of a set of bad options, or even combining several of these to get what they really want. What I was most struck by with Hulu’s launch this week is how it’s become my go-to for video on demand but adding live to the experience – especially missing local channels – adds far less than $30 of additional value. What I want is a service that combines Hulu-like breadth of on-demand content with a live option for the major sports I watch and I guess I’m not alone in that. If I could combine Netflix, Hulu, and an on-demand sports service that carried all the games I care about, that would serve me well but it doesn’t exist today. We can only hope that someday it will.
On Tuesday, Microsoft held an event in New York where it presented its new version of Windows, called Windows 10 S, as well as the new Surface Laptop. With the combination of the two, plus apps targeted at teachers and educators, Microsoft is hoping to gain traction in K-12 as well as higher education.
In January 2017 at the BETT show in London, Microsoft announced “Intune for Education” which delivered a simple device management solution for schools that can customize over 150 settings, apply them to hardware and apps, and assign them to a student so they “follow” any device they use as they log in. Microsoft also announced a partnership with Acer, HP, and Lenovo to bring to market Windows 10 PCs starting at $189 including some 2-in-1s.
Chromebooks have been growing steadily in the US education market which, according to FutureSource Consulting, represented close to 13 million units in 2016, 58% of which were Chromebooks. While most of the commentary around Chromebooks’ success rests on hardware pricing, much of their appeal to schools lies in the simplicity of the platform. Still, with prices as low as $120, competing against Chromebooks is not an easy task.
Windows 10 S aims to take Microsoft a step further from what we have seen thus far, especially when it comes to the initial setup of devices and their subsequent management. By stripping down Windows 10 to its essential components and granting access only to Store apps, Microsoft is hoping to deliver the simplicity schools are looking for.
Windows 10 S Will Need OEM Support to Make a Difference
The battle in education is, however, a Windows/Microsoft battle for now, not an OEM battle, as most Microsoft hardware partners also sell Chromebooks. While Microsoft announced a list of partners that will bring Windows 10 S devices to market, their commitment will be judged by how many models, how much channel support, and how strong a push we see from brands such as HP, Acer, and Dell.
No details have been given on the royalty OEMs will pay Microsoft for preloading Windows 10 S and how that differs from Windows 10 Home and Windows 10 Pro. Nor have we heard whether Microsoft will help in any other way, such as marketing, to position the devices. My guess is Microsoft will have to do something, at least initially, so that Windows 10 S actually gets a shot to prove itself.
The Surface Laptop Competes with the MacBook Air, Not Chromebooks
Looking at the Surface Laptop Microsoft announced during the event and dismissing Microsoft’s chances of competing against Chromebooks would be a mistake. The Surface Laptop, in my mind, has a different role to play.
First, it plays to Millennials’ need for a laptop form factor vs. a 2-in-1 or a tablet. In a recent study Creative Strategies conducted in the US, college students clearly shared their preference for a traditional laptop form factor with 73% primarily using a laptop when working on a school or work project.
Second, the Surface Laptop aims at picking up higher ed students who, in the past, might have picked up a MacBook Air. Eighty-eight percent of Mac users in the Creative Strategies study said they would pick a Mac if their employer offered them a choice, while 9% said they would pick a Surface. Surface was the only Windows-based brand to register any real interest among the overall panel, with 16% of Millennials mentioning Surface as the brand they would choose. If we exclude Apple and only consider brands within the Windows ecosystem, the preference for Surface grew to 43%. If you are not convinced, just watch the video Microsoft played at the launch. It is a love affair between you, the user, and the Surface Laptop. They could not have made it more personal if they tried. I guarantee you, that is not how a school administrator picks hardware.
Lastly, Surface Laptop can appeal to those enterprises invested in the Windows ecosystem but who are looking for more affordable Surface hardware and a more traditional form factor. If they have not yet embraced Windows 10 apps, enterprises can upgrade Surface Laptop to Windows 10 Pro.
Windows 10 S has a Role to Play Outside Education
While the focus of Microsoft’s event was education, I see Windows 10 S playing a role in other areas as well, although Microsoft did the right thing by not talking about it at the event. People need time to get their head around Windows 10 S and trying to make it something for everybody would have been too confusing.
I see Windows 10 S as the modern implementation of the Windows ecosystem, one that puts Windows 10 apps right in the middle of the experience. Because of this, I see Windows 10 S appealing to consumers who want a mobile-first experience and are not concerned about support for legacy apps. I also see Windows 10 S potentially appealing to enterprises that have already transitioned to a Windows 10 app environment.
From a consumer perspective, I hope to hear more from Microsoft at Build next week about how it plans to help developers invest more in Store apps. This is going to make a huge difference in how users see their devices going forward, from productivity-only tools to one-stop devices for both work and play. There is no question Microsoft has been putting a lot of effort into first-party apps, but more needs to be done for developers so the vision of inking, mixed reality, and 3D printing is brought to life sooner rather than later.
As was to be expected, a lot of attention was given to Surface and Windows 10 S, but the other tools Microsoft launched, such as Minecraft Code Builder, Microsoft Teams for education, and the STEM programs and camps, really show the full commitment, not just to education but to the next generation of Windows users.
I flew out to New York City for the Microsoft education event earlier this week, as I was extremely interested in the newly introduced Windows 10 S. This new OS is a lighter version of Windows 10, optimized for education, and is Microsoft’s answer to Google’s Chrome OS. Microsoft has sandboxed the OS so that only apps from its store, along with web apps, run on Windows 10 S. You have the option to upgrade to a full version of Windows 10 for $50 but, for education markets, Windows 10 S should work fine.
Sometimes, it seems, digital isn’t better. Sure, there are enormous benefits to working with media, files, and devices in the digital domain, but we are, after all, still living in an analog world. As human beings we still touch things with our hands, hear things with our ears, and see things with our eyes—all of which are decidedly (and beautifully) analog reception devices.
In fact, though an increasingly large percentage of our everyday experiences may start out or somehow exist in digital form, none of our interactions with these experiences actually occur in the digital domain. Instead—though it’s very easy to forget—every one of these experiences happen in an extraordinarily high-resolution analog domain (otherwise known as the real world).
While it may seem odd, and maybe even a bit silly, to point this out, as our world becomes increasingly digitized, it’s worth taking a step back to actually notice. It’s also worthwhile to recognize that not all technology-driven pendulums of change point towards digital. In fact, as technology advances, it should logically start to become more analog-like.
Indeed, if you look at the history of many innovations in everything from computing to media and beyond, the evolution has started out with analog efforts to create or recreate certain types of content or other information. Many of these early analog efforts had severe limitations, though, so for everything from computer files to audio and beyond, technologies were developed to create, edit, and manipulate this kind of data in digital form.
For the last few decades, we’ve seen the evolution of digital files and the enormous benefits in organization, analysis, and creation that going digital has provided. Now, however, we’re starting to see the limits of what digital technologies can bring to areas such as entertainment content and certain types of information. It’s hard to see how adding extra digital bits to audio, photos, and video can provide much in the way of real-world benefits, for example.
Along this path of technological development, many people have also noticed, or more precisely missed, the kind of physical interaction that human beings innately crave as part of their basic existence. The end result has been the rediscovery and/or rebirth of older analog technologies that provide some kind of tactile physical experience that a purely digital world had started to remove.
The best example is probably the case of vinyl records and turntables, which have seen a resurgence of interest even among Gen Z teens and millennials over the last several years. As someone old enough to have an original collection of vinyl, I should be able to remember and appreciate the potential of an analog audio experience. With decades of digital onslaught, though, it’s easy to forget how good the audio quality on a decent turntable and sound system can be. It took a recent experience of someone spinning vinyl at an event I attended to remind me how good it could still sound.
There’s also been a turnaround in, of all things, printed books. Following years of prognostications about the death of print, just this week there was also news that ebook readers and ebook sales were on the decline, while printed books were actually starting to see increases again. Admittedly, an enormous amount of ground was lost here, but it’s fascinating to see that more and more people want to enjoy the analog physical experience that reading a paper book provides them.
Even beyond these examples, there’s still an enormous amount of value that people put into the touch, feel, and experience of using digital devices. The way a device feels in your hand and how a laptop keyboard feels as you type still matter. Looking forward, advancements in both virtual reality (VR) and augmented reality (AR) are going to become highly dependent on some type of tactile, touch-based feedback in order to improve the “reality” of the experience they offer. Recently, we’ve also seen a surge in popularity of older “analog-style” vintage game consoles.
Musicians have always obsessed over the feel and touch of particular instruments and, as our digital devices become the common instruments of our age, there’s something to be said for the quality of the tactile experience they can provide. Plus, in the case of musical instruments, one of the biggest trends over the last several years has been the tremendous resurgence of knob-based, physically controlled analog synthesizers.
Of course, above and beyond devices, there’s the whole debate of returning more of our personal interactions back to analog form. After overdosing on purely digital interactions, there’s growing interest and enthusiasm for cutting back on our digital time and focusing more on person-to-person analog interactions among people of all ages.
Obviously, we’re not going to be re-entering an era of analog technology, as fun and nostalgic as that might be. But as digital technology evolves, it makes sense for technology-based products and experiences to try to recapture some of the uniquely tactile characteristics, feel, and value that only comes from analog.
ESPN last week laid off 100 content-creating employees in a round of cost cuts. While cord-cutting has been widely blamed for the network’s troubles and the resulting layoffs, that’s a bit of an over-simplification. The real challenge is the traditional pay TV bundle is breaking apart and ESPN’s role as a must-have network is starting to crumble. I ran a survey on who watches ESPN last week and the results shed a little more light on the fundamental challenge for ESPN.
Last week, we ran a follow up to the voice assistant research study we published last year around this time. Creative Strategies again partnered with our friends at Experian to see what has changed with voice assistants and explore some new products as well. This year, we added Apple’s AirPods to the study since Siri integration is a key feature of AirPods. In the next few weeks, we will publish more insights around what we learned about the Amazon Echo and Google Home but will focus this article on Apple’s AirPods. We used every available resource to track down as many AirPod owners as we could. In the end, we found 942 people willing to take our study and share their thoughts on Apple’s latest product.
Customer Satisfaction
The big story is customer satisfaction with AirPods is extremely high. 98% of AirPod owners said they were very satisfied or satisfied. Remarkably, 82% said they were very satisfied. The overall customer satisfaction level of 98% sets the record for the highest level of satisfaction for a new product from Apple. When the iPhone came out in 2007, it held a 92% customer satisfaction level, iPad in 2010 had 92%, and Apple Watch in 2015 had 97%.
While the overall satisfaction number is remarkable, a second question we asked of these owners stood out even more. We used a standard benchmark question called a Net Promoter Score, which ranks a consumer’s willingness to recommend the product to others. This ranking is on a scale of 0 to 10 with 10 being extremely likely to recommend and 0 being not likely at all to recommend. It was this number that surprised me. Apple’s Net Promoter Score for AirPods came back as 75. To put that into context, the iPhone’s NPS number is 72. Product and NPS specialists will tell you anything above 50 is excellent and anything above 70 is world class. According to Survey Monkey’s Global Benchmark of over 105,000 organizations who have tested their NPS, the average is an NPS of 39.
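For readers unfamiliar with the metric, NPS is conventionally computed as the percentage of promoters (ratings of 9-10) minus the percentage of detractors (ratings of 0-6); passives (7-8) count toward the total but not the score. A quick sketch of that calculation, using a made-up ratings list rather than our actual survey data:

```python
def net_promoter_score(ratings: list[int]) -> float:
    """Compute NPS: % promoters (9-10) minus % detractors (0-6), on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Made-up example: 8 promoters, 1 passive (8), 1 detractor (4) out of 10 responses.
sample = [10, 9, 10, 9, 9, 10, 9, 10, 8, 4]
print(net_promoter_score(sample))  # 70.0
```

Because detractors subtract directly from the score, a 75 like the one AirPods earned requires an overwhelmingly promoter-heavy response pool.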
This incredibly high Net Promoter Score intrigued me for another reason. We know from profiling questions that most AirPods owners fall into the early adopter category. This is not surprising since early adopters are generally the first to buy new technology products. We discovered something interesting in the first few sets of early Apple Watch research, as well as in our studies on Echo and Google Home: early adopters tend not to give products high recommendations. The first few studies we did on Apple Watch had a lower NPS, as did the Amazon Echo and Google Home. Early adopters tend to understand they buy products early and, oftentimes, they do not feel those products are ready for the mainstream. Certainly, a product’s NPS rating goes up or down over time, but our experience and years of data on this subject are clear that early adopters rarely give new technology products a high NPS. AirPods broke the mold in this case, as even the harshest critics of new technology (early adopters) felt AirPods are ready for the mainstream.
We asked respondents to briefly explain their ranking, and the most frequently used words in their responses were:
While those were some of the most common words used by our participants, many general themes in the write-in section were quite telling. Folks raved about the pairing process with their phone. Many indicated how surprised they were by how well they worked, citing bad experiences with prior Bluetooth headphones. Another common theme I spotted in the write-in section was consumers saying they did not realize how convenient and useful wireless headphones were, since AirPods were their first pair. Many indicated they liked the AirPods even more than they thought they would. That’s always a sign of a great product.
While there was some negativity in the write-in section, it was mostly around concerns or issues with fit or connectivity problems. But these were certainly an extreme minority.
Feature Satisfaction
We took the study a little deeper as well, looking at customer satisfaction around certain features.
I charted the top six features with the highest satisfaction. The number that stood out most in this list is comfort and secure fit. There was a great deal of debate when AirPods first came out that not having a cable would make them not stay in, or that people would lose them easily if they fell out. We can now dispel that myth: Apple has designed a product that fits most people’s ears and, more importantly, fits securely and does not fall out for the vast majority of owners. Only 4.6% of AirPods owners who participated in the study said they were dissatisfied with the comfort and secure fit.
Consumer Sentiment for AirPods
Lastly for the AirPods part of our study, we added some general sentiment questions to see what kinds of feelings or emotions consumers agreed or did not agree with regarding AirPods. A couple of standout answers are worth mentioning.
84% of respondents strongly or somewhat agree that using just one AirPod at a time makes sense in certain situations. This means AirPod owners are actively using just one AirPod at a time in some contexts. Not necessarily a new behavior if we reflect back to the Bluetooth earpiece days for making calls, but certainly an additional value proposition to Bluetooth headphones as a category.
88.97% of respondents strongly or somewhat agree AirPods consistently pair to their iPhone as soon as they put one in their ear. While Bluetooth reliability has come a long way, we know many Bluetooth headsets on the market still struggle to pair consistently. This data point suggests the instant-pairing reliability of AirPods is quite high.
82.5% of consumers would like more control over their content by tapping the AirPods to do things like turn the volume up or down or skip to the next song. Right now, that can be done manually or by asking Siri, but it appears some way to control media by touching or tapping the AirPods themselves is desirable.
82% of respondents strongly or somewhat agree AirPods are their favorite Apple product launched in recent memory. What makes this question interesting is that, while our respondents mainly lean toward early tech adoption, we do not have a massive group of hardcore Apple fanatics. Knowing that makes the result all the more telling: overall, our respondents feel Apple has released one of its best products in a long time.
62% of respondents strongly or somewhat agree AirPods are causing them to consume more audio content (music, books, podcasts, etc.) than before they owned AirPods. This is fascinating, as it could indicate AirPods are a catalyst for greater use of Apple and third-party services.
Lastly, we wanted to see how much our participants in the study still defaulted to old habits or didn’t trust AirPods enough to completely go wireless and fully ditch their wired headphones. To our surprise, 64% of consumers somewhat disagree or strongly disagree they keep wired headphones handy just in case AirPods don’t work.
Apple has accomplished a rare feat in our many years of studying owners of brand-new technology products: it has delivered a product with an industry-best customer satisfaction rating and Net Promoter Score. Those two things alone highlight the overall quality of AirPods and the reality that there will be very few unhappy owners of Apple’s latest product.
In this week’s Tech.pinions podcast Carolina Milanesi, Jan Dawson and Bob O’Donnell discuss this week’s earnings announcements from a number of the tech industry’s biggest players and analyze what they mean for the future of several key tech products and trends.
As a market research analyst, I’m constantly searching for new data points when I read the news, talk with people, or walk the aisles at a brick and mortar store. This week, I noticed something interesting at Costco: There were three in-store displays of PCs designed specifically for gamers. There was a Lenovo Legion-branded notebook ($999), an ASUS Republic of Gamers notebook ($999), and an Acer Predator desktop ($1,299).
Based on my ongoing conversations with PC vendors, component companies, and other retailers, I knew gaming PCs had become a hot topic. But seeing three prominently displayed in Costco drove home the fact PCs designed specifically for gaming have officially moved from a large and very profitable niche to a serious mainstream business. (And that’s leaving aside the serious dollars associated with the rise of eSports, which merits its own future column.)
Incidentally, it’s worth noting many people incorrectly presume Costco shoppers are cheap. They’re not. They’re savvy shoppers willing to spend when a product is worth the money. And they know they can return items that disappoint them; another reason that gaming PCs showing up there is so interesting.
Serious About Play
Major PC vendors have long coveted a slice of the gaming PC market, which is why Dell bought Alienware in 2006 and HP bought Voodoo that same year. And, despite the ongoing consolidation in the PC market, boutique gaming vendors such as CyberPower, Falcon Northwest, and iBuyPower are still going strong.
Why focus on gaming? Because in a market where margins are constantly under downward pressure, gamers are often willing (and able) to spend more to get the best. The fastest processor, the highest quality RAM, the speediest SSDs, and top-end graphics. And they want it wrapped in a slick chassis, with a high-quality display, keyboard, and input device. For many years, gamers insisted on desktops (often self-built), which let them swap out components to stay on top of the latest technology. Today, an increasing number are shifting to high-powered gaming notebooks.
Gaming is unique in that it is one of the last areas where high-performance PC components directly impact the quality of the consumer experience. In just about every other consumer-centric use case, the pain points are more likely the network than the hardware. But, when you buy a top-shelf gaming PC, you see a direct benefit in terms of frame rates and quality of play.
Spend More, More Often
My IDC colleague Linn Huang recently completed a very interesting survey of U.S. consumers in which he asked many deep-dive questions about past notebook and desktop purchases, usage, lifetimes, and replacement plans. He also asked about gaming and captured some great data points from self-proclaimed gamers. Most notable: Respondents who self-identified as hardcore gamers on average spent about $875 for their current desktop or $776 for their current notebook. Self-identified gaming enthusiasts spent $810 and $735 respectively. Meanwhile, those who identified as casual gamers spent $698 and $590, while non-gamers said they spent an average of $669 and $660.
The fact gamers will spend more to buy a new notebook or desktop is reason enough for the PC industry to be paying close attention. But IDC’s survey also reflected another key element: They’re likely to buy new PCs more often, too.
When we asked respondents what typically triggers the need to buy a new notebook, 65% of non-gamers said they replace a notebook “when it wears down or breaks,” while just 15% of hardcore gamers chose that answer. The most common reason (24%) hardcore gamers said they replace their notebook? “I replace my notebook when a new technology comes out that warrants an upgrade.”
That’s the kind of customer any business wants. And today’s gamers are leading the charge in a new area that also requires high compute: virtual reality. This year, we’ll see vendors ship new VR products designed to drive a good experience using a less powerful PC. But for the foreseeable future, I expect the very best VR experience to occur using a high-end gaming PC. I’ve been using Dell’s Alienware 13 to test the HTC Vive VR rig, and it drives a great VR experience. Starting price for the notebook: $2,049.
The final reason PC companies are so keen to grab a portion of the lucrative and growing gaming PC market? It’s an area where Apple continues to decline to participate and there’s no reason to believe that will change any time soon.
This week saw a furor over Unroll.me, a service that offers to unsubscribe users from unwanted emails but which apparently sold user data to Uber in the past in a way that wasn’t transparent to users. The reaction to the revelations was predictable: some decried all ad-based business models using clichés like “if you’re not paying, you’re the product”, while others said users were naive for imagining a free service wasn’t monetizing their data in some way. Every time I see this happen, I wish we could get beyond painting all ad-based services with the same simplistic brush and have a more nuanced conversation about ad-based business models.
I wrote a piece for Tech.pinions almost three years ago about business models and it’s worth referring back to it. In that piece, I talked about three broad categories of business models and the implications each has for what I called user/customer alignment. What I meant was that, under some business models, users and customers are the same people. Under others, the paying customers and the users are different sets of people. When the latter happens, there can be tensions between the needs of those two groups, over privacy in particular but also over other issues. That’s particularly the case for ad-based business models, which rely on learning as much as possible about users in order to better serve advertisers.
That’s a tension many users are willing to live with in return for what’s usually a free or heavily discounted service. Google’s seven billion-user products (Gmail, Android, Chrome, Maps, Search, YouTube, and the Google Play Store) all, to some extent, rely on capturing user data to drive its ad business. But none of them would have a billion users unless those users found some value in the service and were willing to make some tradeoffs in terms of being tracked and shown ads. There’s a reasonable argument to be made that not all users understand those tradeoffs fully, but our recent privacy surveys (covered here and here) suggest most users actually do have a decent understanding and are willing to make these tradeoffs anyway, while a minority eschew these services because they’re not willing to do so.
Misunderstandings over Data and Ad Businesses
Importantly, though, ad-based businesses almost never sell user-identifiable data to third parties. That’s not their business model and it would be counterproductive. Instead, they typically either aggregate or anonymize that data before selling it or don’t sell it at all but rather simply use it to target advertising. Even Unroll.me wasn’t selling identifiable user data because Uber only wanted to know how many people were using Lyft, not which individuals were. It still breached users’ trust by looking into the content of emails in a way users didn’t know it would but that’s technically a different issue.
The recent blowup over ISP privacy regulations also led to some comically bad misrepresentations of what ISPs might do with users’ data, with one prominent individual offering to buy individual Senators’ browsing history, as if such a thing were possible (it isn’t). But that doesn’t stop people from ignorantly or deliberately misrepresenting what’s happening with ad- and data-based business models.
Another aspect of ad-based business models we’ve seen in recent months is actually yet a different form of tension. This time, not between the end users and advertisers, but between creators and advertisers. We’ve seen that tension in the boycott of YouTube and Google over ads appearing next to problematic content. In attempting to resolve these conflicts, Google has repeatedly sided with advertisers over creators in tightening standards for where ads can appear, both on YouTube and in its AdSense program, all of which has affected even legitimate creators’ ability to monetize their content.
The desire to sell advertising can therefore sometimes lead ad-based businesses to put users and creators of content second. But, whereas users have few alternatives to YouTube — by far the biggest online video site in the world — creators are starting to find alternatives for monetizing online video. But those alternatives are mostly other big ad-based businesses like Facebook, so the cycle is likely to continue to some extent.
Direct Monetization Solves Most of these Issues
The other two business models I mentioned in my original piece were direct business models – where the company sells a product directly to end users – and platform business models under which the company sells third party products and services to end users. Both of these have better user/customer alignment, with direct business models having 100% alignment between those two groups. Platform business models can still create some tensions, typically between the platform owner and the content owners over the revenue share or cut the platform takes of gross revenue. But the direct business model solves most of these tensions by making the value proposition to the user simple: buy a product or don’t.
This straightforwardness makes direct business models attractive to many – you know what you’re getting and you choose, at every step of the way, whether you want to continue to pay for the privilege. But it may mean paying more for the product in some cases because it’s not being monetized in other ways, although that’s again a tradeoff many customers are willing to make. On the other hand, some businesses try to mix the two, sometimes with bad results – Google’s recent introduction of paid promotion on its Google Home device is an example. When people pay for a hardware product like this and there’s no mention of advertising at the point of sale, it feels like much more of a betrayal when it does show up because it wasn’t part of the bargain.
The Price/Tension Equation is Key
That price/tension equation is key to the fight over the future of consumer technology. Of the biggest tech companies, some are choosing to go down the direct business model path, with Apple, for example, largely abandoning advertising as a business model across its products in the last year or two, while others, like Google and Facebook, continue to derive the vast majority of their revenue from ad-based models. Each will find an audience willing to make the tradeoffs inherent in their business model, whether sacrificing some privacy for a low price or paying a premium to avoid making that sacrifice. But I expect we’ll see many more examples of the tensions inherent in ad-based business models as the consumer technology industry expands into markets where many don’t have the means to pay the privacy premium.
In the late 1990s, I had the privilege of serving as an advisory board member to Xerox PARC’s venture arm. Our charter at the time was to go into Xerox PARC, look at what their many scientists were creating, and see if it had any potential for commercial applications. This was in the early days of the internet and Xerox PARC had been developing new software and hardware technologies the parent company wanted to either license or sell to other companies.
Last week, Amazon was awarded a patent for an on-demand manufacturing system designed to quickly produce clothing and other products — linen and curtains and such — only after they have been ordered. Amazon applied for the patent in late 2015 and, since then, it has been growing its fashion inventory as well as its own clothing brands. According to a Bloomberg report published in September 2016, Amazon was named the biggest online clothing seller. Amazon got to that position by adding items in direct proportion to the confidence consumers had in buying them online. Starting out with shoes (easy to size) and T-shirts (a relatively modest investment and also easy to size), Amazon grew its range, building from basic items to fashion powerhouse names such as Kate Spade, Vince, Ted Baker, and Michael Kors, just to name a few.
According to a recent report on commerce by GWI, 20% of online consumers in the US bought clothes online in the last quarter of 2016. Another 14% bought shoes. If you don’t think that’s significant, consider that only 14% of consumers bought online the item credited with “killing” brick-and-mortar stores: books.
Consumers are becoming more comfortable with buying clothes, shoes, and accessories online but new ways of selling and new technologies can push this market even further by making the whole experience more personal.
Fashion as a Service
Subscription services in shopping have been growing in popularity over the past few years. What in most cases started with organic fruit and vegetables soon developed to include razors, toothbrushes, dog treats, toys and, more recently, fashion items. Several companies deliver shirts and lingerie on a monthly or quarterly basis to happy but busy customers who like the consistency of a brand they love being delivered to them.
But the model is changing. While Uber and Lyft are getting all the publicity for revolutionizing transport and possibly driving – no pun intended – consumers away from owning cars to simply ordering one, fashion has also been moving to a more hybrid subscription-rental model. Le Tote is a good example of a successful service. It delivers a tote with items based on style and fit as well as personal preferences. You wear anything in your tote for as long as you want, then send it back when you are done, ready for a new order. If there is something you like, you can keep it and buy it at a discounted price.
The ability to refresh your wardrobe often with trendy clothes that fit your lifestyle, coupled with the convenience of delivery, is certainly something busy women, or women who do not enjoy the shopping experience, can appreciate. Adding further customization to the fit of the clothes would drive more people to try this kind of service and is where new technologies such as AR and connected sensors can play a role.
Visual Computing and the Buying Experience
With Augmented Reality and Virtual Reality coming to our phones and PCs, we see the potential for shopping experiences to be redefined. For example, being able to see how a color you picked will go with your furniture, size a new sofa in your family room, or try your new car on for size without having to go to a dealership is becoming a reality thanks to these technologies.
The possibilities are endless and fashion can benefit from this too. Already today there are apps that allow you to try an item on, such as glasses or a hat, via a picture of you. There have also been services that will ask you questions about your size, weight, ethnicity, and pant and collar sizes, then offer what they claim is the closest thing to a tailored garment. Some use a combination of the two methods and marry your inputted information with your picture to come up with a custom solution. Custom clothing company MTailor takes it a step further and offers an app that can measure you with the camera on your phone and deliver custom shirts, suits, and jeans.
These solutions have relied on 2D pictures and inputted info, which leave plenty of room for error. With smart fabrics and sensors being added to clothing, there are more options now to properly measure size and use that information to find the right clothing. LikeAGlove, which started a couple of years ago, uses smart leggings to measure your shape and then transfer the data to an app. Aimed at people on a fitness program to lose weight, the company claims to better measure your progress than a scale, which would not show how your body shape changes as you lose the pounds. The app also offers help in finding the jeans brands and models that best fit your shape.
If you combined sensors for shape tracking with AR, you could see how certain designs would look on you, have them tailored to your shape, and then have them custom-made and delivered. Amazon announced today Echo Look, an Alexa-enabled camera that lets you take pictures and short videos using built-in LED lighting and a depth-sensing camera with computer vision-based background blur. Echo Look will let you see yourself from every angle and offer a second opinion, thanks to AI, on which outfit is best, as well as suggest brands and items based on the images you collect in your style book.
Bots and Digital Assistants as Stylists
With so many businesses focusing on bots and big ecosystem players focusing on Digital Assistants, I would expect both will be able to serve my needs when it comes to shopping for clothes and accessories. Store-dedicated bots could help navigate the latest collections, or cross-store bots could fetch the item I want or need at the best price and delivery option. Offering a personal shopper that has information about your tastes, as well as your look and size, could be a differentiator customers are either prepared to pay for or see as an added benefit in an all-inclusive service. The focus here would be more on the actual shopping experience than on tailored clothing, serving consumers who do enjoy shopping online, like to do so efficiently and, most importantly, want to know they bought what best fits their needs.
For a more customized experience that shifts from a personal shopper to a “lady in waiting”, think how great it would be if my assistant could suggest my daily outfit based on the weather and the appointments on my calendar. That would be the perfect solution for busy people who do not want to default to wearing a gray t-shirt every day.
There is no question technology will continue to change the way I shop for clothes. What I want is for tech to help me find what I need, what fits, and what is best priced, all nicely wrapped up in a box, delivered to my door. Tech might still fail to make me a fashionista but it would have succeeded in making me a very happy shopper.
Of all the futuristic technologies that seem closer to becoming mainstream each day, robotics is the one that is likely to elicit both the strongest and widest range of reactions. It’s not terribly surprising if you really think about it. After all, robots in various forms offer the potential for both the most glorious beneficence and the most insidious evil. From performing superhuman feats to the complete destruction of the human race, it’s hard to imagine a technology that could have a more wide-ranging impact.
Of course, the practical reality of today’s robots is far from either of these extremes. Instead, they’re primarily focused on freeing our lives and our businesses of the drudgery of mundane tasks. Whether it’s automatically sweeping our floors or rapidly piecing together elements on an assembly line, the robots of today are laser-focused on the practical. Still, whenever most people think about robots in any form, I’m guessing visions of dystopian robot futures silently lurk in the back of their minds–whether people want to admit it or not.
We can’t help it, really. We have all been exposed to so many types of robotic visions in our various forms of entertainment for so long that it’s hard to imagine not being at least somewhat affected. Whether through the pioneering science fiction novels of Isaac Asimov, the giddy futurism of the Jetsons cartoons, the hellish destruction of the Terminator movies, or countless other examples, we all come to the concept of robotics with preconceived notions. Much more than with any other technology, it’s very difficult to approach robotics objectively.
Now that we’re starting to see some more interesting new advances in robotics-driven services—such as food and package delivery and, eventually, autonomous cars—the question becomes how will those loaded expectations impact our view and acceptance of these new offerings. At a simplistic level, it’s easy to say—and likely true—that we can accept these basic capabilities for what they are: minor conveniences. No need to worry about robotic delivery carts causing much more damage than scaring a few pets, after all.
In fact, initially, there is likely to be a “cool” factor of having something done by a robot. Just as with other new technologies, it may not even matter if it’s the best or most efficient way of achieving a particular task: the novelty will be considered a value unto itself. Eventually, though, we’ll likely start to turn a more critical eye to these capabilities, and only those that can offer some kind of lasting value will succeed.
But the real challenge will come when we start to combine robotics with Artificial Intelligence (AI) and deep learning. That’s where things can (and likely will) start to get both really exciting and really scary. The irony is that to achieve the kind of “Asimovian” robotic benevolence our most positive views of the technology bring to mind—whether that be robotic surgery, butler-like personal assistant services, or other dramatically beneficial capabilities—the machines are going to have to get smarter and more capable.
However, we’ve also seen how that movie ends—not well. Though admittedly a bit irrational, there’s no shaking the fear that we’re rapidly approaching a point in the evolution of technology—driven by this inevitable blending of robotics and software-driven machine learning—where some really big societal-impacting trends could start to develop. We won’t really be able to recognize them for some time, but it does feel like we’re on the cusp.
Of course, there is also the potential for some incredibly positive developments. Removing people from dangerous conditions, helping extend our ability to further explore both our world and our universe, letting people focus on the things that really matter to them, instead of things they have to do. As we move forward with robotics-driven technological advances and transition from science fiction to reality, the possibilities are indeed endless.
We should be ever mindful, however, of just how far we are willing to go.
At this point, the rumor mill surrounding Apple’s next iPhones, expected to be released in the fall, is well underway. There’s some consensus emerging around what we’ll see, at least in broad brush terms, but lots of details are still murky. Given what we seem to know at this point, I think there are a few big dilemmas Apple faces with regard to the positioning of the new phones.
There was an interesting article in the Atlantic that dove deep on how online shopping is causing such turmoil for brick and mortar retailers. It’s a good, long read. A paragraph stood out to me as the key to this story.
The last few years have seen a fascinating shift in storylines, as well as in the data around those storylines. Many of us who research consumer trends in the industry focus quite a bit on endpoints because they serve as gateways to broader software and services experiences. For this reason, our eyes have been squarely on studying what people do on smartphones, PCs, and tablets. Since 2010, when the iPad hit the scene, the role of the PC has come under great scrutiny. Is it a dying form factor? Is it something consumers no longer need? Is the smartphone the only device humans will use someday? Will the tablet kill the PC? These questions, and many more, have been a focal point in the consumer hardware discussion.
The debate is relevant because it informs businesses on where to focus their resources. It is abundantly clear the smartphone is the central and primary computing device for billions of people. Knowing this means any business should no doubt employ a mobile-first strategy with their software and services. Mobile-first simply means to assume the smartphone is the primary engagement point with your product. Of course, this will vary by the type of application. Something like Netflix, for example, is primarily consumed on larger-screen devices like PCs, TVs, and tablets. Microsoft Office and other enterprise or commercial applications are primarily used on PCs and Macs. In all these cases, where the application and workflow are better on larger screens, they still have a complementary mobile experience. We live in a multi-device world where most humans in developed markets like the US, Europe, China, etc., use both a PC and a smartphone for varying things throughout the day. But, because the smartphone is the computer we have with us at all times, it is crucial for even PC-first applications to have complementary experiences on the smartphone.
But, when it comes to consumer software and services, the strategy gets flipped. Mobile-first, or mobile-only, has been the mantra for developers and consumer software strategists for the last few years. But I’d like to argue that even many of these mobile-only apps or solutions can benefit from a complementary PC experience as well.
Interestingly, global data tells us the PC is still used heavily on a daily basis across nearly all demographics.
As you can see, the amount of time spent per day on PCs is still significant. Our estimates are that ~1.3 billion people personally own a PC, compared to the nearly three billion people who own a smartphone. The global average of time spent using a PC each day by those ~1.3 billion people is 3.54 hours. What became clear a few years ago was that the smartphone was not necessarily taking usage time away from the PC but was adding to the total time its owners spent using devices and being on the internet each day. Looking back through years of data, daily time spent using a PC has stayed roughly flat while daily time using a smartphone has grown dramatically. People seem to be using both devices, independently and in tandem, to browse the web more, communicate more, play games more, watch videos more, be on social media more, shop more, etc. It is also important to note that, globally, millennials still spend a lot of time on their PCs. The fallacy is to think the only way to reach millennials is with a mobile app. While a mobile app is the primary way to reach them, the data suggests it would be a mistake not to also offer some way to engage with your software or services from the PC.
The PC is still an important engagement point even in the mobile-first era. However, the strategy for bringing mobile experiences to the PC needs to understand and utilize the device’s benefits. The worst thing any developer or business can do is simply duplicate their mobile strategy on the PC. These hard lessons were learned when many apps and services failed because they just duplicated their desktop experiences on mobile and did not take advantage of the smartphone’s unique advantages.
If you agree with my logic, the debate turns to whether to just make a website or to make an app. To me, the path is clear — make an app. Both Windows and Apple offer app stores and, in many cases, the ideas I’ll share make more sense as an app than as a browser experience. Take Twitter, for example. Twitter is a mobile-first experience and a primary engagement point. Yet, the website and Twitter’s own desktop client are pretty poor compared to other client-side apps for macOS and Windows 10. I’d argue Twitter is losing a significant engagement point on the PC, given how much time people spend browsing the web for news and entertainment while on their PCs. Thinking of millennials, Snapchat is another example that comes to mind. We know millennials spend a lot of time on their PCs, and millennials with Macs engage quite heavily with iMessage on the Mac. The value of being able to text and message friends from the device you are in front of, in this case the PC, makes a lot of sense, and Snapchat’s chat feature is the sticky point for many millennials, so bringing that chat client to the desktop would make a lot of sense as well. The counter-argument is that it isn’t hard to pick up your smartphone, open the app, and do what you want to do. However, having observed a range of consumers who have both desktop and mobile versions of the same software, there is no arguing that being able to do what you need on the device you are already using is far superior. While it seems easy enough to pick up your smartphone to use an app you don’t have on your desktop, that misses the increased friction in that experience. I use Slack, for example, for a wide variety of work and personal things, and if Slack were not available on the desktop I would not use it nearly as much as I do.
I can see many cases where Instagram could benefit from a smart desktop app. Maybe Facebook could as well or, at least, bring Facebook Messenger to the desktop as an app. Most companies want to just offer a browser-based PC experience but, in that scenario, your experience gets buried among the many tabs consumers have open at any given time. Don’t make your PC experience just a tab in a browser — it will get lost. Apps offer rich notifications and a more visual experience. For this reason, I think the best strategy to re-engage with your customers on the PC is via an app, not a website.
Being mobile-first is the right strategy. Prioritize the mobile experience when you know that is the primary way your customers will engage. Just don’t forget your customers also spend many hours per day in front of their PCs and, in some cases, it is wise to think about how best to offer a complementary PC experience in the hope you can increase your total engagement time with your customers.
In this week’s Tech.pinions podcast Tim Bajarin, Jan Dawson and Bob O’Donnell discuss the wide range of developments from this week’s Facebook F8 Conference, as well as rumors that Apple may be developing a tool for monitoring diabetes.
This week, AMD released the latest in its family of Zen processors, the Ryzen 5 series. Targeted at DIY consumers and OEMs with retail prices ranging from $169 to $249, Ryzen 5 can address a much wider segment of the market than the Ryzen 7 processors launched last month that are priced as high as $499. The competing Intel processors in the Core i5 family sit in essentially the same price segment of the market but AMD Ryzen has a significant advantage in thread count with all released parts enabling multi-threading. Though Zen is at a deficit in per-clock performance compared to Intel’s Kaby Lake, a 2-3x improvement in threading capability offers substantial headroom for application performance.
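To see why a large thread-count advantage can outweigh a per-clock deficit, a quick Amdahl’s-law style calculation helps. The sketch below is illustrative only: the 10% per-thread deficit and the assumption that the workload is 80% parallelizable are hypothetical numbers chosen for the example, not measured figures for Ryzen or Kaby Lake.

```python
# Illustrative Amdahl's-law sketch: a chip with more threads but lower
# per-thread performance versus a faster-per-thread rival.
# All numbers are hypothetical assumptions for illustration.

def effective_throughput(per_thread_perf, threads, parallel_fraction):
    """Relative throughput on a workload that is only partly parallelizable."""
    serial = 1.0 - parallel_fraction
    # Amdahl's law: total time = serial part + parallel part split across threads
    time = serial + parallel_fraction / threads
    return per_thread_perf / time

# Hypothetical rival: 4 threads, per-thread performance normalized to 1.0
rival = effective_throughput(1.0, 4, 0.8)
# Hypothetical contender: 12 threads but a 10% per-thread deficit (0.9)
contender = effective_throughput(0.9, 12, 0.8)

print(f"4-thread chip:  {rival:.2f}")   # -> 2.50
print(f"12-thread chip: {contender:.2f}")  # -> 3.37
```

With these assumed numbers the 12-thread part comes out well ahead on a heavily threaded workload, while setting `parallel_fraction` near zero flips the result in favor of the faster-per-thread chip, which mirrors why single-threaded results tell a different story than rendering and transcoding benchmarks.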
Platform Value
Intel has had years of consumer mind share and channel market share in this segment without competition and AMD understands it needs to do more than just equalize metrics to make any significant market share moves. On top of the thread and core count advantages Ryzen 5 offers over Core i5, motherboards based on the AMD B350 chipset offer value-adds. The B350 chipset includes the ability to overclock both the CPU clock speed and memory for the Ryzen platform, all while adding support for interface technologies like M.2 NVMe SSDs and USB 3.1 connectivity. Intel’s competitive solution for low-cost motherboards is the B250 chipset but it locks consumers out from overclocking of any kind.
It’s good AMD decided to allow overclocking on the B350 chipset. Testing has proven that increased DDR4 memory speeds can have a dramatic impact on the performance of some applications, especially games. Given the controversy surrounding the Ryzen 7 processors and gaming, any avenue AMD can offer to improve this area is welcome.
Consumer Performance
Direct performance comparisons of Ryzen to Core start with the Ryzen 5 1600X and the Core i5-7600K. Having 6 cores and 12 threads on the 1600X gives AMD performance leads over the 7600K (4 cores and 4 threads) of a kind we haven’t seen since the original Athlon hit the market. Applications like Blender (used for 3D rendering) and Handbrake (for media creation and transcoding) show the power multi-threaded workloads can tap into on a Ryzen CPU. Even the 4-core, 8-thread Ryzen 5 1500X (priced $60 lower than the 1600X) is able to outpace the Intel CPUs in this segment.
Single threaded performance still belongs to Intel and its Kaby Lake architecture. Synthetics and a few applications like Audacity audio encoding bear this out and, though there aren’t many benchmarks that make the case, real-world experience and user interfaces are very often single thread limited.
One of the Achilles' heels of AMD's initial Ryzen 7 launch was PC gaming at lower resolutions like 1080p. The story remains mostly the same for Ryzen 5: the Core i5-7600K demonstrates better performance in most of our testing. In a few cases, particularly with "Ashes of the Singularity" and "Hitman", the Ryzen 5 1600X is able to hold its own, matching the results from Intel. Working with the Ashes developers, AMD was able to show the potential benefits of optimizing game engines for Ryzen, netting a 31% overall improvement at peak. The difficulty for AMD will be getting a wide array of game and engine developers to do the same, spending the time and money necessary to make the changes for more highly threaded processors.
Intel Reaction

Intel, for its part, has remained publicly silent about the moves AMD is making with Ryzen. Many in the industry and the DIY community have accused Intel of sitting on the market, unwilling to improve performance in the areas important to enthusiasts without competition to push it down that path. The validity of that opinion is tempered by knowing Intel has focused most of its resources on the mobile markets (both smartphone/tablet and notebook). Both process technology innovations and architectural shifts in Intel processors have been built to lower power consumption and improve instantaneous performance.
There is some buzz that Intel might be moving up the roadmap for forthcoming refresh processors in the desktop space to address the competition. I do not expect Intel to adjust current pricing of Core i5 or Core i7 processors in response to AMD, but I do expect specification and price adjustments with the next-generation processors to accommodate the evolution Ryzen has brought to the market. Expect more cores, more threads, and lower prices from Intel.
AMD has been able to deliver on its promise of a competitive consumer processor with both Ryzen 7 and Ryzen 5. Though it suffers from a potential pitfall with gaming performance currently, in any multi-threaded workload, Ryzen 5 stands out from Core i5 and does so in a dominating fashion. As the consumer software space continues to adapt to multitasking and highly threaded application workloads (AI, computer vision), AMD will continue to have the advantage.
In June, we will commemorate the 10th anniversary of the release of the iPhone. In recognition of this signature date, there's more than the average amount of speculation on what the 2017 edition of the iPhone will sport and hope that it might revitalize the smartphone sector, which is experiencing something of a slowdown.
I have no doubt the iPhone 8, X, or whatever it might be called, will be terrific – as nearly all high-end phones are today. Samsung, with its launch of the Galaxy S8 line last week, pushed the envelope even further, particularly with respect to screen size/display, and innovative features such as DeX.
But what has historically given Apple that cachet and ability to charge a premium for its products is the "ecosystem". When at the top of its game, Apple's hardware, software, apps, and media all work magically and seamlessly together. However, even more than the commoditization of the smartphone category, there has been a slow and steady erosion of the vaunted 'Apple Experience'. This mainly has to do with Apple's software and services, where the company has lost some of its edge. iTunes, which is now 16 years old, has become bloated – more of a turn-off than a turn-on. Apple's signature applications such as e-mail/contacts/calendar, photos, music, and TV are all OK, but they're not great. iCloud has not completely fulfilled its mission and an increasing number of Apple users see the whole iTunes/iCloud/Music blend as something of a hot mess.
All the while, Google has steadily gained. I’d argue devices and software in the Google/Android/Chrome world now work and sync more seamlessly than in the Apple/iOS/macOS world. Amazon has become the high beta company in tech, with keen innovations and successful products in hardware and software, while exploring new frontiers in areas such as AI. And Microsoft has staged a comeback of sorts, with successful transitions in cloud and a better reimagining of the ‘post-PC’ world, even without a smartphone product.
Apple’s recent hires and actions signal a new recognition and urgency. The company hired Shiva Rajaraman from Spotify to help reshape the music and video experience, new Apple TV executive Timothy D. Twerdahl was hired away from Amazon, and it appears the Mac Pro and iMac line will be getting more love. Reshaping the software and services experience seems to have become a priority.
So, what would a reimagined Apple experience look like? I suggest five pillars:
1. Revamp or Ditch iTunes. This product has had pile after pile of updates and refreshes but seems outdated and disjointed from Apple’s music, video, TV, and photo offerings. What, really, is the role of iTunes in a world of App Store, Apple Music, Apple TV, and iCloud? It should be renamed since today it’s mostly a store and ‘control center’ for settings and management of multiple devices (though some of that has been subsumed by iCloud). The user interface needs to be re-imagined and navigation/synchronization made simpler and more intuitive.
2. Improve iCloud. I feel like iCloud has changed from being the place where all content is shared and safely stored into something that must be managed and is needlessly complex. Many consumers still aren't fully comfortable with 'cloud everything' and how content moves on and off the device. Apple isn't doing itself any favors here. Example: when you enable 'family sharing' for music, you are told to "delete" your music and then "turn on iCloud", which will 'restore' your content. For any consumer who has, at some point, lost a hard drive, failed to do a backup, or somehow hasn't gotten this cloud thing right (i.e., most of us), this is a moment fraught with anxiety.
3. Determine What’s Next with Mail, Contacts, Calendar. These are signature productivity apps but Apple’s versions now seem more workmanlike. Is there something here that could revitalize the category and ‘delight’ rather than merely ‘satisfy’? Despite all the messaging alternatives, it still looks like email is here to stay.
4. Continue to Invest in the PC. Stagnant tablet sales, innovative new combo products on the Windows side, and the growing success of Chromebooks show the 'post-PC' world has not evolved in quite the way the late Steve Jobs imagined. The PC will still be the anchor productivity device for the foreseeable future, as shown in a recent survey by Creative Strategies, Inc. on Millennials' device preferences. Apple has work to do in figuring out how the PC and macOS fit into its world going forward. I'll also go out on a limb and argue this is one category where Apple should consider relinquishing its insistence on premium products at super-premium prices. One, because in the current product line, it's not justified. And two, because Apple doesn't want to cede the entire under-30 generation to other platforms. It might not be such a bad idea to have a solid but more affordable Mac to keep folks fully bought into the Apple ecosystem.
5. Regain the Service Halo. This is harder to quantify but my sense is Apple’s size, and intense pressure to grow, has created the perception the company tries to extract one’s dollar at nearly every opportunity. There was a time when you could get customer service help on the phone without having Apple Care (if you asked nicely). Or, if you brought in a cracked screen a month after you bought the latest iPhone, a ponytailed Apple Store employee would wink and hand you a new one, no questions asked. You felt like Apple had your back, in a way that felt different than other companies and justified, in part, the premium price for their products.
Ten years after the launch of the iPhone, the core of Apple is still very much there. But Silicon Valley’s other biggies – Google, Microsoft, Amazon, Facebook, and Netflix – are all now more significant forces in software, content, and services, making it more challenging for Apple to be in a class by itself as it was for a few years. Which makes me hope that Apple’s tenth anniversary iPhone is about more than just the phone.
I’m writing this column on a plane on my way home from attending Facebook’s F8 developer conference. More than any other developer conference I attend, Facebook’s is a crazy mix of near-term feature upgrades across its growing portfolio and out-there R&D work which won’t deliver real-world results for years to come. It also highlighted something of a chasm in Facebook’s innovation strategy, with its near-term focus on cloning competitors’ apps and features on the one hand, and mind-blowing research on the other. What Facebook needs, more than anything else right now, is to take the kind of thinking that’s driving its ten-year roadmap and put it on a shorter-term timeframe.
An Event of Two Halves
English soccer commentators are fond of referring to the sport as a game of two halves, meaning the two periods in the game can turn out completely differently and what happens in the first may be a poor predictor of what happens in the second. Facebook’s F8 was very much an event of two halves, with its two keynotes very different in their focus and tone.
Day 1 – Innovation by Proxy
Tuesday’s kickoff was dominated by here-and-now announcements about products Facebook and its developers are working on today. The first part was about all the ways Facebook has made cameras central to its apps in recent months and how it’s now going to evolve those cameras with an AR platform called Camera Effects. It went on to cover social VR and the Facebook Spaces app that’s launching for Oculus. It then ended with a discussion of how its Messenger Platform is evolving from last year’s somewhat misguided launch of bots.
All of this was about products consumers can use and developers can build for either today or in the very near future and much of it felt like stuff we’ve seen before, with minor tweaks. The AR platform is very reminiscent of Snapchat’s filters products, although opening it to developers rather than merely advertisers is a new twist. Facebook Spaces is an evolution of what was shown on stage at last year’s event and mimics other social VR products we’ve seen from smaller companies in the past. And Messenger’s second attempt at a platform feels a lot like some of the Asian messaging apps that have long done well in this space and, as such, is a lot less original.
It was easy, therefore, to come away from the day one keynote feeling Facebook has forgotten how to innovate, how to create truly new experiences and ideas and, ultimately, how to move its products forward without relying on features invented elsewhere. Granted, none of what was announced was bad. I think the AR features will be very popular if they live up to the concepts Facebook demoed on stage, the new version of the Messenger Platform feels much more focused and realistic in its aspirations, and Spaces is a decent proof of concept even if not yet a compelling social VR experience. Indeed, because so many of the ideas presented have been successful elsewhere, it’s easy to imagine them being that much more so with Facebook’s massive audience and network.
Day 2 – Mind-Blowing Ideas and Ambition
By contrast, then, the second day’s keynote was full of long-term thinking, massive ambition, and out-there ideas. I think the most frequent set of words mentioned by the various presenters was probably “years away” or words to that effect. Zuckerberg touched on the company’s ten-year roadmap – unveiled last year – during his slot on day one, and much of the day two stuff belongs late in the second half of that roadmap. Some of it may never even see the light of day.
But what characterized day two’s keynote announcements and discussions was their sheer difference from what’s been done before. While other big tech companies focus on evolving current user interfaces with combinations of touch, voice, and mixed reality, Facebook is dabbling in brain-computer interfaces, communication via neurons and skin sensors, rethinking communication networks, and more. If day one was all rather familiar, day two was familiar only in the sense we’ve seen some of this stuff in science fiction movies.
The creativity and imagination on display on the second day made lots of think pieces published Tuesday night and Wednesday morning about Facebook’s lack of innovation seem silly. Headlines later in the day on Wednesday gaped at Facebook’s ambitions to connect to your brain and talk through your skin. The contrast between the reactions to day one and day two is stark.
Bridging the Chasm
What we have, then, is a chasm between Facebook’s seeming inability to be imaginative in the short term and an abundance of creativity in its long-term thinking. What happens between the audaciousness of the company’s ten-year thinking and the reality of what gets released tomorrow that makes the here and now so much less interesting? Why does Facebook seem unable to innovate in such impressive ways in the short term when it’s clearly capable of that kind of imagination when freed from time constraints?
I suspect two things are going on. First, Facebook’s efforts here and now are constrained not just by time but by its current strategic and tactical priorities. Yes, it might like to do lots of things but, in the present, it’s competing with Snapchat, Twitter, Google, and others for users’ time and advertisers’ dollars and that drives certain imperatives, such as trying to win share of time back from interlopers, maximizing ad inventory, driving new revenue streams, and so on. Those prosaic short-term objectives drive tactical actions like cloning Snapchat features, pushing ads into new places across Facebook’s family of apps, and trying to tie together disparate parts of the business like social networking and VR.
But I don’t think that’s the whole problem. The other half of the problem is Facebook is now operating at such a massive scale and has had so many bad experiences in the past with big changes, it’s actually a little scared to innovate in big ways. When you have two billion users across all the countries in the world and dozens of languages, any small change is that much harder. That hasn’t stopped Facebook from shoehorning new features into the interface and I wrote recently about how Facebook has pushed some things too hard in ways that were user hostile, but those changes have again mostly been the unimaginative cloning ones rather than true innovations. Facebook seems to have lost some of its daring in moving its products forward, which is just the kind of “Day 2” thinking Jeff Bezos said he wanted to avoid in his recent Amazon shareholder letter.
What Facebook needs, then, is to allow some of the creativity and ambition that infuses its long-term R&D efforts to bleed back into its shorter-term product roadmap. To give its employees freedom to innovate in more dramatic ways and serve, not just today’s tactical priorities, but longer-term strategic ones too. And to start really inventing things here and now in real products and not just R&D projects with ten-year time horizons. Moonshots are great for burnishing a company’s innovation credentials but if that innovation is absent from the short-term product roadmap, it starts to look like the moonshot factory is not just in its own building but almost a separate entity entirely. That was the impression I was left with at the end of this year’s F8.
With the upcoming availability of the Samsung Galaxy S8, we were curious what consumers thought of the device and how interested they are in purchasing one. We teamed up with SurveyMonkey Audience to do some research on US consumers to better understand their interest level of the phone and its newest features. We also explored whether the Galaxy Note 7 battery issues were a factor in consumer interest and we threw some questions in around voice assistants for good measure. In all, we surveyed 923 consumers. These are the key findings.
Note 7 Battery Impact

In our annual fall smartphone study, we explored the issues surrounding the Note 7 and whether the media coverage and awareness of the battery problems led to a large amount of negative sentiment. In that study, fielded in the fall of 2016, we learned most consumers (62%, to be exact) did not see the Note 7 battery fires as a deterrent to purchasing a Samsung smartphone in the future. The number was even higher among existing Samsung smartphone owners, 73% of whom said the Note 7 issues would not deter them from purchasing a Samsung smartphone in the future. Knowing Samsung customers are a loyal bunch, we feel both percentages are good news for Samsung.
In this most recent survey, we found similar results. This study revealed 53% of consumers said the Note 7 issue has not impacted their interest in the Galaxy S8, while 17.7% said they were not sure or undecided. Only 28% said definitively the Note 7 battery problems negatively impacted their interest in the Galaxy S8. Again, knowing Samsung owners are a loyal bunch and are the most likely candidates to buy an S8, only 16% of existing Samsung smartphone owners said the Note 7 problems are impacting their interest in the new device.
Overall, I’m confident the data we have from the fall, and this most recent data, suggests the Note 7 fires were never a big roadblock for consumers to begin with and even less so now. This should alleviate any concern over the Note 7 fallout impacting the sales of any Samsung smartphones released this year.
Interest in the Galaxy S8

Overall, interest in the new S8 seems low. However, I expect Samsung to begin its marketing blitz and carriers to start heavily advertising the S8 in the coming weeks and months, which will help with interest over time. The more important breakdown is to look at interest in the S8 among existing smartphone owners and those looking to upgrade in the next three to six months.
Interest is higher among existing Samsung smartphone owners than any other group of consumers. More importantly, drilling down on folks who expect to upgrade their smartphone in the next 3-6 months, 36% of upgraders in that time frame are interested in the Galaxy S8, and 21.7% stated they were extremely interested. These are consumers looking to upgrade sooner rather than later, not interested in waiting until the fall. Again, the fact that 36% of consumers looking to upgrade are interested in the new Galaxy S8 bodes well for Samsung.
Looking deeper at consumers who indicated they have interest in the S8, the features that stood out most were the Infinity Display/larger screen (27%) and the eight-megapixel front-facing camera (23%). Bixby, the most hyped feature of the S8, scored relatively low, with only 13% of interested consumers saying it was the feature that interested them most. That leads into an interesting finding we have on voice assistants.
Voice Assistants are not yet a Purchase Driver

While usage of voice assistants like Siri, OK Google, Alexa, and Cortana has certainly been rising, they still have a long way to go to convince the market of their greater value. It may not be a surprise, but voice assistants are not the main feature or reason anyone is buying a smartphone; as the earlier points confirmed, purchase drivers are still mainly the camera and the screen. We wanted to get a sense of which voice assistant US consumers feel is the best, so we included a question in our study asking respondents which one they felt was the best. Below are the results.
First, Siri has the lead which speaks to a greater portion of US consumers having tried Siri compared to an alternative in order to form an opinion. Just looking at iPhone owners, the sentiment that Siri is the best jumps to 46.6%. Among Android owners, 36% said Google’s Assistant is the best. Interestingly, 11.9% of Android owners said they thought Siri was the best voice assistant while only 6.3% of iPhone owners said Google’s voice assistant was the best. But here is where I felt things got interesting.
This question, like all the questions in our study, allowed only a single answer: we asked consumers to choose the one that best fit their opinion. We gave them a simple "none of the above" option, and we gave them the chance to say they think "voice assistants are useless". Surprisingly, 29.4% of respondents deliberately chose the option saying they think voice assistants are useless. Consumers are a tough crowd, with a lot of convincing to do.
Lastly, we asked consumers what they thought of Bixby and whether they expected the new Samsung smart assistant to be better, worse, or the same. Interestingly, 13.2% of respondents showed some confidence in Samsung and Bixby, saying they think it will be better than Siri, Google Assistant, and Alexa. 38% said they felt it will be about the same, while the largest group, 43%, said they don't use any voice assistants and so have no opinion.
As we dug into this study, we uncovered more insights than I have room to share, but the key takeaway is Samsung remains a solid brand despite the Note 7 issues. Consumers are still showing interest in Samsung's latest products and the innovations the company is bringing to market. While voice assistants still have a lot of convincing to do to get consumers to trust and use them more, there is enough potential here for Samsung to keep investing in Bixby, since voice interfaces and voice assistants will become more valuable and desired features in the coming years.
I’ll have more to share on voice assistants and the voice UI soon as we are about to field our Voice Assistants 2.0 research study.
I have worked on speech and voice projects since I first interacted with Kai-Fu Lee at Apple, who, in the early 1990s, was brought in to research voice and speech recognition for what would have been used in Apple's Newton. Not long after it became clear the Newton did not have real legs, Microsoft lured him away from Apple to head up its first serious work on voice and speech recognition.
In the 25 years or so since that time, voice and speech recognition has evolved a great deal and is now used in all types of applications. With artificial intelligence applied to voice, Google, Apple, Microsoft, Amazon, and others have been pushing their voice solutions as a platform and a new user interface that helps them interact with customers and provide new types of apps and services.
Recently, Amazon opened up the Alexa voice interface to hardware and software vendors to add a voice UI with direct links to Amazon's apps and services. Apple's Siri, Google's Now, and Microsoft's Cortana are also used as voice UIs that work with third-party products and tie back to each company's services or dedicated applications. In this sense, voice has become an important new platform for companies to innovate on, and AI-driven voice is a viable platform for building new apps and services.
Although AI and voice as a platform will continue to be important, I sense a real shift coming: AR will become the most significant new platform for innovation relatively soon.
Pokémon Go introduced AR to a broad consumer audience and the tech world took note. Once companies put their strategic thinking caps on, they quickly realized the idea of integrating virtual images, video, and information on top of real-world settings has a lot of potential.
To date, most AR lives in games like Pokémon Go and apps like Snapchat. But AR becoming an actual platform within an OS, one that could drive a host of innovative apps and services, is just around the corner. AR platforms will most likely develop on smartphones first and eventually extend to some type of glasses or goggles as an extension of the smartphone's user interface. For the next few years, though, AR will be introduced and integrated into the smartphone experience, making it possible to blend virtual worlds into the real world.
Google already has an Android platform for AR called Tango, and Lenovo has brought the first Tango phone to market. However, the Tango platform is half-baked and I am not clear how serious Google is about AR, given the first generation of Tango smartphones on the market today. Google still seems to be pushing harder into VR with Daydream, and Tango seems to be more of an experiment. But that might change later this year if Apple comes out with its own AR platform, something a lot of people believe Apple has up its sleeve with the next-generation iPhone. We should get an AR update from Google at its I/O developer conference next month.
Given the way Apple attacks markets with new software and uses it to sell new hardware, it makes me think Apple could actually be one of the companies that could bring AR to the mainstream market.
Here is the scenario I believe could evolve for Apple to make AR a household name.
First, I would expect Apple to add specific new hardware features to a next-generation iPhone. These could include extra cameras (perhaps incorporating a 360-degree capability), new types of proximity sensors, a touch screen more sensitive to toggling between virtual and real worlds, and perhaps new audio features, such as some type of surround sound that could make a virtual scene come alive.
Second, they would create a dedicated AR software layer that sits on top of iOS that serves as an extended platform tied specifically to any new hardware-related features. That would be followed by a special SDK for developers who could create new and innovative apps for AR on a new iPhone.
If Apple does add AR to new iPhones, I suspect it would pre-seed five or six key developers with the AR SDK during the summer so that, when the new iPhone launches in September, Apple can show off those apps along with the homegrown ones it would create itself. This is pretty much the roadmap Apple follows when it introduces any major new device or significant new features for the iPad or iPhone, and it is very likely Apple would follow this plan should it use the new iPhone to introduce AR this year.
Given the secrecy of Apple, I doubt we will hear anything about AR at Apple’s Worldwide Developers conference in San Jose in early June.
But what is most important, should Apple enter the AR market, is that it would provide a powerful new AR platform developers can innovate around, one that serves as a vehicle to bring AR to the mainstream. This would throw down a major challenge to Google, Samsung, Microsoft, and Amazon to create their own AR platforms, and it could become the next major platform gold rush, driving new tech growth over the next three to four years.
The other company that could bring AR to the masses quickly is Facebook. At its F8 conference this week, Facebook showed off a new camera that will be at the heart of a new AR platform that can be used to add virtual objects to its app.
“Facebook is going to use the camera part of the Facebook app to build a new platform for augmented reality by implementing camera effects. Standard effects already used on other apps, such as face masks and style transfers, will be available from the start. Users will be able to create their own since it will be an open platform. The new AR platform will be launched as an open beta today.
Facebook hopes to take further advantage of developing technologies such as Simultaneous Localisation and Mapping (SLAM), which allows the camera to plot where an object is in the real world so AR content can be placed accurately in the ‘real world’. Additionally, Facebook is working on technology that allows the conversion of 2D still images into 3D representations that can be modified with AR. The object recognition being introduced to the app means the camera can ‘recognise’ the size, depth, and location of an object so it can be manipulated within the AR space.”
What is important to understand is the commonality between Facebook and Apple: both are developing an AR platform and an SDK, with software developers playing the central role in creating innovative AR apps. Although voice as a platform will continue to grow and be important, my sense is AR is the next major platform from which we will see the most innovation in the near future.