As a market research analyst, I’m constantly searching for new data points when I read the news, talk with people, or walk the aisles at a brick and mortar store. This week, I noticed something interesting at Costco: There were three in-store displays of PCs designed specifically for gamers. There was a Lenovo Legion-branded notebook ($999), an ASUS Republic of Gamers notebook ($999), and an Acer Predator desktop ($1,299).
Based on my ongoing conversations with PC vendors, component companies, and other retailers, I knew gaming PCs had become a hot topic. But seeing three prominently displayed in Costco drove home the fact that PCs designed specifically for gaming have officially moved from a large and very profitable niche to a serious mainstream business. (And that’s leaving aside the serious dollars associated with the rise of eSports, which merits its own future column.)
Incidentally, it’s worth noting many people incorrectly presume Costco shoppers are cheap. They’re not. They’re savvy shoppers willing to spend when a product is worth the money. And they know they can return items that disappoint them, which is another reason gaming PCs showing up there is so interesting.
Serious About Play
Major PC vendors have long coveted a slice of the gaming PC market, which is why Dell bought Alienware in 2006 and HP bought Voodoo that same year. And, despite the ongoing consolidation in the PC market, boutique gaming vendors such as CyberPower, Falcon Northwest, and iBuyPower are still going strong.
Why focus on gaming? Because in a market where margins are constantly under downward pressure, gamers are often willing (and able) to spend more to get the best. The fastest processor, the highest quality RAM, the speediest SSDs, and top-end graphics. And they want it wrapped in a slick chassis, with a high-quality display, keyboard, and input device. For many years, gamers insisted on desktops (often self-built), which let them swap out components to stay on top of the latest technology. Today, an increasing number are shifting to high-powered gaming notebooks.
Gaming is unique in that it is one of the last areas where high-performance PC components directly impact the quality of the consumer experience. In just about every other consumer-centric use case, the pain points are more likely the network than the hardware. But, when you buy a top-shelf gaming PC, you see a direct benefit in terms of frame rates and quality of play.
Spend More, More Often
My IDC colleague Linn Huang recently completed a very interesting survey of U.S. consumers in which he asked many deep-dive questions about past notebook and desktop purchases, usage, lifetimes, and replacement plans. He also asked about gaming and captured some great data points from self-proclaimed gamers. Most notable: Respondents who self-identified as hardcore gamers on average spent about $875 for their current desktop or $776 for their current notebook. Self-identified gaming enthusiasts spent $810 and $735 respectively. Meanwhile, those who identified as casual gamers spent $698 and $590, while non-gamers said they spent an average of $669 and $660.
The fact gamers will spend more to buy a new notebook or desktop is reason enough for the PC industry to be paying close attention. But IDC’s survey also reflected another key element: They’re likely to buy new PCs more often, too.
When we asked respondents what typically triggers the need to buy a new notebook, 65% of non-gamers said they replace a notebook “when it wears down or breaks,” while just 15% of hardcore gamers chose that answer. The most common reason (24%) hardcore gamers said they replace their notebook? “I replace my notebook when a new technology comes out that warrants an upgrade.”
That’s the kind of customer any business wants. And today’s gamers are leading the charge in a new area that also requires high compute: virtual reality. This year, we’ll see vendors ship new VR products designed to drive a good experience using a less powerful PC. But for the foreseeable future, I expect the very best VR experience to occur using a high-end gaming PC. I’ve been using Dell’s Alienware 13 to test the HTC Vive VR rig, and it drives a great VR experience. Starting price for the notebook: $2,049.
The final reason PC companies are so keen to grab a portion of the lucrative and growing gaming PC market? It’s an area where Apple continues to decline to participate and there’s no reason to believe that will change any time soon.
This week saw a furor over Unroll.me, a service which offers to unsubscribe users from unwanted emails but which apparently sold user data to Uber in the past in a way that wasn’t transparent to users. The reaction to the revelations was predictable: some decried all ad-based business models using cliches like, “if you’re not paying, you’re the product”, while others said users were naive for imagining a free service wasn’t monetizing their data in some way. Every time I see this happen, I wish we could get beyond the simplistic painting of all ad-based services with the same brush and have a more nuanced conversation about ad-based business models.
I wrote a piece for Techpinions almost three years ago about business models and it’s worth referring back to it. In that piece, I talked about three broad categories of business models and the implications each has for what I called user/customer alignment. What I meant was that, under some business models, users and customers are the same people. Under others, the paying customers and the users are actually different sets of people. When the latter happens, there can sometimes be tensions between the needs of those two sets of customers over privacy in particular but also over other issues. That’s particularly the case for ad-based business models which rely on learning as much as possible about users in order to better serve advertisers.
That’s a tension many users are willing to live with in return for what’s usually a free or heavily discounted service. Google’s seven billion-user products (Gmail, Android, Chrome, Maps, Search, YouTube and the Google Play Store) all, to some extent, rely on capturing user data to drive its ad business. But none of them would have a billion users unless those users found some value in the service and were willing to make some tradeoffs in terms of being tracked and shown ads. There’s a reasonable argument to be made that not all users understand those tradeoffs fully, but our recent privacy surveys (covered here and here) suggest most users actually do have a decent understanding and are willing to make these tradeoffs anyway, while a minority eschew these services because they’re not willing to do so.
Misunderstandings over Data and Ad Businesses
Importantly, though, ad-based businesses almost never sell user-identifiable data to third parties. That’s not their business model and it would be counterproductive. Instead, they typically either aggregate or anonymize that data before selling it or don’t sell it at all but rather simply use it to target advertising. Even Unroll.me wasn’t selling identifiable user data because Uber only wanted to know how many people were using Lyft, not which individuals were. It still breached users’ trust by looking into the content of emails in a way users didn’t know it would, but that’s technically a different issue.
The recent blowup over ISP privacy regulations also led to some comically bad misrepresentations of what ISPs might do with users’ data, with one prominent individual offering to buy individual Senators’ browsing history, as if such a thing were possible (it isn’t). But that doesn’t stop people from ignorantly or deliberately misrepresenting what’s happening with ad- and data-based business models.
Another aspect of ad-based business models we’ve seen in recent months is actually yet a different form of tension. This time, not between the end users and advertisers, but between creators and advertisers. We’ve seen that tension in the boycott of YouTube and Google over ads appearing next to problematic content. In attempting to resolve these conflicts, Google has repeatedly sided with advertisers over creators in tightening standards for where ads can appear, both on YouTube and in its AdSense program, all of which has affected even legitimate creators’ ability to monetize their content.
The desire to sell advertising can therefore sometimes lead ad-based businesses to put users and creators of content second. But, whereas users have few alternatives to YouTube — by far the biggest online video site in the world — creators are starting to find alternatives for monetizing online video. But those alternatives are mostly other big ad-based businesses like Facebook, so the cycle is likely to continue to some extent.
Direct Monetization Solves Most of these Issues
The other two business models I mentioned in my original piece were direct business models – where the company sells a product directly to end users – and platform business models under which the company sells third party products and services to end users. Both of these have better user/customer alignment, with direct business models having 100% alignment between those two groups. Platform business models can still create some tensions, typically between the platform owner and the content owners over the revenue share or cut the platform takes of gross revenue. But the direct business model solves most of these tensions by making the value proposition to the user simple: buy a product or don’t.
This straightforwardness makes direct business models attractive to many – you know what you’re getting and you choose, at every step of the way, whether you want to continue to pay for the privilege. But it may mean paying more for the product in some cases because it’s not being monetized in other ways, although that’s again a tradeoff many customers are willing to make. On the other hand, some businesses try to mix the two, sometimes with bad results – Google’s recent introduction of paid promotion on its Google Home device is an example. When people pay for a hardware product like this and there’s no mention of advertising at the point of sale, it feels like much more of a betrayal when it does show up because it wasn’t part of the bargain.
The Price/Tension Equation is Key
That price/tension equation is key to the fight over the future of consumer technology. Of the biggest tech companies, some are choosing to go down the direct business model path, with Apple, for example, largely abandoning advertising as a business model across its products in the last year or two, while others, like Google and Facebook, continue to derive the vast majority of their revenue from ad-based models. Each will find an audience willing to make the tradeoffs inherent in their business model, whether sacrificing some privacy for a low price or paying a premium to avoid making that sacrifice. But I expect we’ll see many more examples of the tensions inherent in ad-based business models as the consumer technology industry expands into markets where many don’t have the means to pay the privacy premium.
In the late 1990s, I had the privilege of serving as an advisory board member to Xerox PARC’s venture arm. Our charter at the time was to go into Xerox PARC and look at what their many scientists were creating and see if it had any potential for commercial applications. This was in the early days of the internet and Xerox PARC had been developing both new software and hardware technologies the parent company wanted to either license or sell to other companies.
Last week, Amazon was awarded a patent for an on-demand manufacturing system designed to quickly produce clothing and other products — linen and curtains and such — only after they have been ordered. Amazon applied for the patent in late 2015 and, since then, it has been growing its fashion inventory as well as its own clothing brands. According to a Bloomberg report published in September 2016, Amazon was named the biggest online clothing seller. Amazon got to that position by adding items in step with the confidence consumers had in buying online. Starting out with shoes (easy to size) and T-shirts (a relatively modest investment and also easy to size), Amazon grew its range, building from basic items to fashion powerhouse names such as Kate Spade, Vince, Ted Baker, and Michael Kors, just to name a few.
According to a recent report on commerce by GWI, 20% of online consumers in the US bought clothes online in the last quarter of 2016. Another 14% bought shoes. If you don’t think that’s significant, what if I told you that only 14% of consumers bought online the item that “killed” brick and mortar stores: books?
Consumers are becoming more comfortable with buying clothes, shoes, and accessories online but new ways of selling and new technologies can push this market even further by making the whole experience more personal.
Fashion as a Service
Subscription services in shopping have been growing in popularity over the past few years. What in most cases started with organic fruit and vegetables, soon developed to include razors, toothbrushes, dog treats, toys and, more recently, fashion items. Several companies deliver shirts and lingerie on a monthly or quarterly basis to happy but busy customers who like the consistency of a brand they love being delivered to them.
But the model is changing. While Uber and Lyft are getting all the publicity for revolutionizing transport and possibly driving, no pun intended, consumers away from owning cars to simply ordering one, fashion has also been moving to a more hybrid subscription rental service. Le Tote is a good example of a successful service. They deliver a tote with items based on style and fit as well as personal preferences. You wear anything in your tote for as long as you want, then send it back when you are done, ready for a new order. If there is something you like, you can keep it and buy it at a discounted price.
The ability to change your wardrobe often with trendy clothes that fit your lifestyle needs, coupled with the convenience of delivery, is certainly something busy women, or women who do not enjoy the shopping experience, can appreciate. Adding further customization to the fit of the clothes would drive more people to try this kind of service and is where new technologies such as AR and connected sensors can play a role.
Visual Computing and the Buying Experience
With Augmented Reality and Virtual Reality coming to our phones and PCs, we see the potential for shopping experiences to be redefined. For example, being able to see how a color you picked will look on your walls alongside your furniture, size a new sofa in your family room, or try a new car on for size without having to go to a dealership is becoming a reality thanks to AR.
The possibilities are endless and fashion can benefit from this too. Already today there are apps that allow you to try an item on, such as glasses or a hat, via a picture of you. There have also been services that will ask you questions about your size, weight, ethnicity, pants, and collar sizes then offer what they claim is the closest thing to a tailored garment. Some use a combination of the two methods and marry your inputted information with your picture to come up with a custom solution. Custom clothing company MTailor takes it a step further and offers an app that can measure you with the camera on your phone and deliver custom shirts, suits, and jeans.
These solutions have relied on 2D pictures and inputted info, which leave plenty of room for error. With smart fabric and sensors being added to clothing, there are more options now to properly measure size and use that information to find the right clothing. LikeAGlove started a couple of years ago with smart leggings that measure your shape and then transfer the data to an app. Aimed at people who are on a fitness program to lose weight, they claim to better measure your progress compared to a scale, which would not help you see how your body shape changes as you lose the pounds. The app also offers help in finding the jeans brands and models that best fit your shape.
If you combined sensors for shape tracking with AR, you could see how certain designs would look on you, then have them tailored to your shape, custom-made, and delivered. Amazon today announced Echo Look, an Alexa-enabled camera that lets you take pictures and short videos using built-in LED lighting and a depth-sensing camera with computer vision-based background blur. Echo Look will let you see yourself from every angle and offer a second opinion, thanks to AI, on which outfit is best as well as suggest brands and items based on the images you collect in your style book.
Bots and Digital Assistants as Stylists
With so many businesses focusing on bots and big ecosystem players focusing on Digital Assistants, I would expect both will be able to serve my needs when it comes to shopping for clothes and accessories. Store-dedicated bots could help navigate the latest collections, or cross-store bots could fetch the item I want at the best price and delivery option. Offering a personal shopper that has information about your tastes, as well as your look and size, could be a differentiation customers are either prepared to pay for or see as an added benefit in an all-inclusive service. The focus here would be more on an actual shopping experience than on tailored clothing, for those consumers who do enjoy shopping online and like to do so efficiently but who, most importantly, want to know they bought what best fits their needs.
For a more customized experience that shifts from a personal shopper to a “lady in waiting”, think how great it would be if my assistant could suggest my daily outfit based on the weather and the appointments on my calendar. That would be the perfect solution for busy people who do not want to default to wearing a gray t-shirt every day.
There is no question technology will continue to change the way I shop for clothes. What I want is for tech to help me find what I need, what fits, and what is best priced, all nicely wrapped up in a box, delivered to my door. Tech might still fail to make me a fashionista but it would have succeeded in making me a very happy shopper.
Of all the futuristic technologies that seem closer to becoming mainstream each day, robotics is the one that is likely to elicit both the strongest and widest range of reactions. It’s not terribly surprising if you really think about it. After all, robots in various forms offer the potential for both the most glorious beneficence and the most insidious evil. From performing superhuman feats to the complete destruction of the human race, it’s hard to imagine a technology that could have a more wide-ranging impact.
Of course, the practical reality of today’s robots is far from either of these extremes. Instead, they’re primarily focused on freeing our lives and our businesses of the drudgery of mundane tasks. Whether it’s automatically sweeping our floors or rapidly piecing together elements on an assembly line, the robots of today are laser-focused on the practical. Still, whenever most people think about robots in any form, I’m guessing visions of dystopian robot futures silently lurk in the back of their minds–whether people want to admit it or not.
We can’t help it, really. We have all been exposed to so many types of robotic visions in our various forms of entertainment for so long that it’s hard to imagine not being at least somewhat affected. Whether through the pioneering science fiction novels of Isaac Asimov, the giddy futurism of the Jetsons cartoons, the hellish destruction of the Terminator movies, or countless other examples, we all come to the concept of robotics with preconceived notions. Much more than with any other technology, it’s very difficult to approach robotics objectively.
Now that we’re starting to see some more interesting new advances in robotics-driven services—such as food and package delivery and, eventually, autonomous cars—the question becomes how those loaded expectations will impact our view and acceptance of these new offerings. At a simplistic level, it’s easy to say—and likely true—that we can accept these basic capabilities for what they are: minor conveniences. No need to worry about robotic delivery carts causing much more damage than scaring a few pets, after all.
In fact, initially, there is likely to be a “cool” factor of having something done by a robot. Just as with other new technologies, it may not even matter if it’s the best or most efficient way of achieving a particular task: the novelty will be considered a value unto itself. Eventually, though, we’ll likely start to turn a more critical eye to these capabilities, and only those that can offer some kind of lasting value will succeed.
But the real challenge will come when we start to combine robotics with Artificial Intelligence (AI) and deep learning. That’s where things can (and likely will) start to get both really exciting and really scary. The irony is that to achieve the kind of “Asimovian” robotic benevolence that our most positive views of the technology bring to mind—whether that be robotic surgery, butler-like personal assistant services, or other dramatically beneficial capabilities—the machines are going to have to get smarter and more capable.
However, we’ve also seen how that movie ends—not well. Though admittedly a bit irrational, there’s no shaking the fear that we’re rapidly approaching a point in the evolution of technology—driven by this inevitable blending of robotics and software-driven machine learning—where some really big societal-impacting trends could start to develop. We won’t really be able to recognize them for some time, but it does feel like we’re on the cusp.
Of course, there is also the potential for some incredibly positive developments. Removing people from dangerous conditions, helping extend our ability to further explore both our world and our universe, letting people focus on the things that really matter to them, instead of things they have to do. As we move forward with robotics-driven technological advances and transition from science fiction to reality, the possibilities are indeed endless.
We should be ever mindful, however, of just how far we are willing to go.
At this point, the rumor mill surrounding Apple’s next iPhones, expected to be released in the fall, is well underway. There’s some consensus emerging around what we’ll see, at least in broad brush terms, but lots of details are still murky. Given what we seem to know at this point, I think there are a few big dilemmas Apple faces with regard to the positioning of the new phones.
There was an interesting article in the Atlantic that dove deep on how online shopping is causing such turmoil for brick and mortar retailers. It’s a good, long read. A paragraph stood out to me as the key to this story.
The last few years have seen a fascinating shift in storylines as well as the data around those storylines. Many of us who research consumer trends in the industry focus quite a bit on endpoints because they serve as gateways to broader software and services experiences. For this reason, our eyes have been squarely on studying what people do on smartphones, PCs, and tablets. Since 2010, when the iPad hit the scene, the role of the PC has come under great scrutiny. Is it a dying form factor? Is it something consumers no longer need? Is the smartphone the only device humans will use someday? Will the tablet kill the PC? These questions, and many more, have been a focal point in the consumer hardware discussion.
The debate is relevant because it informs businesses on where to focus their resources. It is abundantly clear the smartphone is the central and primary computing device for billions of people. Knowing this means any business should no doubt employ a mobile-first strategy with their software and services. Mobile-first simply means to assume the smartphone is the primary engagement point with your product. Of course, this will vary by the type of application. Something like Netflix, for example, is primarily consumed on larger screen devices like PCs, TVs, and tablets. Microsoft Office and other enterprise or commercial applications are primarily used on PCs and Macs. In all these cases, where the application and workflow are better on larger screens, they still have a complementary mobile experience. We live in a multi-device world where most humans in developed markets like the US, Europe, China, etc., use both a PC and a smartphone for varying things throughout the day. But, because the smartphone is the computer we have with us at all times, it is crucial for even PC-first applications to have complementary experiences on the smartphone.
But, when it comes to consumer software and services, the strategy gets flipped. Mobile-first, or mobile-only, has been the mantra for developers and consumer software strategists for the last few years. But I’d like to argue that even many of these mobile-only apps or solutions can benefit from a complementary PC experience as well.
Interestingly, global data tells us the PC is still used heavily on a daily basis across nearly all demographics.
As you can see, the amount of time spent per day on PCs is still significant. Our estimates are that ~1.3 billion people personally own a PC, compared to the nearly three billion people who own a smartphone. The global average of time spent using a PC each day by those ~1.3 billion people is 3.54 hours. What became clear a few years ago was that the smartphone was not necessarily taking usage time away from the PC but was adding to the total time its owners spent using devices and being on the internet each day. Looking back through years of data, daily time spent using a PC has stayed roughly flat while daily time using a smartphone has grown dramatically. People seem to be using both devices independently and in tandem to browse the web more, communicate more, play games more, watch videos more, be on social media more, shop more, etc. It is also important to note that globally, millennials still spend a lot of time on their PCs as well. The fallacy is to think the only way to reach millennials is with a mobile app. While a mobile app is the primary way to reach millennials, the data suggests it would be a mistake to not also offer them some way to engage with your software or services while they are at their PC as well.
The PC is still an important engagement point even in the mobile-first era. However, the strategy for bringing mobile experiences to the PC needs to understand and utilize the device’s benefits. The worst thing any developer or business can do is simply duplicate their mobile strategy on the PC. These hard lessons were learned when many apps and services failed because they just duplicated their desktop experiences on mobile and did not take advantage of the smartphone’s unique advantages.
If you agree with my logic, the debate will turn to whether to just make a website or build an app. To me, the path is clear — make an app. Both Windows and Apple offer app stores and, in many cases, the ideas I’ll share make more sense as an app than as a browser experience. Take Twitter for example. Twitter is a mobile-first experience and a primary engagement point. Yet, the website and Twitter’s own desktop client are pretty poor in comparison to other client-side apps for macOS and Windows 10. I’d argue Twitter is losing a significant engagement point on the PC, given how much time people spend browsing the web for news and entertainment while on their PCs. Thinking of millennials, Snapchat is another example that comes to mind. We know millennials spend a lot of time on their PCs, and millennials with Macs engage quite heavily with iMessage on their Mac. The value of being able to text and message friends from the device you are in front of, in this case the PC, makes a lot of sense. Snapchat’s chat feature is the sticky point for many millennials, so even bringing just the chat client to the desktop would make a lot of sense. The counter-argument is that it isn’t that hard to pick up your smartphone, open the app, and do what you want to do. However, having observed a range of consumers who have both desktop and mobile apps of the same software, there is no arguing that being able to do what you want or need to do on the device you are using is far superior. While it seems easy enough to just pick up your smartphone to use an app you don’t have on your desktop, that misses the reality of the increased friction in that experience. I use Slack, for example, for a wide variety of work and personal things, and if Slack were not available on the desktop I would not use it nearly as much as I do.
I can see many cases where Instagram could benefit from a smart desktop app. Maybe Facebook could as well or, at least, bring Facebook Messenger to the desktop as an app. Most companies want to just offer a browser-based PC experience but, in that scenario, your experience gets buried among the many tabs consumers have open at any given time. Don’t make your PC experience just a tab in a browser; it will get lost. Apps offer rich notifications and a more visual experience. For this reason, I think the best strategy to re-engage with your customers on the PC is via an app, not a website.
Being mobile-first is the right strategy. Prioritize the mobile experience when you know that is the primary way your customers will engage. Just don’t forget your customers also spend many hours per day in front of their PCs and, in some cases, it is wise to think about how best to offer a complementary PC experience in the hope you can increase your total engagement time with your customers.
In this week’s Tech.pinions podcast Tim Bajarin, Jan Dawson and Bob O’Donnell discuss the wide range of developments from this week’s Facebook F8 Conference, as well as rumors that Apple may be developing a tool for monitoring diabetes.
This week, AMD released the latest in its family of Zen processors, the Ryzen 5 series. Targeted at DIY consumers and OEMs with retail prices ranging from $169 to $249, Ryzen 5 can address a much wider segment of the market than the Ryzen 7 processors launched last month that are priced as high as $499. The competing Intel processors in the Core i5 family sit in essentially the same price segment of the market but AMD Ryzen has a significant advantage in thread count with all released parts enabling multi-threading. Though Zen is at a deficit in per-clock performance compared to Intel’s Kaby Lake, a 2-3x improvement in threading capability offers substantial headroom for application performance.
Platform Value
Intel has had years of consumer mind share and channel market share in this segment without competition, and AMD understands it needs to do more than just equalize metrics to make any significant market share moves. On top of the thread and core count advantages Ryzen 5 offers over Core i5, motherboards based on the AMD B350 chipset offer value-adds. The B350 chipset includes the ability to overclock both the CPU clock speed and memory on the Ryzen platform, all while adding support for interface technologies like M.2 NVMe SSDs and USB 3.1 connectivity. Intel’s competitive solution for low-cost motherboards is the B250 chipset, but it locks consumers out of overclocking of any kind.
It’s good that AMD decided to allow overclocking on the B350 chipset. Testing has proven that increased DDR4 memory speeds can have a dramatic impact on the performance of some applications, especially games. Given the controversy surrounding the Ryzen 7 processor and gaming, any avenue AMD can offer to improve this area is welcome.
Consumer Performance
Direct performance comparisons of Ryzen to Core start with the Ryzen 5 1600X and the Core i5-7600K. Having 6 cores and 12 threads on the 1600X gives AMD performance leads over the 7600K (4 cores and 4 threads) of a kind we haven’t seen since the Athlon first hit the market. Applications like Blender (used for 3D rendering) and Handbrake (for media creation and transcoding) show the power multi-threaded workloads can tap into on a Ryzen CPU. Even the 4-core, 8-thread Ryzen 5 1500X (priced $60 lower than the 1600X) is able to outpace the Intel CPUs in this segment.
Single threaded performance still belongs to Intel and its Kaby Lake architecture. Synthetics and a few applications like Audacity audio encoding bear this out and, though there aren’t many benchmarks that make the case, real-world experience and user interfaces are very often single thread limited.
One of the Achilles’ heels of AMD’s initial Ryzen 7 processor launch centered on PC gaming at lower resolutions like 1080p. The story remains mostly the same for Ryzen 5, where the Core i5-7600K demonstrates better performance in most of our testing. In a few cases, particularly with “Ashes of the Singularity” and “Hitman”, the Ryzen 5 1600X is able to hold its own, matching the results from Intel. AMD was able to show the potential benefits of optimizing game engines for Ryzen through the Ashes developers, netting a 31% overall improvement at peak. The difficulty for AMD will be getting a wide array of game and engine developers to do the same, spending the time and money to make the changes necessary for more highly threaded processors.
Intel Reaction
Intel, for its part, has remained publicly silent about the moves AMD is making with Ryzen. Many in the industry and DIY community have accused Intel of sitting on the market, unmoved to improve performance in the areas important to enthusiasts without competition to push it down that path. The validity of that opinion is tempered by knowing Intel has focused most of its resources on the mobile markets (both smartphone/tablet and notebook). Both process technology innovations and architectural shifts on Intel processors have been built to lower power consumption and improve instantaneous performance.
There is some buzz that Intel might be moving up the roadmap for forthcoming refresh processors in the desktop space to address the competition. I do not expect Intel to adjust current pricing of Core i5 or Core i7 processors in response to AMD but I do see Intel making specification and price adjustments with the next-generation processors to accommodate the evolution Ryzen has brought to the market. Expect more cores, more threads, and lower prices from Intel.
AMD has been able to deliver on its promise of a competitive consumer processor with both Ryzen 7 and Ryzen 5. Though gaming performance remains a potential pitfall for now, in any multi-threaded workload Ryzen 5 stands out from Core i5 and does so in a dominating fashion. As the consumer software space continues to adapt to multitasking and highly threaded application workloads (AI, computer vision), AMD will continue to have the advantage.
In June, we will commemorate the 10th anniversary of the release of the iPhone. In recognition of this signature date, there’s more than the average amount of speculation on what the 2017 edition of the iPhone will sport, along with hope it might revitalize the smartphone sector, which is experiencing something of a slowdown.
I have no doubt the iPhone 8, X, or whatever it might be called, will be terrific – as nearly all high-end phones are today. Samsung, with its launch of the Galaxy S8 line last week, pushed the envelope even further, particularly with respect to screen size/display, and innovative features such as DeX.
But what has historically given Apple that cachet and ability to charge a premium for its products is the “ecosystem”. When at the top of its game, Apple’s hardware, software, apps, and media all work magically and seamlessly together. However, even more than the commoditization of the smartphone category, there has been a slow and steady erosion of the vaunted ‘Apple Experience’. This mainly has to do with Apple’s software and services, where the company has lost some of its edge. iTunes, which is now 16 years old, has become bloated, more of a turn-off than a turn-on. Apple’s signature applications such as e-mail/contacts/calendar, photos, music, and TV are all OK, but they’re not great. iCloud has not completely fulfilled its mission and an increasing number of Apple users see the whole iTunes/iCloud/Music blend as sort of a hot mess.
All the while, Google has steadily gained. I’d argue devices and software in the Google/Android/Chrome world now work and sync more seamlessly than in the Apple/iOS/macOS world. Amazon has become the high beta company in tech, with keen innovations and successful products in hardware and software, while exploring new frontiers in areas such as AI. And Microsoft has staged a comeback of sorts, with successful transitions in cloud and a better reimagining of the ‘post-PC’ world, even without a smartphone product.
Apple’s recent hires and actions signal a new recognition and urgency. The company hired Shiva Rajaraman from Spotify to help reshape the music and video experience, new Apple TV executive Timothy D. Twerdahl was hired away from Amazon, and it appears the Mac Pro and iMac line will be getting more love. Reshaping the software and services experience seems to have become a priority.
So, what would a reimagined Apple experience look like? I suggest five pillars:
1. Revamp or Ditch iTunes. This product has had pile after pile of updates and refreshes but seems outdated and disjointed from Apple’s music, video, TV, and photo offerings. What, really, is the role of iTunes in a world of App Store, Apple Music, Apple TV, and iCloud? It should be renamed since today it’s mostly a store and ‘control center’ for settings and management of multiple devices (though some of that has been subsumed by iCloud). The user interface needs to be re-imagined and navigation/synchronization made simpler and more intuitive.
2. Improve iCloud. I feel like iCloud has changed from being the place where all content is shared and safely stored to something that must be managed and is needlessly complex. Many consumers still aren’t fully comfortable with ‘cloud everything’ and how content moves on and off the device. Apple isn’t doing itself any favors here. Example: when you enable ‘family sharing’ for music, you are told to “delete” your music and then “turn on iCloud”, which will ‘restore’ your content. For any consumer who has, at some point, lost a hard drive, failed to do a backup, or somehow hasn’t gotten this cloud thing right (i.e., most of us), this is a moment fraught with anxiety.
3. Determine What’s Next with Mail, Contacts, Calendar. These are signature productivity apps but Apple’s versions now seem more workmanlike. Is there something here that could revitalize the category and ‘delight’ rather than merely ‘satisfy’? Despite all the messaging alternatives, it still looks like email is here to stay.
4. Continue to Invest in the PC. Stagnant tablet sales, innovative new combo products on the Windows side, and the growing success of Chromebooks show the ‘post-PC’ world has not evolved in quite the way the late Steve Jobs imagined. The PC will still be the anchor productivity device for the foreseeable future, as shown in a recent survey by Creative Strategies, Inc. on Millennials’ device preferences. Apple has work to do in figuring out how the PC and macOS fit into its world going forward. I’ll also go out on a limb and argue this is one category where Apple should consider relinquishing its insistence on having premium products at super-premium prices. One, because in the current product line, it’s not justified. And two, because Apple doesn’t want to cede the entire under-30 generation to other platforms. It might not be such a bad idea to have a solid but more affordable Mac product to keep folks fully bought into the Apple ecosystem.
5. Regain the Service Halo. This is harder to quantify but my sense is Apple’s size, and intense pressure to grow, has created the perception the company tries to extract one’s dollar at nearly every opportunity. There was a time when you could get customer service help on the phone without having Apple Care (if you asked nicely). Or, if you brought in a cracked screen a month after you bought the latest iPhone, a ponytailed Apple Store employee would wink and hand you a new one, no questions asked. You felt like Apple had your back, in a way that felt different than other companies and justified, in part, the premium price for their products.
Ten years after the launch of the iPhone, the core of Apple is still very much there. But Silicon Valley’s other biggies – Google, Microsoft, Amazon, Facebook, and Netflix – are all now more significant forces in software, content, and services, making it more challenging for Apple to be in a class by itself as it was for a few years. Which makes me hope that Apple’s tenth anniversary iPhone is about more than just the phone.
I’m writing this column on a plane on my way home from attending Facebook’s F8 developer conference. More than any other developer conference I attend, Facebook’s is a crazy mix of near-term feature upgrades across its growing portfolio and out-there R&D work which won’t deliver real-world results for years to come. It also highlighted something of a chasm in Facebook’s innovation strategy, with its near-term focus on cloning competitors’ apps and features on the one hand, and mind-blowing research on the other. What Facebook needs, more than anything else right now, is to take the kind of thinking that’s driving its ten-year roadmap and put it on a shorter-term timeframe.
An Event of Two Halves
English soccer commentators are fond of referring to the sport as a game of two halves, meaning the two periods in the game can turn out completely differently and what happens in the first may be a poor predictor of what happens in the second. Facebook’s F8 was very much an event of two halves, with its two keynotes very different in their focus and tone.
Day 1 – Innovation by Proxy
Tuesday’s kickoff was dominated by here-and-now announcements about products Facebook and its developers are working on today. The first part was about all the ways Facebook has made cameras central to its apps in recent months and how it’s now going to evolve those cameras with an AR platform called Camera Effects. It went on to cover social VR and the Facebook Spaces app that’s launching for Oculus. It then ended with a discussion of how its Messenger Platform is evolving from last year’s somewhat misguided launch of bots.
All of this was about products consumers can use and developers can build for either today or in the very near future and much of it felt like stuff we’ve seen before, with minor tweaks. The AR platform is very reminiscent of Snapchat’s filters products, although opening it to developers rather than merely advertisers is a new twist. Facebook Spaces is an evolution of what was shown on stage at last year’s event and mimics other social VR products we’ve seen from smaller companies in the past. And Messenger’s second attempt at a platform feels a lot like some of the Asian messaging apps that have long done well in this space and, as such, is a lot less original.
It was easy, therefore, to come away from the day one keynote feeling Facebook has forgotten how to innovate, how to create truly new experiences and ideas and, ultimately, how to move its products forward without relying on features invented elsewhere. Granted, none of what was announced was bad. I think the AR features will be very popular if they live up to the concepts Facebook demoed on stage, the new version of the Messenger Platform feels much more focused and realistic in its aspirations, and Spaces is a decent proof of concept even if not yet a compelling social VR experience. Indeed, because so many of the ideas presented have been successful elsewhere, it’s easy to imagine them being that much more so with Facebook’s massive audience and network.
Day 2 – Mind-Blowing Ideas and Ambition
By contrast, then, the second day’s keynote was full of long-term thinking, massive ambition, and out-there ideas. I think the most frequent set of words mentioned by the various presenters was probably “years away” or words to that effect. Zuckerberg touched on the company’s ten-year roadmap – unveiled last year – during his slot on day one, and much of the day two stuff belongs late in the second half of that roadmap. Some of it may never even see the light of day.
But what characterized day two’s keynote announcements and discussions was their sheer difference from what’s been done before. While other big tech companies focus on evolving current user interfaces with combinations of touch, voice, and mixed reality, Facebook is dabbling in brain-computer interfaces, communication via neurons and skin sensors, rethinking communication networks, and more. If day one was all rather familiar, day two was familiar only in the sense we’ve seen some of this stuff in science fiction movies.
The creativity and imagination on display on the second day made lots of think pieces published Tuesday night and Wednesday morning about Facebook’s lack of innovation seem silly. Headlines later in the day on Wednesday gaped at Facebook’s ambitions to connect to your brain and talk through your skin. The contrast between the reactions to day one and day two is stark.
Bridging the Chasm
What we have, then, is a chasm between Facebook’s seeming inability to be imaginative in the short term and an abundance of creativity in its long-term thinking. What happens between the audaciousness of the company’s ten-year thinking and the reality of what gets released tomorrow that makes the here and now so much less interesting? Why does Facebook seem unable to innovate in such impressive ways in the short term when it’s clearly capable of that kind of imagination when freed from time constraints?
I suspect two things are going on. First, Facebook’s efforts here and now are constrained not just by time but by its current strategic and tactical priorities. Yes, it might like to do lots of things but, in the present, it’s competing with Snapchat, Twitter, Google, and others for users’ time and advertisers’ dollars and that drives certain imperatives, such as trying to win share of time back from interlopers, maximizing ad inventory, driving new revenue streams, and so on. Those prosaic short-term objectives drive tactical actions like cloning Snapchat features, pushing ads into new places across Facebook’s family of apps, and trying to tie together disparate parts of the business like social networking and VR.
But I don’t think that’s the whole problem. The other half of the problem is Facebook is now operating at such a massive scale and has had so many bad experiences in the past with big changes, it’s actually a little scared to innovate in big ways. When you have two billion users across all the countries in the world and dozens of languages, any small change is that much harder. That hasn’t stopped Facebook from shoehorning new features into the interface and I wrote recently about how Facebook has pushed some things too hard in ways that were user hostile, but those changes have again mostly been the unimaginative cloning ones rather than true innovations. Facebook seems to have lost some of its daring in moving its products forward, which is just the kind of “Day 2” thinking Jeff Bezos said he wanted to avoid in his recent Amazon shareholder letter.
What Facebook needs, then, is to allow some of the creativity and ambition that infuses its long-term R&D efforts to bleed back into its shorter-term product roadmap. To give its employees freedom to innovate in more dramatic ways and serve, not just today’s tactical priorities, but longer-term strategic ones too. And to start really inventing things here and now in real products and not just R&D projects with ten-year time horizons. Moonshots are great for burnishing a company’s innovation credentials but if that innovation is absent from the short-term product roadmap, it starts to look like the moonshot factory is not just in its own building but almost a separate entity entirely. That was the impression I was left with at the end of this year’s F8.
With the upcoming availability of the Samsung Galaxy S8, we were curious what consumers thought of the device and how interested they are in purchasing one. We teamed up with SurveyMonkey Audience to do some research on US consumers to better understand their level of interest in the phone and its newest features. We also explored whether the Galaxy Note 7 battery issues were a factor in consumer interest, and we threw in some questions around voice assistants for good measure. In all, we surveyed 923 consumers. These are the key findings.
Note 7 Battery Impact
In our annual fall smartphone study, we explored the issues surrounding the Note 7 and whether or not the media coverage and awareness of the battery problems led to a large amount of negative sentiment. In that study, fielded in the fall of 2016, we learned most consumers (62% to be exact) did not see the Note 7 battery fires as a deterrent to purchasing a Samsung smartphone in the future. The number was even higher among existing Samsung smartphone owners, where 73% said the Note 7 issues would not deter them from purchasing a Samsung smartphone in the future. Knowing Samsung customers are a loyal bunch, we feel both those percentages are good news for Samsung.
In this most recent survey, we found similar results. This study revealed 53% of consumers said the Note 7 issue has not impacted their interest in the Galaxy S8, while 17.7% said they were not sure or undecided. Only 28% said definitively the Note 7 battery problems negatively impacted their interest in the Galaxy S8. Again, knowing Samsung owners are a loyal bunch and are the most likely candidates to buy an S8, only 16% of existing Samsung smartphone owners said the Note 7 problems are impacting their interest in the new device.
Overall, I’m confident the data we have from the fall, and this most recent data, suggests the Note 7 fires were never a big roadblock for consumers to begin with and even less so now. This should alleviate any concern over the Note 7 fallout impacting the sales of any Samsung smartphones released this year.
Interest in the Galaxy S8
Overall, interest in the new S8 seems low. However, I expect Samsung to begin its marketing blitz and carriers to start heavily advertising the S8 in the coming weeks and months, which will help with interest over time. The more important breakdown to this question is interest in the S8 among existing smartphone owners and those looking to upgrade in the next three to six months.
Interest is higher among existing Samsung smartphone owners than any other group of consumers. More importantly, drilling down on folks who expect to upgrade their smartphone in the next 3-6 months, 36% of upgraders in that time frame are interested in the Galaxy S8 and, interestingly, 21.7% stated they were extremely interested. These are consumers looking to upgrade sooner rather than later, not interested in waiting until the fall. Again, the fact that 36% of consumers looking to upgrade are interested in the new Galaxy S8 bodes well for Samsung.
Looking deeper at consumers who indicated they have interest in the S8, the features that stood out most were the Infinity Display/larger screen (27%) and the eight-megapixel front-facing camera (23%). Bixby, the most hyped feature of the S8, scored relatively low, with only 13% of interested consumers saying it was the feature that interested them the most. That leads into an interesting finding we have on voice assistants.
Voice Assistants are not yet a Purchase Driver
While the usage of voice assistants like Siri, OK Google, Alexa, and Cortana has certainly been rising, they still have a long way to go to convince the market of their greater value. It may not be a surprise but voice assistants are not the main feature or reason anyone is buying a smartphone. The earlier points I made confirm purchase drivers are still mainly the camera and the screen. We wanted to get a sense of which voice assistant US consumers feel is the best, so we included a question in our study asking respondents which voice assistant they felt was the best. Below are the results.
First, Siri has the lead which speaks to a greater portion of US consumers having tried Siri compared to an alternative in order to form an opinion. Just looking at iPhone owners, the sentiment that Siri is the best jumps to 46.6%. Among Android owners, 36% said Google’s Assistant is the best. Interestingly, 11.9% of Android owners said they thought Siri was the best voice assistant while only 6.3% of iPhone owners said Google’s voice assistant was the best. But here is where I felt things got interesting.
This, like all the questions in our study, was single choice: we asked consumers to pick the one answer that best fit their opinion. We gave them a simple “none of the above” option and the chance to say they think “voice assistants are useless”. Surprisingly, 29.4% of respondents deliberately chose the option that voice assistants are useless. Consumers are a tough crowd, with a lot of convincing to do.
Lastly, we asked consumers what they thought of Bixby and whether they expected the new Samsung smart assistant to be better, worse, or the same. Interestingly, 13.2% of respondents showed some confidence in Samsung and Bixby, saying they think it will be better than Siri, Google Assistant, and Alexa. 38% said they felt it will be the same, while most consumers (43%) said they don’t use any of the voice assistants, so they have no opinion.
As we dug into this study, we uncovered more insights than I have time to share but the key here is Samsung still remains a solid brand despite the Note 7 issues. Consumers are still showing interest in Samsung’s latest products and the new innovations they are bringing to market. While voice assistants still have a lot of convincing to do in order to get consumers to trust them and use them more, there is enough potential here for Samsung to keep investing in Bixby since voice interfaces and voice assistants will become more valuable and desired features in the coming year.
I’ll have more to share on voice assistants and the voice UI soon as we are about to field our Voice Assistants 2.0 research study.
I have had the chance to work on speech and voice projects since the early 1990s, when I first interacted with Kai-Fu Lee at Apple, where he had been brought in to research voice and speech recognition for what would have been used in Apple’s Newton. Not long after it became clear the Newton did not have any real legs, Microsoft lured him away from Apple to head up Microsoft’s first serious work on voice and speech recognition.
In the 25 years or so since that time, voice and speech recognition has evolved a great deal and is now used in all types of applications. With the addition of Artificial Intelligence applied to voice, Google, Apple, Microsoft, Amazon, and others, have now been pushing their voice solutions as a platform and new user interface that helps them interact with customers and provide new types of apps and services.
Recently, Amazon opened up the Alexa voice interface to hardware and software vendors to add a voice UI with direct links to Amazon’s apps and services. Apple’s Siri, Google’s Now, and Microsoft’s Cortana are also used as voice UIs that work with third-party products and are tied back to each company’s services or dedicated applications. In this sense, voice has become an important new platform for companies to innovate on and AI in voice is a viable platform to use when building new apps and services.
Although AI and voice as a platform will continue to be important, I sense a real shift: AR will become the most significant new platform for innovation relatively soon.
Pokémon Go introduced AR to a broad consumer audience and the tech world took note. Once companies started to put their strategic thinking caps on, they quickly realized the idea of integrating virtual images, video, and information on top of real-world settings has a lot of potential.
To date, most AR is in games like Pokémon Go and apps like Snapchat. But the idea of AR becoming an actual platform within an OS, one that could drive a host of innovative apps and services, is just around the corner. AR will most likely develop on smartphones first and eventually extend to some type of glasses or goggles as an extension of the smartphone’s user interface. But, for the next few years, AR will be introduced and integrated into the smartphone experience, making it possible to blend virtual worlds into the real world.
Google already has an Android platform for AR called Tango and Lenovo has brought the first Tango phone to market. However, the Tango platform solution is half-baked and I am not clear how serious Google is about AR, given their first generation of AR smartphones on the market today. They still seem to be pushing harder into VR with Daydream and Tango seems to be more of an experiment. But that might change later this year if Apple comes out with their AR platform, something a lot of people believe Apple has up its sleeve with the next-gen iPhone. We should get an AR update from Google at their I/O developer conference next month.
Given the way Apple attacks markets with new software and uses it to sell new hardware, it makes me think Apple could actually be one of the companies that could bring AR to the mainstream market.
Here is the scenario I believe could evolve for Apple to make AR a household name.
First, I would expect Apple to add specific new hardware features to a next-generation iPhone. These could include extra cameras, perhaps incorporating a 360-degree capture feature, new types of proximity sensors, a more sensitive touch screen for toggling between virtual and real worlds, and perhaps new audio features, such as some type of surround sound that could make a virtual scene come alive.
Second, they would create a dedicated AR software layer that sits on top of iOS that serves as an extended platform tied specifically to any new hardware-related features. That would be followed by a special SDK for developers who could create new and innovative apps for AR on a new iPhone.
If Apple does add AR to new iPhones, I suspect they would pre-seed five or six key developers with the AR SDK during the summer so, when they launch the new iPhone in September, they can show off these apps along with the homegrown ones they would create themselves. This is pretty much the roadmap they follow when they introduce any major new device or significant new features for the iPad or iPhone, and it is very likely Apple would follow this plan should they use the new iPhone to introduce AR this year.
Given the secrecy of Apple, I doubt we will hear anything about AR at Apple’s Worldwide Developers Conference in San Jose in early June.
But what is most important, should Apple enter the AR market, is that they would provide a powerful new AR platform developers can innovate around, one that serves as a vehicle to bring AR to the mainstream. This would throw down a major challenge to Google, Samsung, Microsoft, and Amazon to create their own AR platforms, and it would become the next major platform gold rush, driving new tech growth over the next three to four years.
The other company that could bring AR to the masses quickly is Facebook. At its F8 conference this week, Facebook showed off a new camera that will be at the heart of an AR platform that can be used to add virtual objects to its app.
“Facebook is going to use the camera part of the Facebook app to build a new platform for augmented reality by implementing camera effects. Standard effects already used on other apps such as face masks, style transfers etc. will be available from the start. Users will be able to create their own since it will be an open platform. The new AR platform will be launched as open Beta today.
Facebook hopes to take further advantage of developing technologies such as Simultaneous Localisation and Mapping (SLAM), which allows the camera to plot out where an object is in the real world so AR content can be placed accurately in the ‘real world’. Additionally, Facebook is working on technology that allows the conversion of 2D still images into 3D representations that can be modified with AR. The object recognition that will be introduced to the app means the camera can ‘recognise’ the size, depth, and location of the object so it can be manipulated within the AR space.”
What is important to understand about both Facebook and Apple is the commonality: the development of an AR platform, an SDK, and the role software developers will play in creating innovative AR apps. Although voice as a platform will continue to grow and be important, my sense is AR is the next major platform we will see the most innovation from in the near future.
The other day, I was reading this fascinating and scary story of a woman in Kenya who thought she was carrying HIV because an app told her so. The app was a hoax but she could not have known it, as she had downloaded the app over Bluetooth from a friend and never got to read the reviews that warned about the scam. The BBC story was centered on a report funded and commissioned by the Bill & Melinda Gates Foundation and developed by the Mozilla Foundation in close collaboration with Digital Divide Data.
The report offers a very interesting snapshot of what technology and smartphones mean in a country like Kenya. The good is how smartphones can help Kenyans, especially low-income earners, not feel left out of society. The bad is online gambling becoming a larger issue. The ugly is that consumers in emerging markets discover, especially when new to smartphones, that their experience is shaped by trends set by large international organizations that control the ecosystem.
This last point made me think about how different the smartphone market is compared to the feature phone market and not just because the hardware is different.
Feature Phones Were More Customized than Smartphones
Going back to 2009/2010, emerging markets were the future of mobile, as the overall mobile phone market was made up of feature phones and manufacturers had a more focused portfolio for emerging markets. The race to control emerging markets was very much open, as Nokia fiercely defended its position in markets such as Africa, Latin America, and Asia.
What was unique about Nokia was that, even back then, their focus was on services as well as hardware. While lowering the price of feature phones, Nokia focused on lowering the requirements for data consumption, making some of their services, such as music and maps, available offline. Nokia also implemented a financing service for small businesses as well as a money-transfer service for users called Nokia Money.
These were the years when most of the hardware was not yet touch-based and was customized with keyboards that reflected the different languages. Applications were also pre-loaded to reflect local cultural preferences. Aside from possibly Latin America, which endured for years the hand-me-downs from the US, consumers in emerging markets were given devices that mostly reflected their needs.
These were also the years before local manufacturers, empowered by Android, started to make a dent in the market share of tier-one players and fragment the market in such a way that replicating what Nokia had became impossible due to the lack of economies of scale.
With the shift to smartphones and the pressure on margins, many vendors are prioritizing high growth markets such as China and India while trying to serve the rest of the emerging markets by leveraging what they have in the portfolio rather than customizing to the country’s needs.
Software: One Size Fits All
With the advent of smartphones, touch, and the shift to software, customization was no longer needed to sell in a specific country. One phone model shipped across more markets than ever before, as most settings were delivered via software. Apps were no longer pre-installed but could be accessed through app stores that offered more international content than local. While software might overcome language barriers, it has less success overcoming cultural differences. Apps suitable in America or Europe might not be suitable in the Middle East or Africa where, for instance, much of the female population is highly dependent on the men in their lives to grant them access to technology.
While the size of the emerging market population is still very appealing to hardware vendors, it is not always so compelling to developers and service providers. Tier-one developers and service providers might lack the cultural knowledge to customize and they might see the connectivity challenges and low-income barrier as issues that will always dampen their opportunity, making the investment less than worthwhile.
Many emerging markets are also mobile-only rather than mobile-first markets, making the relationship consumers have with technology quite unique. Many emerging market smartphone users have no basis for comparison for what the digital world can deliver, which makes them vulnerable to exploitation. The case reported by the BBC is a very good example. Esther did not know her phone could not possibly diagnose whether she had HIV by reading her fingerprint on the screen. For all she knew, technology was that good.
Are Emerging Markets a Duty or an Opportunity?
Tech giants cannot ignore emerging markets in their path to world domination. Google tried through the Android One program to lower the price of smartphones in emerging markets so Android could continue to grow. However, the strength of local brands, combined with the little differentiation the program gave the devices, led to a weak value proposition for partners and customers alike. Plus, focusing on an online channel in markets that mainly sell via small mom-and-pop shops did not really help. Google also focused on improving connectivity by flying internet balloons, an endeavor that is taking much longer than first anticipated to become a reality.
A couple of years ago, Facebook started tweaking its user experience for emerging markets so the content the user was looking at was prioritized and loaded first, while side stories were not loaded at all. It also launched an accelerator program to come up with ideas that make advertising rewards relevant to local users.
Finally, Facebook also focused on connectivity first with Free Basics. Users do not pay for using Facebook and other apps but some governments, like India, found it too limited. More recently, Facebook launched Express Wi-Fi in India, as a renewed attempt to offer connectivity at minimum cost by the deployment of public Wi-Fi.
The hurdles both companies have faced, however, underline the challenges of looking at emerging markets from the comfort of Silicon Valley. International telcos with interests in emerging markets such as Telenor, Megafon, and Vimpelcom (now Veon) are also trying to get a slice of the pie and they might just have the advantage of having a ton of data on the very consumers they want to serve.
The Win-Win when Tech Improves Life in Emerging Markets
Rather than focusing on lowering costs or lowering service requirements so consumers in emerging markets can afford to buy devices and subscribe to services, tech companies should focus on improving life in emerging markets. Technology should be used to improve education, eradicate disease, improve housing and transportation, and ultimately create more wealth, empowering people to become potential customers for Amazon, Google, or Facebook. While the end goal, generating market growth, might be the same, I would very much argue this means to an end would be far more rewarding for emerging markets.
As your mother or other caregiver likely told you as a child, just because you can do something doesn’t mean you necessarily should.
So, given last week’s news that Apple has obtained a permit to test drive three autonomous cars on public streets and highways in California, the existential question that now faces the company’s Project Titan car effort is, should they build it?
Of course, the answer is very dependent on what “it” turns out to be. There’s been rampant speculation on what Apple’s automotive aspirations actually are, with several commentaries suggesting that those plans have morphed quite a bit over the last few years, and are now very different (and perhaps more modest) than they originally were.
While some Apple fans are still holding out hope for a fully-designed Apple car, complete with unique exterior and interior physical design, a (likely) electric drivetrain, and a complete suite of innovative software-driven capabilities—everything from autonomous and assisted driving features, the in-vehicle infotainment (IVI) system, and more—other observers are a bit less enthusiastic. In fact, the more pragmatic view of the company creating autonomous driving software for existing cars—especially given the news on their public test driving effort—has been getting much more attention recently.
Regardless of what the specific elements of the automotive project turn out to be, there remains the philosophical question of whether or not this is a good thing for Apple to do. On the one hand, there are quite a few major tech players who are trying their hands at autonomous driving and connected car-related developments. In fact, many industry participants and observers see it as a critical frontier in the overall development and evolution of the tech industry. From that perspective, it certainly makes sense for Apple to, at the very least, explore what’s possible, and to make sure that some of its key competitors can’t leapfrog them in important new consumer technologies.
In addition, this could be an important new business opportunity for the company, particularly critical now that many of its core products for the last decade have either started to slow or are on the cusp of hitting peak shipment levels. Bottom line, Apple could really use a completely different kind of hardware hit.
The prospect is particularly alluring because some research conducted by TECHnalysis Research last fall shows that there is actually some surprisingly large pent-up demand (in theory at least) for an Apple-branded car. In fact, when asked about the theoretical possibility of buying just such an automobile, 12% of the 1,000-person sample said they would “definitely” buy an Apple car. (Note that 11% said they would definitely buy a Google-branded car.) Obviously, until such a beast becomes a reality, this is a completely speculative exercise, but remember that Tesla currently has a tiny fraction of one percent of car sales in the US.
Look at the possibility of an Apple car from another perspective, however, and a number of serious questions quickly come to mind. First is the fact that it’s really hard to build and sell a complete car if you’re not in the auto industry. From component and supplier relationships, to dealer networks, through government-regulated safety requirements, completely different manufacturing processes, and significantly different business and profitability models, the car business is not an easy one to enter successfully at a reasonable scale. Sure, there’s the possibility of finding the auto equivalent of an ODM (Original Design Manufacturer) to help with many of these steps, but there’s no Foxconn equivalent for cars in terms of volume capacity. At best, production levels would have to be very modest for an ODM-built Apple car, which doesn’t seem like an Apple thing to do.
Speaking of which, the very public nature of the auto business and the need to reveal product plans and subject products to testing well in advance of their release is also very counter to typical Apple philosophy. Similarly, while creating software solutions for existing car makers is technically intriguing, the idea of Apple merely supplying a component on products that are branded by someone else seems incredibly unlikely. Plus, most car vendors are eager to maintain their brand throughout the in-car experience, and giving up the key software interfaces to a “supplier” isn’t attractive to them either.
So, then, if it doesn’t make sense or seem feasible to offer just a portion of an automotive experience and if doing a complete branded car seems out of reach, what other options are left? (And let’s be honest—in an ideal situation, autonomous driving capabilities should be completely invisible to the driver, so what’s the brand value for offering that?)
Theoretically, Apple could come up with some type of co-branded partnership arrangement with a willing major car maker, but again, does that seem like something Steve would do?
There’s no doubt Apple has the technical ability and financial wherewithal to pull off an Apple car if they really wanted to, but the practical challenges it faces suggest it’s probably not their best option. Only time will tell.
Late last week, Bloomberg published a selective set of financial numbers from Uber, which had decided to pre-empt inevitable leaks by speaking directly to reporters. The numbers were partial and subject to a number of caveats but still paint an interesting financial picture of Uber at this stage in its history. It’s important both to understand what’s really being reported here and what it means about Uber and the future of both the company and the broader ride-sharing industry.
One of the key narratives I regularly encounter surrounds the Millennial demographic and Apple. In our most recent millennial study, we included some sentiment questions about Apple we think give us some insight into the current mindset of millennials around Apple hardware. There are many data points collected by us and many other researchers to suggest Apple hardware is still highly desirable to this demographic, and repurchase intent for iPhones in particular remains high among millennials. I have no data to suggest this dynamic will change, and there was certainly a time when this group held Apple in the highest regard from an innovation standpoint. One of our questions was intended to see what 18-24-year-olds specifically felt about Apple when it comes to innovation.
When I read that Walt Mossberg would be retiring, it reminded me of how much has changed in the way consumer technology products have been reviewed over the years. I write this as one who has been on both sides: developing products that ultimately were reviewed and writing my own column for twelve years that reviewed products for the now defunct San Diego Transcript.
In the late eighties, as technology products began to appeal to non-technical consumers, the only place where they could go for buying advice was the numerous technology magazines. The magazines did a great job of evaluating the technical details of computers, printers, and other complex devices, with some periodicals even creating their own test labs.
But, for the most part, the reviews were written by those who valued a product by how many features it contained. The reviewers appreciated technological wizardry above all else. So, the articles were filled with graphs and tables with checkmarks comparing the plethora of features each product had, usually awarding the editor’s choice to the product with the most checkmarks. It was assumed the customers would find the products as easy to use as they did.
For the non-technical reader, getting through each article could be a challenge with the new terminology and abbreviations that were used by the industry. I remember trying to keep straight the different units of memory, data speed, and processor speeds.
As a product designer, it was frustrating to see a product reviewed and rated based on the number of features it had, even when many of those features would never be used. I saw how the magazines had influenced the design of new products. Design engineers and marketing people would tend to pile on feature after feature without much thought to usability. That made products take longer to design, harder to use, and less reliable.
In 1991, Walt Mossberg created a much different approach to product reviews that not only made it easier to assess a new product but also changed how products would be designed.
He would look at products, not based on the number of features, but on their practicality and usability. He was one of the first to understand these products would find a much larger audience among those who might not be technically inclined, and that they needed to be assessed differently. He took a position as an advocate for the user and found a receptive readership by reminding people not to blame themselves for a product being hard to use, because they were not alone.
When I was writing my book, “From Concept to Consumer”, I asked Walt to describe the attributes of what he considered to be an excellent product. He said, “It is a product so useful in function and clear in its operation that its user, within days or weeks, wonders how she ever got along without it. This is not the same as having long lists of features, specs, speeds and feeds. In fact, my rule is that, if a product claims to have, say, 100 features, but an average person can only locate and use 11 of them in the first hour, then it has 11 features.”
That was the basis for his judging products. Because of his ability to understand products from the position of the consumer, his observations were much more relevant and useful. From his post at the Wall Street Journal, his influence was widely felt. Companies knew his reviews could make or break a product or even a company.
Walt was also instrumental in advocating for the consumer beyond just products. He saw how cellular providers were restricting product advancements and compared them to Soviet ministries.
Walt, along with David Pogue of the New York Times, the late Steve Wildstrom of BusinessWeek (and a co-founder of Tech.pinions), and Ed Baig of USA Today, were among the first to review major new products. The four were courted by big name companies such as Apple, Samsung, Sony, and others so that their reviews would appear at nearly the same time. With their columns published in each Thursday’s edition of their respective publications, the marketing people, engineers, and company executives would frantically wait for the first edition to see how their product fared, much like the cast of a Broadway show reads their reviews the morning after opening night.
On a personal note, I always found Walt, Steve, and Ed to be thoughtful, insightful, and fair-minded. While one might disagree with their product assessments, they were always respectful and considerate. If they encountered a problem with a product, they would go back to the company for comment but reported their complete experiences without omissions. They took their job, and the impact of what they wrote, with a great sense of responsibility. They did not waffle; they gave their opinions and backed them up with facts. (David Pogue still does reviews, though with a more entertainment-focused approach.)
In recent years, as gadget blogs have replaced newspapers as our source of new product news, the number of reviews has multiplied, although the quality seems to have fallen. Many are done by those with limited product experience and often reflect their own biases without thinking from the position of the consumer. I’m often appalled at how inaccurate they are about products and technology I know well.
There are good sites with in-depth reviews, such as Digital Photography Review, PC Week, Tom’s Hardware, The Gadgeteer, iLounge, The Verge, the Wirecutter (owned by the New York Times), and many others. Many of these sites now derive revenue from their reviews by linking the products to Amazon to receive referral fees.
So, while we have more sources, it will be hard to replace the wisdom of a few good writers who avoid parroting press releases and take a very thoughtful approach to assessing new products, based on their years of experience.
In this week’s Tech.pinions podcast, Carolina Milanesi, Jan Dawson, and Bob O’Donnell discuss Huawei’s recent analyst summit event, difficulties facing LeEco, and the overall opportunities and challenges for Chinese vendors to break into the US and worldwide markets in a meaningful way.
Apple’s recent acknowledgment it is planning a design reset on its Mac Pro desktop was a much needed shot in the arm for longtime Apple loyalists. Make no mistake, the company’s decision to go back to the drawing board on its professional desktop has less to do with the bottom line and more to do with pleasing its base, full stop. That said, there is still real money to be made in the desktop market, especially at the high end. Moreover, there are some interesting personal computing developments happening in this space. Which leads me to wonder: Is Apple’s yet-to-be-revealed Mac Pro rethink going to be ambitious enough?
Worldwide Desktop Market Numbers
It’s widely understood the PC market has been in long-term decline. The combined desktop plus notebook PC market peaked in 2011, when the industry shipped 331 million units worldwide. However, desktop shipments started to decline years earlier, peaking in 2007 at 161.1 million units (versus 105.3 million notebooks). By 2016, the total PC market had slipped to 246.6 million units, with desktops at 100.7 million. While those numbers may seem bleak, the reality is the world has come to the realization it still needs PCs and the market is stabilizing (as noted in IDC’s preliminary estimates for the first quarter of 2017). IDC has forecast very modest growth for notebooks in 2017 and beyond, while desktop contraction will slow to low single digits over the course of the next two years.
Declining shipments don’t tell the whole story, though. As the market has contracted, the biggest players have moved aggressively to grab more share. Lenovo, Dell, and HP have been particularly vocal about this and, for many quarters running, Apple’s Tim Cook announced during the company’s quarterly earnings calls that Apple’s Mac shipments were growing while the industry declined overall. Since the market’s peak in 2011, market share consolidation has been swift. In that year, the top five vendors accounted for 49% of total shipments. By 2016, the top five—Lenovo, HP, Dell, ASUS, and Apple—accounted for 72.4% of total shipments. The shift has been less drastic in desktops, where smaller players can still compete, to some degree. Still, in 2011 the top five vendors accounted for 46.3% of shipments and, by 2016, the top five—Lenovo, HP Inc, Dell, Acer, and Apple—made up 59.1% of shipments.
In 2016, Apple shipped about 3.3 million desktops worldwide, for a 3.3% share and fifth place in terms of total shipments (Lenovo was number one with 19.4 million units). However, as is often the case, Apple’s average selling price was notably higher than anyone else in the market. Apple’s ASP was $1,384, versus $505 for Lenovo, $554 for HP Inc, $557 for Dell, and $440 for Acer. All told, Apple’s desktop business generated revenues of $4.5 billion dollars. That’s roughly 9% of the entire desktop market’s revenues on 3.3% of shipments. Small versus iPhone revenues but a serious business nonetheless.
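As a quick sanity check, the revenue figures in this paragraph can be reproduced from the shipment and ASP numbers cited. This is a minimal back-of-the-envelope sketch; treating units times ASP as total revenue is a simplification, so the result will not match reported revenue to the dollar:

```python
# Back-of-the-envelope check of the 2016 desktop figures cited above.
# All inputs are the shipment and ASP numbers quoted in the text.
apple_units_m = 3.3          # Apple desktop shipments, millions of units
apple_asp = 1384             # Apple average selling price, USD
market_units_m = 100.7       # total worldwide desktop shipments, millions

# Revenue in billions: units (millions) * ASP (USD) / 1000
apple_revenue_b = apple_units_m * apple_asp / 1000

# Apple's unit share of the total desktop market, in percent
unit_share = apple_units_m / market_units_m * 100

# If Apple's revenue is roughly 9% of the total, the implied market size is:
implied_market_b = apple_revenue_b / 0.09

print(f"Apple desktop revenue: ${apple_revenue_b:.1f}B on {unit_share:.1f}% of units")
print(f"Implied total desktop market revenue: ${implied_market_b:.0f}B")
```

Run as-is, this confirms the roughly $4.5 billion in revenue on a 3.3% unit share quoted above; the small gap between the computed ~$4.6B and the cited ~$4.5B simply reflects rounded inputs.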
Forward-Looking Designs
So we’ve established Apple is embarking on a re-do of the Mac Pro primarily to appease its most loyal customers, not for the money. But the money is pretty good, too. I’d argue there is at least one more reason Apple needs to focus its attention on the desktop. It’s an area where the company risks falling behind: design cachet.
Back in 2014, I was at the event when Apple launched the first Retina 5K iMac. It was gorgeous, expensive, and total overkill for my needs. But I wanted it just the same. At a time when the industry was shipping 5K monitors for $2,500 (despite a dearth of PCs fully capable of supporting them), Apple jumped the line by integrating the necessary technology to power that many pixels right in the box. The intention was never to sell tens of millions of them but to make a clear statement about Apple’s ability to create a desktop people aspired to own. The starting price was $2,499, shipping into a market where the average selling price of a tower desktop was $446 and the average price of an all-in-one PC was $890.
One of the tidbits of information to come out of Apple’s press meeting on the Mac Pro was the company isn’t looking at touch technology for the Mac because it says its clients aren’t asking for it. But I have to wonder how many of Apple’s pro customers were asking for a 5K iMac back in 2014 (or how many were asking for a TouchBar in 2016). Just as important, I would argue that, by refusing to explore touch on the desktop, Apple is automatically ceding the space to competitors such as Dell, which recently announced its long-gestating Smart Desk, and Microsoft, which is now shipping its Surface Studio (starting price: $2,999). Neither of these products will ship in large volumes but I suspect a small but vocal group of professionals will find them indispensable, while the rest of us will find them aspirational. And neither supports touch as a mere technical gimmick; both use it to enable new usage models. I get that nobody wants to reach up and tap a vertical desktop screen with their finger. But there are plenty of interesting things you can do with a finger, a pen, or a dial on a screen that shifts to the horizontal plane.
Apple obviously takes design very seriously. So seriously, in fact, it refuses to rush the new design of its next Mac Pro. The company doesn’t expect to ship the new Mac Pro this year (although it will ship updated versions of the existing Mac Pro as well as more pro-focused iMacs). Such attention to detail is part of what draws so many Mac fans to the brand but it also makes it very hard for the company to respond quickly to shifts in the market. If Apple really does expect one or more of its Mac products to be vital pro-level tools well into the future, the company must consider what future pro desktop owners will need before they know they need it. As such, I can’t help but think that they shouldn’t disqualify touch from the desktop equation entirely.
There is a battle brewing over the next platform: specifically, the next platform that attracts developers. Right now, the race is on to create machine learning platforms that attract customers to the latest generation of tools and commit them to a cloud platform with machine learning advantages over the other cloud platforms. In this still-open race, the leaders are Amazon, Microsoft, and Google.
It’s that time of year again when the big developer events start. With Twitter having ended its developer events, we’re left with Facebook, Microsoft, Google, and Apple as the big four consumer-centric developer conferences. Next week, the season kicks off with Facebook’s F8. Facebook’s past events have been a fascinating mix of short-term developer-centric announcements and bigger pronouncements on Facebook’s future roadmap, so they’re generally pretty interesting affairs. Making predictions about these things is always a risky business so I’m going to share what I’m looking for next week, rather than making firm predictions about what we’ll actually see.
An Update on Bot Strategy
One of the big themes out of both Microsoft’s and Facebook’s developer events last year was bots, which went through a hype cycle right around the time of their events. A year later, we’ve seen interesting developments on both sides, with Microsoft mostly focusing on enabling third parties but also dabbling in several of its own AI chatbots, including in emerging markets like India and China. By contrast, Facebook has largely had to walk back its bot strategy from last year, conceding (in a series of moves) that its original conception was off the mark.
In September of last year, Facebook adjusted its bot strategy for the first time, with David Marcus announcing Facebook was investing in new capabilities and doing more to help developers create successful experiences with bots. Part of that announcement was an acknowledgement that the original vision for bots as broad-based replacements for apps was misguided and it needed to be narrower. Since then, we’ve seen the M assistant (which Facebook first envisaged as a sort of assistant bot within Messenger) become something narrower: a helpful tool that pipes up within conversations, becoming less like a bot in the process. In March, we saw Facebook do more to focus on menus and other user interface elements, again making their bots less bot-like and more app-like (and actually more like the messaging-based apps so popular in Asia).
I would expect the revised bot strategy to be a theme at F8, with announcements around group chatbots so developers can integrate their bots into conversations between real people, along the same lines as M. But we also need to see clarity on the role Facebook now sees bots playing in the broader landscape of the web, Facebook pages, interaction with humans through Messenger, and dedicated apps. We need to see more realism about what bots can really be good at and which developers should be focusing on them. That focus was missing in last year’s more grandiose vision.
More Creative Ad Products
As a user, I don’t necessarily want to see more ads in more places on Facebook but as it’s facing saturating ad load in the core product, Facebook needs to be more creative about where ads can go next without destroying the user experience. One area I expect we’ll see some announcements is its Camera Effects feature, which is fairly limited for now but could easily be opened up to developers along the lines of Snapchat’s Sponsored Filters. That should be interesting and will provide new ad opportunities but I’d love to see more creative ideas from Facebook which don’t feel like they’re just being cloned from Snapchat.
Social and VR Coming Together
When Facebook bought Oculus, it acquired a largely gaming-focused VR outfit and it has continued down that path with the products released since. But the vision it outlined at the time saw VR as not just a gaming platform but the next user interface for much more. At last year’s F8, we saw some proofs of concept around social experiences in VR, through which people could virtually visit a faraway place with a friend in virtual reality. That starts to get at a vision for VR which is more aligned with Facebook’s core value proposition of connecting people. I’d love to see them iterate on that vision and start to productize some social VR experiences this year.
Monetizing Messenger and WhatsApp
Again, given the ad load issue in the core Facebook product, the company needs to be leaning more heavily on its other apps to drive growth. While Instagram has already become a massive revenue generator (though the company continues to keep the numbers to itself), the same can’t be said for Messenger and WhatsApp. It’s increasingly clear that ads are the business model Facebook will pursue to monetize Messenger (and we’ve seen the first examples of that) but it’s less clear how it will monetize WhatsApp, whose founder has always pooh-poohed ads as a business model.
The vision Facebook outlined for monetizing these messaging platforms last year was around businesses connecting with people, which obviously aligns, to some extent, with the bot strategy. So far, the ad products have been a little bland and verging on the invasive. Messaging is a uniquely personal and private space relative to the noisy commons of Facebook’s News Feed, so the company needs to be careful in how it approaches advertising and monetization on these two platforms. With both now having over 1.2 billion users, twice the size of the last audience number we have for Instagram, they could be generating significant revenue at this point but aren’t. It would be great to hear more about how that will change without ruining the user experience.
The last area I’m really curious about is hardware, where Facebook now has a dedicated team under former Googler Regina Dugan. Her focus – and that of the team she leads – has always been cutting edge innovation of the kind that doesn’t necessarily make it into products. I expect that’s where we’ll continue to see the bulk of the hardware effort at Facebook heading. But I think we could easily see some examples of those projects at F8 and I’d love to see both why Facebook is pursuing some of these areas which seem disconnected from its core business and how it will make those efforts pay off over time. We could see licensing models or partnerships and, in some cases, we may even see Facebook go beyond its current small-scale dabbling in hardware through Oculus into something more mainstream and scalable.