This week, I was driving in my neighborhood when I spotted that most American of sights: a bunch of kids running a lemonade stand, waving signs and trying to flag down passing cars. In some ways, it seemed like a great business opportunity – the temperatures where I am have rarely dipped below the high 90s Fahrenheit lately. And yet I didn’t stop – not because I don’t like lemonade (or kids), but because I simply don’t carry cash anymore, and I’m fairly sure the neighbor children weren’t taking credit cards. That got me thinking about all the people and sectors of our economy which are still dependent on cash, and how they might be affected by our increasingly cashless society.
Cash is in Decline
Whether anecdotally or based on solid data, I think most of us have a sense that cash is in decline. One study from last year suggests that cash is the preferred payment method of just 11% of US consumers, with 75% preferring cards. In other markets such as China, cash is dying out even more quickly, with mobile payments increasingly eating into both its share and that of cards. Though my local dry cleaner in New Jersey was a rare (and suspicious) exception, I very rarely come across businesses that don’t take cards, to the extent that it now really takes me aback when it happens. For many of us these days, credit and debit cards and to a lesser extent mobile payments are making cash largely irrelevant. I still have a huge jar of loose change I accumulated over many years and which now mostly gets used for the occasional school lunch or visits from the tooth fairy, but not much else.
But not for Everyone
However, assuming that this pattern holds for everyone would be a mistake. There are still big sectors of the economy, and large groups of people, who remain heavy users of cash and heavily dependent on it, and as others move away from it, that’s increasingly going to cause them problems. Sadly, this likely applies most to some of the more vulnerable and marginalized parts of our society, who will be least in a position to make the changes necessary to keep up as the rest of society moves on.
Here are just a few examples of people or businesses still dependent on cash:
Homeless people and others who ask for money on the streets
Charity workers soliciting cash donations in public areas
Manual and casual laborers who get paid in cash, either for convenience or for tax reasons
Those who don’t have bank accounts or credit cards, including many without regular incomes
The very young, also unlikely to have bank accounts
Anybody who works based on tips, from waiters and waitresses to maids and bellhops in hotels to valet parking attendants
Small local retailers and restaurants who can’t justify high credit card processing fees on mostly small purchases
The list could go on much longer than that, but the point is that there are those who are in some cases heavily dependent on cash and relatively powerless to make the changes necessary to keep up. These are often among the poorer and least educated people in our society, and therefore those with least access to technology, the traditional banking infrastructure, or information about how to adapt.
Tech Has Offered Partial Solutions
The tech industry has offered partial solutions, but mostly in self-serving ways. Payment processing company Square has transformed many a small retailer or producer from a cash-only business to one that can take credit cards and even Apple Pay, and created ways for those without traditional cards to carry balances and make payments with their phones. Amazon has introduced methods for those who deal mostly in cash to obtain one-off or refillable cards to be used to pay for things on its site. Venmo has turned erstwhile cash transfers into electronic payments. But these solutions mostly tear down limits to the addressable markets for their own products, without necessarily expanding economic opportunity or promoting inclusion, while also often being based on internet and mobile technology not available to all.
But It Needs to Do More
What we need are solutions for the rest of society, and especially ways for those without access to the internet or smartphones to receive non-cash payments. What about an app that allows patrons or would-be donors to set up a transaction, and lets the recipient walk into a bank or store to pick it up in cash with a privately shared code? Or an app that allows users of basic smartphones to receive payments and carry a balance without creating an ongoing relationship with the payer? What about a service that would provide meals, access to beds and other facilities, or other needed items to the homeless based on donations from smartphone users? Technology has enormous potential to reduce friction and make payments simpler, but what we need are innovations that do the same on the receiving end, including in ways that don’t themselves require technology in the recipient’s hands.
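As a thought experiment, here is a minimal sketch of how the code-based cash pickup idea might work on the back end. Everything here is hypothetical (the PickupService class, its methods, the code format); it exists only to illustrate the flow of funding a transaction, sharing a one-time code, and redeeming it in person.

```python
import secrets

class PickupService:
    """Hypothetical back end for donor-funded cash pickups.

    A donor funds a transaction and receives a short one-time code;
    the recipient shares that code in person at a participating bank
    or store, which pays out the cash and settles electronically.
    """

    def __init__(self):
        self._pending = {}  # code -> amount in cents

    def fund(self, amount_cents):
        # Generate an unguessable one-time pickup code for the donor
        # to share privately with the recipient.
        code = secrets.token_hex(4).upper()
        self._pending[code] = amount_cents
        return code

    def redeem(self, code):
        # Called by the bank/store terminal; pays out once, then
        # invalidates the code so it cannot be reused.
        amount = self._pending.pop(code, None)
        if amount is None:
            raise ValueError("unknown or already-redeemed code")
        return amount

service = PickupService()
code = service.fund(2000)    # donor funds $20.00
print(service.redeem(code))  # terminal pays out 2000 cents
```

A real service would obviously need identity safeguards, fraud controls, and settlement with the card networks; the point of the sketch is only that the recipient’s side needs nothing more than a memorized code.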
I feel like calling on the tech industry to step up to big societal problems has been something of a theme lately in my columns, but I can’t help but think that this is yet another area where those already most on the fringes of society will just be left further marginalized by technology rather than brought into the fold by it. It doesn’t need to be that way: the bright minds who have created so many technologies that help us deal with our “first world problems” can surely find ways to help those with more biting and pressing challenges as our society continues to evolve.
Much has been said and speculated about the pricing of Apple’s iPhone 8. iPhone rumors are especially hot in the summer, thanks to the slow news cycle and publications hunting down every story and angle that will generate interest, and the iPhone 8 fits that need.
This is one of the most asked questions I get from clients, readers, and in particular the Wall St. community of investors. There is a range of concerns which I will cover and share my thoughts and perspective on, but I think it’s helpful to remember why–besides being a 10th anniversary year–this particular iPhone launch is of the utmost interest, and perhaps importance to Apple as a company.
As of late, I’ve been wondering if we are thinking about the evolution of automobiles, transportation, commuting, etc., completely backward. I’ve been reading dozens of reports and research notes from component and supply chain vendors on electric vehicles, and it is clear a big trend toward electric powertrains is upon us. However, everyone assumes, for the moment, that future autonomous vehicles and commuter transportation systems will look similar to the cars we know today. I believe a form factor shift will take place as well at some point.
A Nielsen Company audience report published in 2016 revealed that American adults devoted about 10 hours and 39 minutes each day to consuming media during the first quarter of 2016. This was an increase of exactly an hour over the same period of 2015. Of those 10 hours, about 4½ hours a day are spent watching shows and movies.
During the same year, the Deloitte Global Mobile Consumer Survey showed that 40% of consumers check their phones within five minutes of waking up and another 30% check them within five minutes of going to sleep. On average we check our phones about 47 times a day, a number that grows to 82 times if you are in the 18-24 age bracket. In aggregate, US consumers check their phones more than 9 billion times per day.
Any way you look at it, we are totally addicted to screens of any form, size, and shape.
While communication remains the primary reason we stare at our screens there are also tasks such as reading books or documents, playing card or board games, drawing and writing that we used to perform in an analog way that are now digital. And, of course, there is content consumption. All adding up to us spending more time interacting with some form of computing device than with our fellow human beings in real life.
I see three big technology trends in development today that could shape the future of our screen addiction in very different ways: ambient computing, Virtual Reality and Augmented Reality.
Ambient Computing: the Detox
Ambient computing is the experience born from a series of devices working together to collect inputs and deliver outputs. It is a more invisible computer interaction facilitated by the many sensors that surround us and empowered by a voice-first interface. The first steps of ambient computing are seen in connected homes, where wearable devices function as authentication devices to enable experiences such as turning the lights on or off or granting access to buildings or information. The combination of sensors, artificial intelligence, and big data will allow connected and smart machines to be much more proactive in delivering what we need or want. This, in turn, will reduce our need to access a screen to input or visualize information. Screens will become more a complement to our computing experience rather than the core of it.
To get a feel for how this might impact average screen time, think about what a device such as a smartwatch does today. While a screen is still involved, it is much smaller and shows the most important and valuable information without drawing you into the device. I often talk about my Apple Watch as the device that helps me manage my addiction. It allows me to be on top of things without turning each interaction into a 20-minute screen soak. Another example is the interaction you might have with Google Home or Alexa when you inquire about something. Today, for instance, I asked for the definition of “cabana” as my daughter wanted to know. I got what I needed in less than 30 seconds: “a cabin, hut, or shelter, especially one at a beach or swimming pool.” Had she gone to Google Search for the definition, I guarantee it would have taken a good 10 minutes between reading through the results and looking at pictures, with the effectiveness of the search not being any better because of the screen.
While not a total cure, ambient computing could provide a good detox program that will allow us to let go of some screen time without letting go of our control.
Virtual Reality: the Ultimate Screen Addiction
Virtual Reality is at the opposite end of the spectrum from Ambient Computing, as it offers the ability to lose yourself in the ultimate screen experience. While we tend not to talk about VR as a screen, the reality is that, whatever experience you are having, it is still delivered through a screen. A screen that, rather than being on your desk, on your wall, or in your hand, is on your face through a set of glasses of various shapes.
I don’t expect VR to be something we turn to for long periods of time, but if we have ever complained about our kids or spouses having selective hearing when they are gaming or watching sports, we’ve got another thing coming!
There is talk of VR experiences that are shared with friends, but if multiplayer games are anything to go by, I expect those “share with friends” moments to be a minor part of the time spent in VR. With VR being so much more immersive, being in an experience with someone you do not know, as you might in traditional gaming, could feel more involved and overwhelming. Coordinating with actual friends might be effort worth making for a music or sports event, but maybe not so much if you are just playing a game.
Escapism, the desire to be cut off from reality, will be the biggest driver of consumer VR.
Augmented Reality: the Key to Rediscovery
Augmented Reality is going to be big, no question about it. Now that Apple will make it available overnight to millions of iPhones and iPads as iOS 11 rolls out, consumers will be exposed to it and engage with it.
What I find interesting is the opportunity that AR has to reconnect us to the world around us. Much of the big excitement around Pokemon Go was that people went outside, walked around, and exercised. Because humans do not seem to be able to do anything in moderation, that feel-good factor vanished quickly, as people paid more attention to the little creatures than to where they were going, culminating in incidents of trespassing, injuries, and fights!
That said, I strongly believe there are many areas where AR can help in our rediscovery of the world around us, from education to travel to nature. Think about Google Translate and how it lowers the barrier of entry to travel in countries where you do not speak the language.
The trick will be not to position the AR experience as the only experience you want to have. AR should fuel an interest to do more, discover more, experience more.
Of course, developers and ecosystem owners are driven by revenue rather than the greater good of humanity. Yet I feel that, lately, concern about the impact technology is having on our social and emotional skills is growing enough to spur real interest in change.
Ultimately, I believe that our addiction to screens is driven by a long list of wrong reasons. Our obsession feeds off our boredom, our feeling that idle time is unproductive time, and the false sense of safety of connecting to others through a device rather than in person. New technologies will offer the opportunity to change if we really want to.
Last month I wrote a piece in the Think.tank stating that I believed Apple was going to be doing major work in AR to advance the personal user interface by mixing virtual and real worlds into their mobile platforms. We are already seeing some fascinating AR examples coming out of early developers who have been using ARKit to create early AR applications for iOS 11. (Check out @madeforarkit on Twitter to see many of these early examples.) In that article, I also suggested, based on some patents, that while the iPhone is the best vehicle for Apple to deliver AR at first, Apple will eventually add some form of eyewear tied to AR and MR to its portfolio. These new products will offer an entirely new way to interact with mixed reality in the real world.
Having just spent a few weeks vacationing in a part of the US where an abundance of corn and limestone-filtered water, along with a predilection for distilled beverages led to the creation of our country’s most famous native spirit—bourbon—I’ve regained a sense of life’s priorities: family, food and fun. (And for the record, the Kentucky Bourbon Trail is a great way to spend a few days exploring that part of the world—especially if you’re a fan of the tantalizing golden brown elixir.)
Of course, while I was there, I also couldn’t help noticing what sort of technology was being used (or not) and how people think about and use tech products in that part of the country.
Within the many distilleries I visited, the tech influence was relatively modest. Sure, there were several temperature monitors on the mash and fermentation vats, a few industrial automation systems, and I did see one control room with multiple displays and a single Dell server rack that monitored the process flow of one of the largest distilleries, but all-in-all, the process of making bourbon is still decidedly old school. And for the record, that just seems right.
As with many traditional industries, the distilled spirits business has begun to integrate some of the basic elements of IoT technologies. I have no doubt that it’s modestly improving their efficiency and providing them with more raw data upon which they can do some basic analysis. But it also seems clear that there are limits to how much improvement those new technologies can make. With few exceptions, the tools in place appeared to be more focused on codifying and regulating certain processes than really driving major increases in production.
Ensuring consistent product quality and maximizing output are obviously key goals across many different industries, but the investments necessary to reach these outcomes and the return on investment isn’t necessarily obvious for any but the largest companies in these various industries. And that’s a challenge that companies offering IoT solutions are going to face for some time.
What became apparent as I observed and thought about what I saw was that the technology implementations were all very practical ones. If there was a clear and obvious benefit, along with a comfort factor that made using it a natural part of the distillation, production, or bottling process, then the companies running the distilleries seemed willing to deploy it. And if not, well, that’s why there are still a lot of traditional manufacturing processes in place.
That sense of practicality extended to the people I observed as well. People I saw there were using products like smartphones and other devices as much as people on the coasts—heck, my 93-year-old mother-in-law has an Amazon Echo to play her favorite big band music, uses an iPad every day to play games, and maintains a Gmail account to stay in touch with her children, grandchildren, and great-grandchildren—but the emphasis is very much on the practical, tool-like nature of the devices.
I also noticed a wider range of lower-cost Android phones and fewer iPhones being used. Of course, much of that is due to income discrepancies. The median household income in the commonwealth of Kentucky is $43,740, which is 19% lower than the US median of $53,889 according to the latest US Census Bureau data, and barely more than half the San Francisco county median of $81,294. Given those realities, people in many regions of the US simply don’t have the luxury of getting all the latest tech gadgets whenever they come out. Again, they view tech products as more practical tools and expect them to last.
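The income comparisons above are easy to sanity-check with the Census figures quoted in the paragraph:

```python
ky = 43_740  # Kentucky median household income
us = 53_889  # US median household income
sf = 81_294  # San Francisco county median household income

# How far below the national median Kentucky sits
pct_below_us = (us - ky) / us * 100
print(f"Kentucky is {pct_below_us:.0f}% below the US median")  # -> 19%

# Kentucky as a share of the San Francisco median
share_of_sf = ky / sf * 100
print(f"Kentucky is {share_of_sf:.0f}% of the SF median")  # -> 54%
```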
There’s also a lot more skepticism and less interest in many of the more advanced technologies the tech industry is focused on. Given the limited public transportation options, cars, trucks and other forms of personal transportation are extremely important in this (and many other) part(s) of the country—I’m convinced I saw more car dealership ads on local TV and in local newspapers than I can recall seeing anywhere—but there’s absolutely zero discussion of any kind of semi-autonomous or autonomous driving features. People simply want good-looking, moderately priced vehicles that can get them from point A to point B.
In the case of AI and potential loss of jobs, perhaps there should be more concern, but from a practical perspective, the bigger worries are about factory automation, robotics and other types of technologies that can replace traditional manufacturing jobs, which are more common in many parts of middle America.
Also, the idea that somehow nearly everything will become a service seems extraordinarily far-fetched in places similar to where I visited. That isn’t to say that we won’t see service-like business models take hold in major metropolitan areas. However, it’s much too easy to forget that most of the country, let alone the world, is not ready to accept the idea that they won’t really own anything and will simply make ongoing monthly payments to untold numbers of companies providing them with everything they need via an equally large number of individual services.
As Facebook’s Mark Zuckerberg has started to explore, an occasional view outside the rose-colored perspective of Silicon Valley can really help shape your perspective on the real role that technology is playing (or might soon play) in the modern world.
Another earnings season is upon us: the public tech companies will begin reporting second calendar quarter results later this week, starting with Netflix on Monday and moving on to others like Qualcomm, T-Mobile, and Microsoft later in the week, with others to follow over the next couple of weeks. Here’s what I’m going to be looking for – and what I suggest you look for – as some of the big companies report.
With the release of the Zen architecture and the Ryzen product family for consumer PCs, AMD started down a path of growth in the processor market that it has been absent from for basically a decade. The Ryzen 7 processor family directly targets the Intel Core i7 line of CPUs that have been incredibly dominant and turns the market on its side by doubling core and thread counts at like price points. The platform surrounding the CPU was modernized, leaving very little on the feature list that AMD couldn’t match to Intel’s own. Followed by the Ryzen 5 launch a few weeks later, AMD continued the trend by releasing processors with higher core and thread counts at every price bracket.
More recently, the EPYC server and data center processor marked AMD’s first entry into the enterprise market since Opteron, a move that threatens at least some portion of the 99.5% market share Intel currently holds. By once again combining higher core counts with aggressive pricing, EPYC will be a strong force in the single- and dual-socket markets immediately, leaving the door open for further integration with large data center customers that see firsthand the value AMD can offer compared to the somewhat stagnant Xeon product family.
Though reviews aren’t launching for another couple of weeks, on Thursday AMD showed all of its cards for the summer’s hottest CPU launch, Ryzen Threadripper. With the hyper-aggressive naming scheme to go along with it, Threadripper will be a high-core-count processor and platform, based on the EPYC socket and design, targeting the high-end desktop (HEDT) market that Intel has had to itself for nearly that same 10-year window. Intel was the first to recognize the value of taking its Xeon product family, trimming features slightly, and then selling it to eager PC enthusiasts who want the best of the best. Families like Sandy Bridge-E, Broadwell-E, and most recently Skylake-X, released in June, have dominated the small but very profitable segment of the market for overclockers, extreme gamers, and prosumers who need top-level multi-threaded performance.
CEO Lisa Su and CVP of marketing John Taylor took the wraps off the clocks, core counts, and prices in a video launched on the company’s YouTube page, along with a blog post from SVP of compute Jim Anderson, showing confidence in AMD’s message. Available in early August, Threadripper will exist as a 12-core/24-thread 1920X with frequencies as high as 4.0 GHz for $799 and as a 16-core/32-thread 1950X hitting the same 4.0 GHz for $999. No doubt these are high prices for consumer processors, but compared to the competing solutions from Intel, AMD is pricing them very aggressively, following the same strategy that caused market disruption with the Ryzen 7 and 5 releases. Intel’s $999 Core i9-7900X is a 10-core/20-thread part, putting it at a disadvantage in multi-threaded workloads despite having an advantage in single-threaded performance based on architectural design.
Impressive speeds aside, what does this mean for AMD as we get into the heat of summer? I expect Ryzen Threadripper to be a high-demand product compared to the Skylake-X solution, giving AMD the mindshare and high margin space to continue seeing the benefits of its investment in Zen and the Ryzen family. Intel had already reacted to the Ryzen 7 launch with price drops and adjustments to the timing of Skylake-X but arguably not to the degree necessary to maintain price-to-performance leadership across the board. Threadripper will offer heavy multi-taskers, video editors, 3D animators, and other prosumer style users a better solution at a lower price point based on performance estimates.
AMD did release one benchmark metric for us to analyze until full reviews come out in August. Cinebench R15 is an industry standard test that runs a ray-traced rendering pass on a specific data set, timing it to generate a higher-is-better score. The Core i9-7900X, the current flagship part from Intel that sells for $999, generates a score of 2186. The upcoming Threadripper 1920X (12 cores) scores 2431, 11% higher than the Intel processor that costs $200 more. At like-for-like pricing, the Threadripper 1950X (16 cores) scores 3046, a full 39% faster than what Intel currently has on the market.
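Since Cinebench is a higher-is-better score, the relative-performance figures fall straight out of the numbers AMD published:

```python
i9_7900x = 2186  # Intel Core i9-7900X, $999
tr_1920x = 2431  # AMD Threadripper 1920X, $799
tr_1950x = 3046  # AMD Threadripper 1950X, $999

def uplift(new, base):
    """Percentage by which `new` outscores `base` (higher is better)."""
    return (new - base) / base * 100

print(f"1920X vs 7900X: +{uplift(tr_1920x, i9_7900x):.0f}%")  # -> +11%
print(f"1950X vs 7900X: +{uplift(tr_1950x, i9_7900x):.0f}%")  # -> +39%
```

The 1920X figure is the notable one: the cheaper AMD part still outscores Intel’s flagship in this multi-threaded test.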
Intel’s roadmap includes Skylake-X parts with up to 18 cores, but we won’t see them until September or October, and prices there reach as high as $1999. Along with the high-end desktop announcement of Threadripper, AMD also revealed some details on the Ryzen 3 processor SKUs that will compete directly with the high-volume Intel Core i3 family. The Ryzen 3 1200 will have 4 cores, 4 threads, and a clock speed running up to 3.4 GHz, while the Ryzen 3 1300X is 4-core/4-thread with a 3.7 GHz peak frequency. The advantages AMD offers here remain in line with the entire Ryzen family: higher core counts than Intel at the same or better price. The Core i3 line from Intel runs 2-core/4-thread configurations, so Ryzen 3 should offer noticeably better multi-threaded performance with four “true cores” at work. No pricing information is available yet, but the parts should be on store shelves July 27th, so we will know soon.
In the span of just five months, AMD has gone from a distant competitor in the CPU space to a significant player with aggressive, high-performance products positioned to target market share growth. The release of Threadripper will spike the core-count race in consumer devices, enabling further development for high-performance computing but also giving AMD an avenue for higher-margin ASPs and the “halo product effect” that attracts enthusiasts and influences the buying decisions for all product families below it. AMD has a long way to go to get back to where it was in 2006, but the team has built a combination of technology and products that might get it there.
There have been numerous columns by thought leaders over the past couple of weeks commemorating the 10th anniversary of the introduction of the iPhone and its impact on consumers, businesses, tech, and various industries. As sort of a companion piece, I’ve been thinking about some other momentous changes in tech over the past decade…as well as some areas we thought we’d be further along.
What Has Changed in a Big Way.
A Computer In Your Pocket. If we had known then what we know now, we would have called smartphones ‘pocket computers’ or some alternative moniker. It’s a phone, camera, media player, navigation device, e-reader, gaming device, health/fitness tracker, etc. – all in your pocket, doing much of what a PC can do, and in many instances more easily, quickly, and nimbly.
Cloud Everything. The other huge growth space over the past ten years has been cloud. It has had an enormous impact on the enterprise market. But one way to think of how cloud has changed things in a big way for the everyday consumer is the fact that if your laptop breaks or you get that dreaded ‘blue screen of death’…it is not ‘cataclysmic’ in the same way it used to be. If e-mail, documents, and media are stored in the cloud (as they should be), the PC has become the device to create content and access your (cloud stored) content.
Broadband Ubiquity. We are now accustomed to having broadband nearly everywhere. It might still be in vogue to complain about the cable company or the relative monopoly in fixed broadband, but broadband speeds have improved steadily, and prices have remained stable. In wireless, LTE is ubiquitous and fast in most places, and data plans allow for nearly unlimited consumption, with some safeguards.
Massive Improvements in Voice Recognition. AI and services such as Siri, Alexa, Google Assistant, Cortana, and so on would not be possible without the significant improvement in voice recognition. It’s gone from barely usable in voice response systems to working with fairly high accuracy, enabling a new wave of apps, devices, and ways to communicate.
Rise of Social Networking. Facebook, Twitter, Snapchat, and LinkedIn have become a huge part of our communications, and content creation/consumption fabric. Each of them with its own purpose, and some used by certain market segments more than others.
Content Creation and Consumption. Three big changes here. One is the proliferation of content sources and channels (Netflix, Hulu, YouTube, etc.), with the full continuum of content quality, from fantastic to awful. Second is how we consume content, with the expectation that all content is available on-demand. And third is the growth of user-generated content, enabled by digital technology and smartphone video cameras.
What Hasn’t Really Changed.
This list applies to aspects of mainstream tech that haven’t really changed or changed less than we would have thought by now.
Still Using PCs. Smartphones are ubiquitous and tablet growth has plateaued. But the ‘post-PC’ era has not materialized as some predicted during the peak of the 2010-14 tablet boom. Most consumers and business professionals still mainly use a PC or laptop as their default device. There’s lots of experimentation with hybrid devices, Chromebooks, and the like, but PCs look like they’re here to stay, in some shape or form, for the next several years.
The E-Mail Experience Has Not Changed. Texting and enterprise messaging solutions such as Slack have proliferated. Voicemail is hardly used by anyone under the age of 30. But e-mail is still the mainstream form of messaging, especially for professional purposes. And nobody has yet found the formula for making e-mail more manageable. There’s some AI, and features at the margins, but most of us still wade through our email (and more and more spam) in the same way we did 10 years ago.
Digital Wallet/Mobile Payments Not Mainstream. Square, Venmo, Apple Pay, and Samsung Pay are all great, in that the technology is there and the user experience is good. But the digital wallet and mobile payments are still not mainstream enough that you really can leave your wallet at home. This is an industry problem, not a technology problem.
TV Hasn’t Really Changed. There has been growth in cord cutting, skinny bundles, and ‘sticks’ to enable access to Netflix, Amazon, and the like. But if anything, the experience of watching TV has become more complex, and each ‘skinny bundle’ offering comes with a major ‘asterisk’ (i.e. no local channels, sports, CBS, etc). This category is still in the ‘going to get worse before it gets better’ phase, like living through a house renovation.
Broadband Remains a World of Haves and Have Nots. Above, I mentioned how fixed and wireless broadband speeds for the average urban/suburban customer have continually improved. But for some 15% of U.S. households, and a couple of billion users globally, fast broadband remains elusive. There’s lots of work going on in this area, with folks from Google to Qualcomm to Facebook exploring creative solutions to spread broadband to poorly served areas. But this is tough slogging, and will remain one of tech’s biggest challenges over the next ten years.
Electronic Medical Records. With smartphones, apps, and digitization of nearly everything, it’s still surprising to me that most doctor visits involve filling out some poorly photocopied form, with one’s personal info and medical history, by hand. Tons of money has been thrown at this, and there’s been some change, but it’s really in pockets.
As you all can imagine, augmented reality has been given a speed boost thanks to Apple. Apple’s release of ARKit has a number of its competitors scrambling to come up with a competitive strategy on a timeline that will keep them from falling too far behind. This is going to be a challenge, as Android is going to struggle as an AR development platform. Understanding this point is central to the argument I am going to make for Apple/iOS in China.
In what was one of the most packed WWDC keynotes in recent memory, the Apple Watch got under 15 minutes of stage time, and health and fitness features got only a fraction of that. But that’s not really indicative of all the additions to Apple’s health, fitness, and broader wellness features being made this year, and it’s certainly not indicative of Apple’s commitment to the space. I spent some time this week getting briefings about both what’s new in Apple’s own software, and what developers and others are bringing to the party.
Four Key Domains in Health
Apple’s focus in health, fitness, and wellness is clear from the moment you open its Health app – it highlights four key domains for which the app can track data:
Apple’s approach to everything it does has always featured hardware, software, and services, and Health is no exception. But in this area, perhaps to a greater degree than elsewhere, Apple relies on third parties, with much of the heavy lifting in three of the four domains listed above being done by outsiders, and Apple focusing mostly on the Activity area with its own products and features. Those third-party contributions are, of course, enabled by tools provided by Apple, mostly in the form of SDKs and APIs which outsiders can use to build software and integrate into Apple’s various systems.
An Increasingly Comprehensive Play in Medical Too
That’s especially true in a fifth domain, which isn’t as visible in that home tab of the Health app, but is nonetheless important to Apple’s efforts in this area, and I’ve called that medical for the sake of distinguishing it from the other domains. The table below illustrates the roles played by Apple’s own first party products, its tools for third parties, and then the products and services provided by third parties in all this work, with areas that will change or have augmented features or functionality this year highlighted in red:
As you can see, the combination of Apple’s own products and features and those provided by third parties is at this point pretty comprehensive, and you can hopefully also see the number of areas where new features in watchOS 4 are enabling new functionality, either in the built-in apps and hardware or through third parties.
What’s New This Year
I want to drill down briefly on some of the things that are new this year, because they got such short shrift at WWDC but some of them are pretty notable. Here they are in bullet point form:
Enhancements to the Workouts app: following the new hardware from last year which introduced GPS and water resistance to the Watch, watchOS 4 adds additional functionality, including more sophisticated tracking of swimming workouts, optimized tracking for High Intensity Interval Training (HIIT), easier switching between workout types and general usability improvements.
Changes to the Activity and Breathe apps: each of these apps is getting some subtler changes, with the Activity app getting some smarter coaching which is more personalized than the current more generic reminders and prompts, with some really clever stuff coming here; and the Breathe app getting better explanations of why you might want to take a break and do some deep breathing, in the form of described health benefits.
GymKit and connected fitness machines: I saw a demo of the new integration with fitness machines, and this is going to be a big deal for anyone who does gym workouts, which the Watch naturally can’t track as well using GPS or motion sensors. The integration here is very clever, with data flowing both ways, meaning that data from the Watch can show up on the much larger, always-on screen on a treadmill or exercise bike alongside the data it captures itself. I think there’s an opportunity here for deeper integration with iOS devices to replace the corded connection some machines offer today, for things like projecting notifications or videos onto the gym machine, but the fitness-focused integration Apple is beginning with is a great start. Getting these machines quickly adopted in gyms will be the key, and I understand Apple will be talking more about how this is going to happen later in the year.
Core Bluetooth on the Watch for more devices: Apple has supported Core Bluetooth on the Watch for heart rate monitoring chest straps from the beginning, but not for a broader range of devices. In watchOS 4, it’s opening up Core Bluetooth to other devices too, enabling other body sensors and even medical devices like continuous glucose monitors, as well as connected fitness equipment like tennis racquets or golf clubs.
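For developers, the change essentially means the same Core Bluetooth pattern familiar from iOS can now target a broader range of accessories on the Watch. Here is a minimal sketch of that scan-and-connect pattern (the service UUID 180D is the standard Bluetooth heart-rate service, used purely as an illustration; this is my own sketch, not Apple sample code):

```swift
import CoreBluetooth

// A minimal Core Bluetooth central: scans for accessories advertising the
// standard heart-rate service (0x180D) and connects to the first one found.
class HeartRateScanner: NSObject, CBCentralManagerDelegate {
    let heartRateService = CBUUID(string: "180D")
    var central: CBCentralManager!

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    // Scanning may only begin once the radio reports .poweredOn.
    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        if central.state == .poweredOn {
            central.scanForPeripherals(withServices: [heartRateService])
        }
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        central.stopScan()
        central.connect(peripheral)
    }
}
```

The same discover-and-connect flow would presumably apply to any accessory watchOS 4 newly permits, just with a different service UUID for, say, a glucose monitor or a connected racquet.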
The Medical Domain is Coming Into its Own
Beyond the things Apple itself is doing, I got to see quite a few apps and devices from third parties in my meetings this week, and one of the things that impressed me most was the innovation being done in what I labeled the medical domain above. Between the original HealthKit and the additions since in ResearchKit and CareKit, Apple is enabling some really interesting work by doctors, device vendors, and medical facilities which leverages Apple devices to do things that would otherwise have been impossible or a lot more difficult. Some examples include:
The Propeller Health asthma inhaler sensor and accompanying app, which automatically track when the inhaler is used and also invite users to track their symptoms and environmental conditions manually, pulling in third party data. The solution is designed to help increase “adherence” (the faithfulness with which patients adhere to a treatment plan such as using a preventative inhaler daily) and understanding of what triggers symptoms for better management. These products have been available through medical professionals but are now going direct to consumers as well.
A WebMD Pregnancy app which includes a ResearchKit component that is allowing researchers to learn a lot about pregnant women and their symptoms, which are surprisingly under-researched, especially in remote and rural areas far away from where most studies are conducted. The app is going to be most useful with women who have access to regular blood pressure and other vital signs measurement, but is already generating a much higher participation rate from rural areas.
A Sharp Health app for helping eye surgery patients get ready for and recover from their procedures. The app helps ensure that patients don’t have surgeries canceled because they forget about necessary steps to take beforehand like stopping blood thinners, and again helps with adherence after surgery, as well as giving them options for talking to medical professionals if they have questions during their recovery. The basic model here could certainly be applied to other procedures by other health systems too.
More Work to be Done
As I said earlier, it’s starting to feel like Apple has an increasingly comprehensive set of hardware, apps, tools for third parties and a growing ecosystem of apps and devices from others in this health domain. But in quite a few of these areas, it feels like we’re still just scratching the surface of what can be done, especially in the medical field, where things still tend to move very slowly and where comprehensive electronic patient records are still more of a dream than a reality. But Apple is helping here by providing tools that professionals and companies with the appropriate medical pedigrees and qualifications can tap into, while focusing on what it does best.
One question I had for the Apple folks I talked to was how it decides which domains to play in itself versus leaving them to third parties – for example, it’s added some sleep-related functionality such as the Bedtime feature, but still doesn’t do its own sleep tracking, and doesn’t really have much of a first-party play in nutrition tracking either. The answer I got was the classic Apple one: Apple tends to participate directly in a market only where it feels like it can do something unique and different. For now, that means there are plenty of areas where others are better qualified and equipped to make a difference and provide the features and functionality users need. Discovery of these in the App Store and elsewhere is going to be key for enabling users of Apple’s ecosystem to make the most of all this, and that’s an area where the App Store changes Apple announced at WWDC should help.
Apple is never going to be done in this area, and neither are its partners or its competitors. There’s lots of work still to be done by all these players in a field that I suspect is going to receive increasing attention from the tech industry over the coming years, even as politicians argue over the best ways to manage the funding of healthcare and the structure of insurance plans that will pay for much of this. I’m hopeful that we’ll see much faster change and greater benefits coming on the technology side, and this week I saw promising signs in that direction.
If you talk to a lot of the engineers and dreamers in Silicon Valley, especially ones over 35, they will most likely tell you that they are fans of science fiction. This genre of movies, comic books, and novels was huge in the first half of the last century and remained fairly steady in the latter part of the century, when most of today’s engineers were born.
I have been thinking about writing something on women in tech and what we have been witnessing over the past few months and I had resisted thus far. Last week, however, as I was a guest on the DownloadFM podcast I was asked my opinion about the many stories we have read in the press concerning childish CEO behavior and continued allegations of sexual harassment, starting from Uber to 500 Startups, and I could no longer shy away. After all, I am in tech, I am a woman, and I have an opinion on the topic.
We expect more from Men in Tech
Women face discrimination, chauvinism, and harassment in pretty much any business they are in. For some reason, however, I think the disbelief around some of the stories that have emerged in tech comes from assuming that men in tech would be different, evolved, better. Better than the men who run Wall Street, and better than the men on Capitol Hill. That hope is buoyed by the fact that men in tech are by and large well-educated and well-travelled, and they are entrusted with building our future. Men in tech are also by and large white and entitled, and often have poor social skills when it comes to women. Of course, there are exceptions, but they are, alas, exceptions.
You start to believe it is You, not Them
I have been a tech analyst for 17 years, and while I have seen more women in tech, I still get excited when there is a line for the ladies’ bathroom at a tech conference. I still pay attention to how long it takes for a woman to be on stage at those tech conferences. And while it seems that all the big corporations have increased the number of women on stage, if you pay attention, you notice that most of those women are performing demos and are not upper management.
When I got pregnant with my daughter, female as well as male colleagues told me that my priorities would change and I would not work as hard. I was expecting it from my male colleagues, but it was disappointing to hear from my fellow female colleagues that I was expected to want to do less. The implication, of course, was that if I did not feel that way, I was a bad mother.
On many occasions, I was told I was emotional; I was asked if it was that time of the month; I was told to grow a pair. In meetings, I have been interrupted and talked over by endless male colleagues, mistaken for my colleague’s secretary, and outright ignored after making the mistake of serving coffee to meeting guests. At the start of the smartphone market, I was handed pink phones with a lipstick mirror. I’d love to ask Walt Mossberg if he ever reviewed one of those! On Twitter, for complimenting an actor’s launch of a tech product, I was told I was “throwing my knickers” at him. I have been the token woman on tech panels, and I was invited as a guest on a radio show because “the audience responds better to women talking tech.” And the list goes on.
Things like this happen all the time to many women. They happen so often that you start to think it is the norm, or that you are reading it wrong and taking it personally. Whether you think it is wrong or not becomes irrelevant though when you consider how hard you worked to get to where you are and how much further you want to go. So, you ignore it, you smile, and move on. You do what Irish reporter Caitriona Perry did in the Oval Office a few weeks ago.
Avoiding Discrimination 3.0
If things have not changed up to now, why is it important that they do? Why does it matter so much that men in tech understand enough is enough? Because what is going to happen when everybody in the room looks alike and behaves the same way? And of course, this applies to gender as well as race, religion, and politics.
We are at a time when we are training machines to think like us. What a scary thought when it comes to women in business. What will happen when machines weigh physical and psychological traits based on the beliefs that dominate society today? What if men who claim they did not know it is not normal to make advances in work situations train computers to think it is normal too? Will women be denied roles a priori based on the belief that “it’s much more likely to be more talking” if too many women are part of the board? Are we really building a better society if we move from paying a woman by the hour for sexual favors to buying an AI-enabled doll that will respond to its master just the way a male engineer has designed it? What will happen if self-driving cars are taught that a woman is more dispensable than a man when it comes to life and death situations?
We can rejoice at having female emojis with more professions, and we should. We should continue to foster STEM among female students, but know that just because they can do the job does not mean they will be given the opportunity to do it. Let’s lean on the strong female role models we have. Let’s be supportive. Let’s have each other’s back. A smart woman said recently that we should not just be happy to be in the room where it happens. We should be sitting at the table and making it happen. So, let’s do that, let’s stop thinking it is us, let’s stop thinking it is normal, and let’s get a seat at the table.
One of the most underappreciated aspects of making hardware is supply chain management. Many good startup ideas have died because inexperienced entrepreneurs tried to make a hardware product and failed miserably at managing the supply chain. Oftentimes, entrepreneurs can make a decent prototype, or product demo, but turning that prototype into a design that can be manufactured at scale, cost effectively, is a rare skill at many companies in the market today.
Apple’s ARKit has been getting a lot of attention, and for good reason. In fact, if you want to keep a close eye on all the really interesting, cool, and innovative things developers are already trying with ARKit, follow the Twitter account called Made with ARKit. This account has been finding, and tweeting, a number of really interesting applications and experiments as developers get to know Apple’s ARKit. There have been two observations around Apple’s ARKit that have stood out to me.
The first is the extremely high praise it has been getting, even from many of Apple’s harshest critics. I’ve seen folks with large followings on Twitter who have been very vocal about their dissatisfaction with Apple’s tools, new OS features, and more, sing Apple’s praises with ARKit. The consensus, not just from developers but even from Apple’s harshest technical and developer critics, is that Apple nailed ARKit and knocked this one out of the park.
The second observation is how excited many developers seem to be over ARKit and the tools themselves. I can’t keep track of the number of tweets from developers remarking on how easy, quick, and natural to their existing development workflow adding ARKit features to their apps has been. It has only been a few weeks, and already a number of big and small developers have been going full speed to bring a plethora of apps, including experiences never seen before, to the iPhone this fall.
As much as we have been studying and tracking augmented reality, even I did not expect it to move this fast. Not just with developers embracing it, but with the incredibly compelling use cases we are seeing them build into their apps that I can actually see real humans using and finding value in. We are, no doubt, about to see a whole new app development era that turns our phones into windows for viewing and engaging with digital objects and the physical world at the same time.
It should come as no surprise that this new app development era will take place on the smartphone first. This is a device everyone already has, and it includes the core technology: processing, GPU, image sensors, and more. These AR experiences have always felt like a natural extension to existing apps for things like commerce, travel, education, games, etc. The key has always been getting a critical mass of developers to take hold of the technology, make it their own, and experiment in new ways. This is what we are on the cusp of watching happen.
It should also come as no surprise that this AR rush of apps and development will happen on iOS first, and in some cases there will be many iOS-only AR experiences. Apple has not only the best and most valuable customers willing to buy and engage with these new apps, but also the most robust and creative developer community of any platform. Apple also has a unique hardware advantage in its control over hardware fragmentation. Apple can guarantee developers a smaller set of hardware variables to develop for in things like CPU/GPU/camera and camera sensor, which allows a much larger active installed base of devices to take advantage of their innovations. ARKit apps will be supported on all A9 and A10 devices. This gives developers hundreds of millions of devices, day one, that can access their apps. No other platform on the planet can offer this massive reach to developers. You can be sure they will maximize it in every way possible.
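That fragmentation control shows up at the API level, too: rather than maintaining a list of supported devices, an app can gate its AR features behind a single runtime check. A minimal sketch, assuming the shipping ARKit API, where `ARWorldTrackingConfiguration.isSupported` is false on hardware older than the A9:

```swift
import ARKit

// World tracking requires an A9 or later chip; checking `isSupported`
// lets an app fall back gracefully on older devices instead of crashing.
func startARSessionIfAvailable(in sceneView: ARSCNView) {
    guard ARWorldTrackingConfiguration.isSupported else {
        // Older hardware: hide the AR feature rather than attempt it.
        return
    }
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .horizontal  // detect surfaces to place objects on
    sceneView.session.run(configuration)
}
```

On Android, by contrast, an equivalent check would return false on the vast majority of the installed base for years, which is exactly the developer-incentive problem described below.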
I also feel consumers are ready; as I pointed out in this article, AR apps and experiences will sneak up on people. This goes beyond something as simple as Pokemon Go, and strikes at the deeper reality that many people experience some form of augmented reality today and just don’t realize it. Interestingly, I came across a survey which asked people the types of apps they had used in the last 30 days, and 3.9% said an augmented reality app. If we just looked at Snapchat and Pokemon Go alone, that number would be significantly larger than 3.9%, but it makes my point that most people don’t realize or associate these as augmented reality. I also don’t think consumers are going to hunt for AR apps on the basis of AR itself. Rather, I think people will discover new things their apps can do, and AR will add value to many existing experiences they have today. Of course, there will be some new games and app types, but it will be the app experience and the feature AR enables that drive consumers to download an app, not that it classifies as “augmented reality.”
Lastly, and I thought this was a controversial opinion until I tweeted it and a bunch of folks responded to me saying it wasn’t: I think Android is really going to struggle with augmented reality for a while. We can use Project Tango as an example, and I am not surprised developers didn’t embrace it en masse. What is going to hit Android hard here is the hardware fragmentation that exists on the platform. It is going to take a few years, at least, for Android devices to reach a critical mass of hardware capable of even remotely decent AR experiences. This means developers won’t waste resources on the platform for some time, since they won’t have a large enough potential customer base to go after. Big apps like Yelp, Facebook, Snapchat, etc., will bring these features to their apps, but many devices won’t support the more cutting-edge features. The bottom line is that Android’s difficult development environment, owing to the thousands of device and hardware configurations, is going to make it a tough AR platform. This opens the door for Microsoft in some regards, with Windows as a second platform for AR development, though minus a mobile phone OS, that could be limited to more semi-mobile or fixed AR experiences. The point remains that Microsoft will be a more developer-friendly platform for AR than Android in the short term, so it will be interesting to see how they move that forward and attract software developers for Windows mixed reality/augmented reality experiences.
This past week, I came across some data from a service called BuzzAngle, which provides something of a unique perspective on the US music market. The service reports total numbers for physical and digital song and album sales and on-demand audio and video streaming. I was struck by a couple of the numbers in its latest report, covering the first half of 2017, and decided to dig a little deeper and chart some of the longer-term trends, which I’m going to share here today.
One of the most fascinating areas of research we get to do these days is to look at the technology behind self-driving cars and try and make sense of this new thrust in automated vehicles.
Like most of us researching this field, I now believe that self-driving cars will, over time, drastically reshape the way we use automobiles and move more and more people either to some type of ride-hailing transportation model or to actual ownership of a self-driving car themselves.
Although this transition may take as many as 20-25 years to move the majority of people to using these types of automated vehicles for their personal transportation, it really is just a matter of time before this happens.
At the moment, this concept is pretty radical, and most people are highly reluctant to turn over the driving of a car they are in to an automated robot driver today. Of course, self-driving cars are not actually ready for prime time, even if the technology to deliver a self-driving car is on the horizon. Most car companies believe they can have fleets of vehicles ready for many major cities to use in an on-call fleet model by 2020-2021, in which a person can call up a self-driving car at will and have it pick them up and take them to their destination. The reality is this is only 3-4 years away. And they also tell me that people will be able to buy fully automated vehicles for their own use as early as 2022-2024.
Like most people who have been in control of the wheel all of their lives, I too was reluctant to go on a test drive in a self-driving car, both to understand how the technology works and to get a grasp of its ultimate potential. The opportunity recently came up for me to do this type of test drive as part of my work with the State of Hawaii and its current Governor, David Ige. I first got involved with helping Hawaii in the late 1990s, when then-Governor Benjamin Cayetano asked me to help work on a program to entice tech companies to Hawaii. Under his leadership, Hawaii passed a special law to give tax incentives to tech companies that would set up offices in the Islands, with the hope of getting more IT students from the islands employed at home instead of having them go to the mainland for jobs. The program was only mildly successful and unfortunately did not meet the real objectives they had hoped for.
During that time I got to meet and work with David Ige, who was a State Senator at the time and, as an electrical engineer, was very helpful in getting this bill passed. He is now the Governor of Hawaii, and over the years he and I have had various conversations about what is hot in tech and Silicon Valley. Since he became governor, at least once a year I visit him at his office to talk about the world of technology and things that I believe will impact the State of Hawaii. In my meeting with him last March, I shared what was happening in the area of self-driving cars, something that he and his transportation folks were already looking at closely. During our talk I suggested that the next time he came out to Silicon Valley, he and I visit some of the major players creating the brains behind self-driving cars, as well as Google, a major player in promoting and creating self-driving car technology and designs.
So in early April, during a scheduled trip to San Francisco, he carved out an afternoon and he and I went to visit Nvidia and Google to get an update on where things are in automated vehicles. My key objective for the Governor was to give him a better idea of what was happening now in this area and get him thinking about creating a plan for the State of Hawaii to allow for testing of self-driving cars soon as well as start to work on what will eventually be state and local laws needed to govern self-driving cars in the Hawaiian Islands.
It was during our visit with Google’s Waymo group that he and I were given a test drive in a Waymo vehicle and got a chance to experience a self-driving car in action. This was fascinating and enlightening, and it made clear to us that the technology to deliver automated vehicles is much closer to reality than many believe. In our test drive, there was a person in the driver’s seat who just pushed the button to start the car and set it in motion. They had put in all of the driving details before we got to the car and, once started, the car took off on the designated route. During that time the driver never touched the steering wheel, brakes, or accelerator, and the car drove and navigated every street light accurately, stopped for pedestrians in crosswalks automatically, and even stopped quickly when a cyclist cut in front of us.
In the right seat was another person with a laptop showing us what the car was seeing. The view was what the cameras and sensors saw and how the car was using these tools to navigate the road ahead, and it made clear that this vehicle was pretty much all-seeing and all-knowing, sensing every line, stoplight, and moving object in 360 degrees around us as we cruised the streets of Mountain View, CA.
Taking a test drive in a self-driving vehicle, and seeing not only how it works but also how flawlessly the technology behind it performed, more than convinced me that the technology itself is ready to deliver on the promise of an automated vehicle sooner rather than later.
But it also made me understand that besides the regulatory issues that have to be solved at the Federal, State, and city levels, as well as the many other things that have to be done at the technology level before we get these types of self-driving vehicles on our streets, convincing people to trust a self-driving car to ferry them around may be a tough sell. I received my license when I was 16 years old and have driven cars and motorcycles ever since. They present a very familiar mode of transportation for me, and after decades of practice, I consider myself an accomplished driver. I suspect that for most people over 30, driving has become second nature, and being in control is something that we like about our driving experience.
Of course, the fact that we can’t control the actions of others is why self-driving vehicles make so much sense. As I saw in the Waymo example, the technology employed in an automated vehicle has 360 degrees of sight, as well as sensors that could anticipate the cyclist I mentioned above and stop well in advance of hitting this person. In essence, this automated car is much smarter than a human driver and can act faster, with greater knowledge of the car’s surroundings, responding quickly to almost all situations it encounters.
I still believe that it will take a lot of convincing before most drivers are willing to give up control of their vehicles and fully trust a self-driving car. If you are a technology early adopter, as I am, then perhaps you will be willing to jump in a self-driving vehicle and let it take you away. In fact, it will be the early adopters who are the first to let a self-driving car serve as their robot chauffeur. For some seniors, and those with issues that keep them from driving, a self-driving car would be a godsend at any age, giving them the flexibility to go anywhere they want once these types of cars hit the road.
However, even with these cars able to be on the road and in fleet service by 2020, and available to purchase by 2022-2024, I think it may take as many as another 10-20+ years before we see a true mass market for self-driving vehicles. Even though the technology will be ready, I sense it is going to take the public much more time to come to trust these automated vehicles before they take what will be a leap of faith and let them cart them around safely.
I’ve been testing Apple’s latest iPad Pro 10.5-inch tablet. It’s a very good piece of hardware, and when iOS 11 moves from beta to full release later this year, the software will represent a significant leap forward, too. With the launch of this product, Apple jumpstarted the debate about whether an iPad can replace a Mac or PC. But as good as the iPad Pro is, I can’t help but think that all the hand-wringing about tablets versus notebooks is just misplaced angst. Today’s users have already chosen their platform, and future generations will likely choose neither, opting instead for increasingly powerful smartphones that will usher in brand new ways of computing.
iPad Pro: Accelerated Iteration Apple launched the first iPad in 2010, and just seven years later this new iPad Pro represents a stunning amount of product evolution. The A10X Fusion chip offers processing power on par with some PC CPUs. The 10.5-inch screen includes new True Tone and ProMotion technologies. The first calibrates the screen’s colors based on the ambient light conditions. The second ramps up or down the screen refresh rate based on the content and also makes using the optional Apple Pencil feel even more natural than before. The optional Smart Keyboard case makes it possible to bang through typing chores much faster than tapping on glass. As with previous iterations, the iPad Pro continues to offer plenty of battery life, and none of these new features diminish that. My unit includes LTE, which means unlike every PC or Mac I’ve ever owned, the iPad Pro is always connected.
The new features coming in iOS 11 are too numerous to list, but many of them are focused on making the iPad more productive. I’m running the public beta, and capabilities such as a viewable file system, support for drag and drop, and improved multitasking mean that I can accomplish more things than ever before on the iPad. But for me, it still can’t replace my notebook.
I’m a long-time tablet fan, and I use an iPad every day, usually after work, to consume content. I’m hooked, and will likely use a tablet for the rest of my days in some capacity. There are clearly many others like me, but I do wonder if the confluence of events back in 2010/2011 that caused many of us to pick up a tablet—namely a PC market that saw innovation slow to a crawl and a smartphone market made up of products with sub 4-inch screens—has now passed. With both of these challenges now addressed, where does this leave the tablet in terms of grabbing new users?
What’s Next? Apple CEO Tim Cook has long argued that an iPad is the best computer for most people because it is less complex and therefore easier to use than even a Mac. It’s a compelling idea, but one whose window of opportunity may have already closed. Many long-time PC and Mac users may love their iPads for consuming content, but ultimately even the new iOS 11 represents too many restrictions for people who have lived with the freedom of a full desktop OS. For these folks, the tablet is additive at best. And in emerging markets where the PC isn’t well-entrenched, people have already chosen the smartphone as their primary computing device. It has a large screen, plenty of compute power, and it’s always connected.
Ultimately, I expect the smartphone—or some future iteration of it—to replace both the PC and the tablet. In the near term that means phones will likely take on more desktop-like capabilities when needed, but longer term it means more fundamental changes. Eventually, augmented reality technologies will mean we’re no longer tapping on glass or staring at 5-, 10-, or 15-inch screens. Some think standalone AR devices will replace smartphones, but I tend to think the smartphone will still power most of these experiences. In fact, I’d argue that at some point smartphone screens may even begin to shrink, before they disappear altogether, replaced by accessories worn on the wrist, ears, and eyes, that serve up a wide range of augmented operating environments and experiences. In fact, Apple will likely lead this charge with iterations of today’s Apple Watch, AirPods, and its long-rumored glasses.
Of course, not everyone will be interested in embracing the smartphone as the one device to rule them all. Which means there will still be a place in the market for notebooks and tablets for years to come. And so Apple’s focus on making the iPad more capable today certainly isn’t a wasted effort. However, it is hard not to see the smartphone as the ultimate computing platform of the future.
I recently came across an interesting private research note from UBS. The study looks at the US market and goes into detail on what is happening with traditional Pay TV from companies like Comcast, Dish, DirecTV, etc., and streaming services like Hulu, Sling TV, YouTube TV, etc. While I can’t share the whole report, I want to dive into two charts from the study.
With reports this week that Samsung is readying a Bixby-powered voice speaker for the home, and an announcement from Alibaba that its entry in the category will be launching next month, it feels as though we’re reaching a tipping point in the market pioneered by the Amazon Echo. It seems as though pretty soon every major platform and device vendor will have an entrant in the market, signaling a new phase in its development. But this market isn’t quite like other markets that have gone before.
A Tipping Point in Voice Speakers
Amazon, which arguably created the voice speaker market with its Echo device in late 2014, had the market largely to itself for a good two years. Then, Google entered the market with its Home device late last year, and this year saw a slew of announcements at CES, mostly of Amazon Alexa-powered speakers, with an announcement last month by Apple and this week by Alibaba, among others. Things certainly seem to be picking up steam, as the diagram below shows:
Apple’s HomePod should be with us later this year, while Tencent has said it’s working on something in this space, Lenovo’s Smart Assistant was announced at CES but hasn’t become available yet, multiple speakers from Microsoft partners including HP and Harman Kardon are on the way, and Samsung is reportedly working on a speaker powered by its Bixby voice interface. On top of all those, there are quite a few others from smaller companies.
A Different Kind of Market
Most markets in consumer technology go through multiple phases, often pioneered by one or two companies who prove out the opportunity, followed by a rushing in of new players as the opportunity becomes obvious to others, and an eventual thinning and consolidation of the market as the winners begin to emerge. In the last few years, the rushing in phase has been characterized by an influx of low-cost Chinese competitors in markets as diverse as smartwatches, drones, virtual reality headsets, fitness trackers, and more.
That hasn’t really happened in quite the same way in the voice speaker market, for one obvious reason: this isn’t just another hardware category where free, off-the-shelf software gets you an instant global presence. Even though Amazon has opened up its Alexa platform for others, and we’ve seen not just speakers but a number of other devices launched that incorporate it, that platform is still severely limited geographically. The Alexa Voice Service which device vendors can use is so far only available for the UK, US, and Germany. And of course Amazon as a brand may be present in many markets, but it is only really popular in fewer than a dozen countries worldwide.
The Google Assistant so far only works in English; support for other languages is coming shortly, but even once those roll out much of the world will be left without a voice assistant platform that speaks its language. Apple’s Siri, at least on iOS, supports many more languages, but it’s not yet clear which of them HomePod will support, and of course Siri isn’t a licensable platform.
Localization Beyond Language
But language isn’t the only localization challenge with voice assistants. These assistants need to understand local accents and idioms, know the right conversions for locally-used measurements, be familiar with television shows, movie stars, and sports figures in each country, and so on. And they need to integrate with relevant local entertainment, information, and other services. That makes expanding into other markets particularly challenging, and it’s yet another reason why most successful voice assistants will be part of broader ecosystems coming from big companies like Amazon, Google, Microsoft, and Apple.
However, that means the broader opportunity for voice speakers is nowhere near as large as for other recently hot consumer electronics categories, with the long tail of cheap Chinese vendors in particular likely to remain largely absent. It’s possible that, with the entry of players like Alibaba into voice speakers and Tencent and Baidu into voice assistants, we’ll eventually see some expansion into lower-cost tiers. But this is likely to remain a highly regionalized market, to a far greater extent than any other recent consumer electronics category.
That’s important because there’s already a false narrative around a global market in voice speakers. Several of the news outlets that covered Alibaba’s announcement this week said it was a competitor to Google Home and Amazon Echo, but of course since those devices don’t work in China and Alibaba’s won’t work outside China, they’ll never actually go head to head.
Business Models Will Vary Too
The other interesting thing about the voice speaker market is that, for at least some of the players, it will be a means to an end rather than a lucrative business in its own right. It’s already clear, for example, that Amazon sees the Echo family and the Alexa platform as an opportunity to sell more stuff on Amazon.com, while Google plans to use advertising to create additional revenue streams on the somewhat cheaper Home. Apple, meanwhile, will take its usual tack of monetizing the complete package of hardware and software, though it will likely see some uplift in services like Apple Music off the back of HomePod sales as well.
In China, meanwhile, we’ll likely see these and other business models play out, with Alibaba’s device named after one of its popular online stores, and Baidu’s and probably Tencent’s efforts likely to be more ad-focused. All of this will lead to different pricing strategies for the hardware itself, with the early Chinese examples hitting price points roughly half those of the two early leaders in the US market, and Apple in turn pricing its premium speaker at roughly double those devices.
This is going to continue to be a fascinating market to watch unfold, one that won’t necessarily follow any of the established patterns from other recent hot devices. It will be more regionalized – even balkanized – and more varied in the business models than other device categories. And as a result we’ll likely see several major players taking leading positions in different regions around the world, rather than global winners as in smartphones, tablets, or PCs. Over time, we’ll certainly see the usual thinning and consolidation as some winners do emerge and smaller players fail to gain traction, but in the meantime it feels like we’re going to see lots more new entrants and interesting devices and business models.
Late last month I wrote about the five major industries impacted by the iPhone. I listed the PC, telecom, music, TV, and health industries, which the iPhone helped change; in almost all of these cases, it even forced them to change their business models.
What a difference a year makes! That is usually a statement we can make when looking at technology adoption, either because in a year a technology is history or because it has become a vital part of our lives. Sadly, when it comes to digital assistants and the interactions consumers are having with them, a year has not made much of a difference at all.
In June 2017, we at Creative Strategies surveyed 1,100 US consumers between the ages of 18 and 65 and found that penetration of digital assistants has not grown since our previous survey, conducted in May 2016: only 66% of consumers are using one. Among those users, 63% use Apple’s Siri regularly, 23% use Google Assistant, and 10% use Amazon’s Alexa. As is to be expected, usage is proportional to the technology savviness of the users, so among early tech adopters usage is at 96%, and among early mainstream users it reaches 76%.
The Chicken and Egg Problem
The industry is obsessed with determining who is ahead in artificial intelligence and whose assistant is smarter. Consumers, however, do not seem to be asking much of today’s digital assistants.
Alexa reached 15,000 skills just the other day, and Google Assistant and Siri have been growing in the range of tasks they can perform. Yet consumers are turning to them to ask the same things as they did last year: searching the internet, setting alarms, playing songs, asking for directions, and checking the news. What is encouraging, however, is that while searching the internet is still the primary task, all the others have grown in popularity compared to a year ago, showing that consumer confidence might be growing.
When I start to dissect the data, however, I do wonder whether consumers are playing it safe and asking each assistant for what is a core competence of its maker or fits the primary use case. So Google Assistant is used for search and navigation, Alexa is asked to set alarms and play songs, and Siri is asked for directions, to call someone, or to set a timer.
Since we are still in the early stages of these relationships, it is natural to ask for things we know will end in a positive exchange. It is also good for confidence building to ask for tasks we are confident our assistant will get right. It does, however, raise the question of how this is impacting new feature and skill discovery and adoption in the long run, and consequently how it will impact the return brands will see on the investment they are making in digital assistants.
The Value Proposition Problem
Consumers who are not using a digital assistant say either that they are more comfortable typing (33% – rejecting voice-first interaction rather than the assistant itself) or that they tried a few times, had to repeat themselves (20%), and did not use it again. Force of habit is also hurting digital assistants: 19% of consumers forget to use voice even though they know they can. Interestingly, some consumers also think we are getting too lazy, while others say they cannot bring themselves to actually learn how to use a digital assistant.
Consumers who do use digital assistants regularly seem to be pretty satisfied with their performance, with Google Assistant showing the highest percentage of very satisfied users.
Siri users trail both Google Assistant and Amazon Alexa users when it comes to the share who are very satisfied, which to me speaks to why Apple’s decision to make HomePod about music and sound first was the right thing to do.
I have discussed before how I expect Apple to pivot with Siri when HomePod gets to market. We can debate whether or not Siri is lagging in capability compared to Google Assistant and Alexa. What matters is that non-users might think so and current users are not as satisfied as they could be. Apple coming out at WWDC and promising a better Siri for HomePod would not have helped the situation.
28% of consumers we surveyed are interested in a HomePod, and another 16% said they are planning to buy one. These numbers go up considerably with early adopters where 60% say they are very interested and 42% are planning to buy one.
46% of current Siri users are interested in HomePod, and another 29% are planning to buy. Even among consumers in our survey who were unsatisfied with Siri, interest in HomePod is as high as 43%, with another 30% planning to buy. What grows in this segment is the need for consumers to be convinced that Siri will be better. Convinced, not promised! 34% said they need to be convinced that Siri has improved before they will consider buying HomePod. This number declines to 17% among Siri users overall.
Price is the biggest deterrent, as 17% of consumers say HomePod is too expensive. Consumers are not looking at the competition, as only 4% would buy an Amazon Echo or a Google Home instead. Owning a competing device is also not stopping consumers from being interested in HomePod: only 4% and 3%, respectively, said their lack of interest in HomePod is rooted in their contentment with their Amazon Echo or Google Home.
Apple will still have to deliver on Siri, not for HomePod’s sake but for Apple’s sake. Promising and delivering on what consumers can understand and easily assess is the same smart move Apple made with Apple Watch and fitness. The value the rest of the device brings to users will be personal and will grow over time. This strategy is paying off with Watch, and it will do the same with HomePod.