How Will a Cashless Society Impact the Cash-Dependent?

This week, I was driving in my neighborhood when I spotted that most American of sights: a bunch of kids running a lemonade stand, waving signs and trying to flag down passing cars. In some ways, it seemed like a great business opportunity – the temperatures where I am have rarely dipped below the high 90s Fahrenheit lately. And yet I didn’t stop – not because I don’t like lemonade (or kids), but because I simply don’t carry cash anymore, and I’m fairly sure the neighbor children weren’t taking credit cards. That got me thinking about all the people and sectors of our economy which are still dependent on cash, and how they might be affected by our increasingly cashless society.

Cash is in Decline

Whether anecdotally or based on solid data, I think most of us have a sense that cash is in decline. One study from last year suggests that cash is the preferred payment method of just 11% of US consumers, with 75% preferring cards. In other markets such as China, cash is dying out even more quickly, with mobile payments increasingly eating into both its share and that of cards. Though my local dry cleaner in New Jersey was a rare (and suspicious) exception, I very rarely come across businesses that don’t take cards, to the extent that it now really takes me aback when it happens. For many of us these days, credit and debit cards and to a lesser extent mobile payments are making cash largely irrelevant. I still have a huge jar of loose change I accumulated over many years and which now mostly gets used for the occasional school lunch or visits from the tooth fairy, but not much else.

But not for Everyone

However, assuming that this pattern holds for everyone would be a mistake. There are still big sectors of the economy, and large groups of people, who remain heavy users of cash and heavily dependent on it, and as others move away from it, that’s increasingly going to cause them problems. Sadly, this likely applies most to some of the more vulnerable and marginalized parts of our society, who will be least in a position to make the changes necessary to keep up as the rest of society moves on.

Here are just a few examples of people or businesses still dependent on cash:

  • Homeless people and others who ask for money on the streets
  • Charity workers soliciting cash donations in public areas
  • Manual and casual laborers who get paid in cash, either for convenience or for tax reasons
  • Cab drivers
  • Those who don’t have bank accounts or credit cards, including many without regular incomes
  • The elderly
  • The very young, also unlikely to have bank accounts
  • Anybody who works based on tips, from waiters and waitresses to maids and bellhops in hotels to valet parkers
  • Small local retailers and restaurants that can’t justify high credit card processing fees on mostly small purchases

The list could go on much longer than that, but the point is that there are those who are in some cases heavily dependent on cash and relatively powerless to make the changes necessary to keep up. These are often among the poorer and least educated people in our society, and therefore those with least access to technology, the traditional banking infrastructure, or information about how to adapt.

Tech Has Offered Partial Solutions

The tech industry has offered partial solutions, but mostly in self-serving ways. Payment processing company Square has transformed many a small retailer or producer from a cash-only business to one that can take credit cards and even Apple Pay, and created ways for those without traditional cards to carry balances and make payments with their phones. Amazon has introduced methods for those who deal mostly in cash to obtain one-off or refillable cards to be used to pay for things on its site. Venmo has turned erstwhile cash transfers into electronic payments. But these solutions mostly tear down limits to the addressable markets for their own products, without necessarily expanding economic opportunity or promoting inclusion, while also often being based on internet and mobile technology not available to all.

But Needs to do More

What we need are solutions for the rest of society, and especially ways for those without access to the internet and phones to receive non-cash payments. What about a service that allows patrons or would-be donors to set up a transaction in an app, and allows the recipient to walk into a bank or store and pick it up in cash with a privately shared code? Or an app that allows users of basic smartphones to receive payments and carry a balance without creating an ongoing relationship with the payer? What about a service that would provide meals, access to beds and other facilities, or other needed items to the homeless based on donations from smartphone users? Technology has enormous potential to reduce friction and make payments simpler, but what we need are innovations that do the same on the receiving end, including in ways that don’t themselves require technological solutions.

I feel like calling on the tech industry to step up to big societal problems has been something of a theme lately in my columns, but I can’t help but think that this is yet another area where those already most on the fringes of society will just be left further marginalized by technology rather than brought into the fold by it. It doesn’t need to be that way: the bright minds who have created so many technologies that help us deal with our “first world problems” can surely find ways to help those with more biting and pressing challenges as our society continues to evolve.

How will Our Screen Addiction Change?

A Nielsen Company audience report published in 2016 revealed that American adults devoted about 10 hours and 39 minutes each day to consuming media during the first quarter of 2016, an increase of a full hour over the figure recorded for the same period of 2015. Of those roughly 10½ hours, about 4½ a day are spent watching shows and movies.

During the same year, the Deloitte Global Mobile Consumer Survey showed that 40% of consumers check their phones within five minutes of waking up and another 30% check them within five minutes of going to sleep. On average, we check our phones about 47 times a day; that number grows to 82 times for those in the 18-24 age bracket. In aggregate, US consumers check their phones more than 9 billion times per day.
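Those figures hang together reasonably well. As a rough back-of-the-envelope check, using only the numbers quoted above (the implied user base is my own derived estimate, not a survey figure), a couple of lines of Swift make the arithmetic explicit:

```swift
// Back-of-the-envelope check of the survey figures quoted above.
let aggregateChecksPerDay = 9_000_000_000.0   // "more than 9 billion" checks per day across the US
let averageChecksPerPerson = 47.0             // average checks per person per day

// The number of US smartphone users implied by combining those two figures.
let impliedUsers = aggregateChecksPerDay / averageChecksPerPerson
print("Implied US smartphone users: \(Int(impliedUsers / 1_000_000)) million")   // ≈ 191 million
```

Roughly 190 million users is in the same ballpark as most estimates of the US smartphone base at the time, so the per-person and aggregate numbers are broadly consistent.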

Any way you look at it, we are totally addicted to screens of any form, size, and shape.

While communication remains the primary reason we stare at our screens, there are also tasks we used to perform in an analog way, such as reading books or documents, playing card or board games, drawing, and writing, that are now digital. And, of course, there is content consumption. It all adds up to us spending more time interacting with some form of computing device than with our fellow human beings in real life.

I see three big technology trends in development today that could shape the future of our screen addiction in very different ways: ambient computing, Virtual Reality and Augmented Reality.

Ambient Computing: the Detox

Ambient computing is the experience born from a series of devices working together to collect inputs and deliver outputs. It is a more invisible form of computer interaction, facilitated by the many sensors that surround us and empowered by a voice-first interface. The first steps of ambient computing can be seen in connected homes, where wearable devices function as authentication tokens to enable experiences such as turning the lights on or off or granting access to buildings or information. The combination of sensors, artificial intelligence, and big data will allow connected and smart machines to be much more proactive in delivering what we need or want. This, in turn, will reduce our need to reach for a screen to input or visualize information. Screens will become more a complement to our computing experience rather than the core of it.

To get a feel for how this might impact average screen time, think about what a device such as a smartwatch does today. While a screen is still involved, it is much smaller and it shows the most important and valuable information to you without drawing you into the device. I often talk about my Apple Watch as the device that helps me manage my addiction. It allows me to be on top of things without turning each interaction into a 20-minute screen soak. Another example is the interaction you might have with Google Home or Alexa when you inquire about something. Today, for instance, I asked for the definition of “cabana” because my daughter wanted to know. I got what I needed in less than 30 seconds: “a cabin, hut, or shelter, especially one at a beach or swimming pool.” Had she gone to Google Search to find the definition, I guarantee it would have taken a good 10 minutes between reading through the results and looking at pictures, and the screen would not have made the search any more effective.

While not a total cure, ambient computing could provide a good detox program that will allow us to let go of some screen time without letting go of our control.

Virtual Reality: the Ultimate Screen Addiction

Virtual Reality sits at the opposite end of the spectrum from ambient computing, as it offers the ability to lose yourself in the ultimate screen experience. While we tend not to talk about VR as a screen, the reality is that, whatever experience you are having, it is still delivered through a screen: one that, rather than being on your desk, on your wall, or in your hand, is on your face through a set of glasses of various shapes.

I don’t expect VR to be something we turn to for long periods of time, but if we have ever complained about our kids or spouses having selective hearing when they are gaming or watching sports, we have another thing coming!

There is talk of VR experiences shared with friends, but if multiplayer games are anything to go by, I expect those “share with friends” moments to be a minor part of the time you will spend in VR. With VR being so much more immersive, I think being in an experience with someone you do not know, as you might be in traditional gaming, could feel more involved and overwhelming. Coordinating with actual friends might be effort worth making if you are experiencing a music or sports event, but maybe not so much if you are just playing a game.

Escapism, the desire to be cut off from reality for a while, will be the biggest driver of consumer VR.


Augmented Reality: the Key to Rediscovery

Augmented Reality is going to be big, no question about it. Now that Apple is making it available overnight to millions of iPhones and iPads as iOS 11 rolls out, consumers will be exposed to it and engaged with it.

What I find interesting is the opportunity AR has to reconnect us to the world around us. If you think about it, the big excitement around Pokemon Go was that people went outside, walked around, and exercised. Because humans do not seem to be able to do anything in moderation, that feel-good factor vanished quickly as people paid more attention to the little creatures than to where they were going, culminating in incidents of trespassing, injuries, and fights!

That said, I strongly believe there are many areas where AR can help with our rediscovery of the world around us, from education to travel to nature. Think about Google Translate and how it helps lower the barrier to entry for travel in countries where you do not speak the language.

The trick will be not to position the AR experience as the only experience you want to have. AR should fuel an interest to do more, discover more, experience more.

Of course, developers and ecosystem owners are driven by revenue rather than the greater good of humanity. Yet I feel that the level of concern about the impact technology is having on our social and emotional skills has lately grown enough to spur real interest in driving change.

Ultimately, I believe that our addiction to screens is driven by a long list of wrong reasons. Our obsession feeds off our boredom, our feeling that idle time is unproductive time, and the false sense of safety of connecting to others through a device rather than in person. New technologies will offer the opportunity to change if we really want to.

Tech in the Heartland

Having just spent a few weeks vacationing in a part of the US where an abundance of corn and limestone-filtered water, along with a predilection for distilled beverages led to the creation of our country’s most famous native spirit—bourbon—I’ve regained a sense of life’s priorities: family, food and fun. (And for the record, the Kentucky Bourbon Trail is a great way to spend a few days exploring that part of the world—especially if you’re a fan of the tantalizing golden brown elixir.)

Of course, while I was there, I also couldn’t help noticing what sort of technology was being used (or not) and how people think about and use tech products in that part of the country.

Within the many distilleries I visited, the tech influence was relatively modest. Sure, there were several temperature monitors on the mash and fermentation vats, a few industrial automation systems, and I did see one control room with multiple displays and a single Dell server rack that monitored the process flow of one of the largest distilleries, but all-in-all, the process of making bourbon is still decidedly old school. And for the record, that just seems right.

As with many traditional industries, the distilled spirits business has begun to integrate some of the basic elements of IoT technologies. I have no doubt that it’s modestly improving their efficiency and providing them with more raw data upon which they can do some basic analysis. But it also seems clear that there are limits to how much improvement those new technologies can make. With few exceptions, the tools in place appeared to be more focused on codifying and regulating certain processes than really driving major increases in production.

Ensuring consistent product quality and maximizing output are obviously key goals across many different industries, but the investments necessary to reach those outcomes, and the return on those investments, aren’t necessarily obvious for any but the largest companies in these various industries. And that’s a challenge that companies offering IoT solutions are going to face for some time.

What became apparent as I observed and thought about what I saw was that the technology implementations were all very practical ones. If there was a clear and obvious benefit, along with a comfort factor that made using it a natural part of the distillation, production, or bottling process, then the companies running the distilleries seemed willing to deploy it. And if not, well, that’s why there are still so many traditional manufacturing processes in place.

That sense of practicality extended to the people I observed as well. People I saw there were using products like smartphones and other devices as much as people on the coasts—heck, my 93-year-old mother-in-law has an Amazon Echo to play her favorite big band music, uses an iPad every day to play games, and maintains a Gmail account to stay in touch with her children, grandchildren, and great-grandchildren—but the emphasis is very much on the practical, tool-like nature of the devices.

I also noticed a wider range of lower-cost Android phones and fewer iPhones being used. Of course, much of that is due to income discrepancies. The median household income in the commonwealth of Kentucky is $43,740, which is 19% lower than the US median of $53,889 according to the latest US Census Bureau data, and only a little more than half the San Francisco county median income of $81,294. Given those realities, people in many regions of the US simply don’t have the luxury of getting all the latest tech gadgets whenever they come out. Again, they view tech products as more practical tools and expect them to last.
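For what it’s worth, those comparisons check out as simple arithmetic on the dollar figures quoted above; nothing here is additional data, just a quick recomputation:

```swift
// Quick check of the income comparisons quoted above (dollar figures from the text).
let kentuckyMedian = 43_740.0
let usMedian = 53_889.0
let sanFranciscoMedian = 81_294.0

let belowUS = (1 - kentuckyMedian / usMedian) * 100            // ≈ 18.8%, i.e. roughly 19% lower
let belowSF = (1 - kentuckyMedian / sanFranciscoMedian) * 100  // ≈ 46%, a little more than half
print("Kentucky vs. US median: \(Int(belowUS.rounded()))% lower")
print("Kentucky vs. San Francisco median: \(Int(belowSF.rounded()))% lower")
```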

There’s also a lot more skepticism and less interest in many of the more advanced technologies the tech industry is focused on. Given the limited public transportation options, cars, trucks and other forms of personal transportation are extremely important in this (and many other) part(s) of the country—I’m convinced I saw more car dealership ads on local TV and in local newspapers than I can recall seeing anywhere—but there’s absolutely zero discussion of any kind of semi-autonomous or autonomous driving features. People simply want good-looking, moderately priced vehicles that can get them from point A to point B.

In the case of AI and potential loss of jobs, perhaps there should be more concern, but from a practical perspective, the bigger worries are about factory automation, robotics and other types of technologies that can replace traditional manufacturing jobs, which are more common in many parts of middle America.

Also, the idea that somehow nearly everything will become a service seems extraordinarily far-fetched in places similar to where I visited. That isn’t to say that we won’t see service-like business models take hold in major metropolitan areas. However, it’s much too easy to forget that most of the country, let alone the world, is not ready to accept the idea that they won’t really own anything and will simply make ongoing monthly payments to untold numbers of companies providing them with everything they need via an equally large number of individual services.

As Facebook’s Mark Zuckerberg has started to explore, an occasional look outside the rose-colored bubble of Silicon Valley can really help shape your perspective on the real role that technology is playing (or might soon play) in the modern world.

AMD Puts More Pressure on Intel with Threadripper and Ryzen 3

With the release of the Zen architecture and the Ryzen product family for consumer PCs, AMD started down a path of growth in the processor market it has been absent from for basically a decade. The Ryzen 7 processor family directly targets the Intel Core i7 line of CPUs that has been incredibly dominant, and turns the market on its head by doubling core and thread counts at like price points. The platform surrounding the CPU was modernized, leaving very little on the feature list that AMD couldn’t match to Intel’s own. With the Ryzen 5 launch a few weeks later, AMD continued the trend, releasing processors with higher core and thread counts at every price bracket.

More recently, the EPYC server and data center processor marked AMD’s first entry into the enterprise market since Opteron, a move that threatens at least some portion of the 99.5% market share that Intel currently holds. By once again combining higher core counts with aggressive pricing, EPYC will be a strong force in the single and dual-socket markets immediately, leaving the door open for further integration with large data center customers that see firsthand the value AMD can offer compared to the somewhat stagnant Xeon product family.

Though reviews aren’t launching for another couple of weeks, on Thursday AMD showed all of its cards for the summer’s hottest CPU launch, Ryzen Threadripper. With a hyper-aggressive naming scheme to go along with it, Threadripper will be a high-core-count processor and platform, based on the EPYC socket and design, targeting the high-end desktop market (HEDT) that Intel has had to itself for nearly that same 10-year window. Intel was the first to recognize the value of taking its Xeon product family, trimming features slightly, and selling it to eager PC enthusiasts who want the best of the best. Families like Broadwell-E, Sandy Bridge-E, and most recently Skylake-X, released in June, have dominated this small but very profitable segment of the market for overclockers, extreme gamers, and prosumers who need top-level multi-threaded performance.

CEO Lisa Su and CVP of marketing John Taylor took the wraps off the clocks, core counts, and prices in a video posted on the company’s YouTube page, along with a blog post from SVP of compute Jim Anderson, showing confidence in AMD’s message. Available in early August, Threadripper will exist as a 12-core/24-thread 1920X with frequencies as high as 4.0 GHz for $799 and as a 16-core/32-thread 1950X hitting the same 4.0 GHz for $999. No doubt these are high prices for consumer processors, but compared to the competing solutions from Intel, AMD is pricing them very aggressively, following the same strategy that has caused market disruption with the Ryzen 7 and 5 releases. Intel’s $999 Core i9-7900X is a 10-core/20-thread part, putting it at a disadvantage for multi-threaded workloads despite having an advantage in single-threaded performance based on architectural design.

Impressive speeds aside, what does this mean for AMD as we get into the heat of summer? I expect Ryzen Threadripper to be a high-demand product compared to the Skylake-X solution, giving AMD the mindshare and high margin space to continue seeing the benefits of its investment in Zen and the Ryzen family. Intel had already reacted to the Ryzen 7 launch with price drops and adjustments to the timing of Skylake-X but arguably not to the degree necessary to maintain price-to-performance leadership across the board. Threadripper will offer heavy multi-taskers, video editors, 3D animators, and other prosumer style users a better solution at a lower price point based on performance estimates.

AMD did release one benchmark metric for us to analyze until full reviews come out in August. Cinebench R15 is an industry-standard test that runs a ray-traced rendering pass on a specific data set, timing it to generate a higher-is-better score. The Core i9-7900X, the current flagship part from Intel that sells for $999, generates a score of 2186. The upcoming Threadripper 1920X (12 cores) scores 2431, about 11% higher than the Intel processor that costs $200 more. The like-for-like $999 Threadripper 1950X (16 cores) scores 3046, a full 39% faster than what Intel currently has on the market.
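Those percentages follow directly from the published scores. As a quick sanity check in a few lines of Swift (the scores are AMD’s own disclosures quoted above, not independent measurements):

```swift
// Sanity check of the Cinebench R15 comparisons quoted above (scores as published by AMD).
let corei9_7900X      = 2186.0  // Intel Core i9-7900X, $999
let threadripper1920X = 2431.0  // AMD Threadripper 1920X, 12 cores, $799
let threadripper1950X = 3046.0  // AMD Threadripper 1950X, 16 cores, $999

let gain1920X = (threadripper1920X / corei9_7900X - 1) * 100  // ≈ 11.2%
let gain1950X = (threadripper1950X / corei9_7900X - 1) * 100  // ≈ 39.3%
print("1920X vs. 7900X: +\(Int(gain1920X.rounded()))%")  // +11%
print("1950X vs. 7900X: +\(Int(gain1950X.rounded()))%")  // +39%
```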

Intel’s roadmap includes Skylake-X parts with up to 18 cores, but we won’t see them until September or October, and prices there reach as high as $1999.

Along with the high-end desktop announcement of Threadripper, AMD also revealed some details on the Ryzen 3 processor SKUs that will offer direct competition to the high-volume Intel Core i3 family. The Ryzen 3 1200 will have 4 cores, 4 threads, and a clock speed running up to 3.4 GHz, while the Ryzen 3 1300X is a 4-core/4-thread part with a 3.7 GHz peak frequency. The advantages AMD offers here remain in line with the entire Ryzen family – higher core counts than Intel at the same or better price. The Core i3 line from Intel runs 2-core/4-thread configurations, so Ryzen 3 should offer noticeably better multi-threaded performance with four “true cores” at work. No pricing information is available yet, but the parts should be on store shelves July 27th, so we will know soon.

In the span of just five months, AMD has gone from a distant competitor in the CPU space to a significant player with aggressive, high-performance products positioned to target market share growth. The release of Threadripper will stoke the core-count race in consumer devices, enabling further development for high-performance computing, but it also gives AMD an avenue for higher-margin ASPs and the “halo product effect” that attracts enthusiasts and impacts buying decisions for all the product families below it. AMD has a long way to go to get back to where it was in 2006, but the team has built a combination of technology and products that might get it there.

Much Has Changed in Tech in 10 Years…But Much Has Not

There have been numerous columns by thought leaders over the past couple of weeks commemorating the 10th anniversary of the introduction of the iPhone and its impact on consumers, businesses, tech, and various industries. As sort of a companion piece, I’ve been thinking about some other momentous changes in tech over the past decade…as well as some areas we thought we’d be further along.

What Has Changed in a Big Way.

A Computer In Your Pocket. If we had known then what we know now, we would have called smartphones ‘pocket computers’ or some alternative moniker. It’s a phone, camera, media player, navigation device, e-reader, gaming device, health/fitness tracker, etc. – all in your pocket, doing much of what a PC can do, and in many instances more easily, quickly, and nimbly.

Cloud Everything. The other huge growth space over the past ten years has been cloud. It has had an enormous impact on the enterprise market. But one way to think of how cloud has changed things in a big way for the everyday consumer is the fact that if your laptop breaks or you get that dreaded ‘blue screen of death’…it is not ‘cataclysmic’ in the same way it used to be. If e-mail, documents, and media are stored in the cloud (as they should be), the PC has become the device to create content and access your (cloud stored) content.

Broadband Ubiquity. We are now accustomed to having broadband nearly everywhere. It might still be in vogue to complain about the cable company or the relative monopoly in fixed broadband, but broadband speeds have improved steadily, and prices have remained stable. In wireless, LTE is ubiquitous and fast in most places, and data plans allow for nearly unlimited consumption, with some safeguards.

Massive Improvements in Voice Recognition. AI and services such as Siri, Alexa, Google Assistant, Cortana, and so on would not be possible without the significant improvement in voice recognition. It’s gone from barely usable in voice response systems to working with fairly high accuracy, enabling a new wave of apps, devices, and ways to communicate.

Rise of Social Networking. Facebook, Twitter, Snapchat, and LinkedIn have become a huge part of our communications and content creation/consumption fabric. Each has its own purpose, and some are used by certain market segments more than others.

Content Creation and Consumption. Three big changes here. One is the proliferation of content sources and channels (Netflix, Hulu, YouTube, etc.), spanning the full continuum of content quality, from fantastic to awful. Second is how we consume content, with the expectation that all content is available on-demand. And third is the growth of user-generated content, enabled by digital technology and smartphone video cameras.

What Hasn’t Really Changed.

This list applies to aspects of mainstream tech that haven’t really changed or changed less than we would have thought by now.

Still Using PCs. Smartphones are ubiquitous and tablet growth has plateaued. But the ‘post-PC’ era has not materialized as some predicted during the peak of the 2010-14 tablet boom. Most consumers and business professionals still mainly use a PC or laptop as their default device. There’s lots of experimentation with hybrid devices, Chromebooks, and the like, but PCs look like they’re here to stay, in some shape or form, for the next several years.

The E-Mail Experience Has Not Changed. Texting and enterprise messaging solutions such as Slack have proliferated. Voicemail is hardly used by anyone under the age of 30. But e-mail is still the mainstream form of messaging, especially for professional purposes. And nobody has yet found the formula for making e-mail more manageable. There’s some AI, and features at the margins, but most of us still wade through our email (and more and more spam) in the same way we did 10 years ago.

Digital Wallet/Mobile Payments Not Mainstream. Square, Venmo, Apple Pay, and Samsung Pay are all great, in that the technology is there and the user experience is good. But the digital wallet and mobile payments are still not mainstream enough that you really can leave your wallet at home. This is an industry problem, not a technology problem.

TV Hasn’t Really Changed. There has been growth in cord cutting, skinny bundles, and ‘sticks’ to enable access to Netflix, Amazon, and the like. But if anything, the experience of watching TV has become more complex, and each ‘skinny bundle’ offering comes with a major ‘asterisk’ (i.e. no local channels, sports, CBS, etc). This category is still in the ‘going to get worse before it gets better’ phase, like living through a house renovation.

Broadband Remains a World of Haves and Have Nots. Above, I mentioned how fixed and wireless broadband speeds for the average urban/suburban customer have continually improved. But for roughly 15% of U.S. households, and a couple of billion users globally, fast broadband remains elusive. There’s lots of work going on in this area, with folks from Google to Qualcomm to Facebook exploring creative solutions to spread broadband to poorly served areas. But this is tough slogging, and it will remain one of tech’s biggest challenges over the next ten years.

Electronic Medical Records. With smartphones, apps, and digitization of nearly everything, it’s still surprising to me that most doctor visits involve filling out some poorly photocopied form, with one’s personal info and medical history, by hand. Tons of money has been thrown at this, and there’s been some change, but it’s really in pockets.

Apple’s Health and Fitness Push Accelerates as it Turns 3

In what was one of the most packed WWDC keynotes in recent memory, the Apple Watch got under 15 minutes of stage time, and health and fitness features got only a fraction of that. But that’s not really indicative of all the additions to Apple’s health, fitness, and broader wellness features being made this year, and it’s certainly not indicative of Apple’s commitment to the space. I spent some time this week getting briefings about both what’s new in Apple’s own software, and what developers and others are bringing to the party.

Four Key Domains in Health

Apple’s focus in health, fitness, and wellness is clear from the moment you open its Health app – it highlights four key domains for which the app can track data: Activity, Mindfulness, Nutrition, and Sleep.

Apple’s approach to everything it does has always featured hardware, software, and services, and Health is no exception. But in this area, perhaps to a greater degree than elsewhere, Apple relies on third parties, with much of the heavy lifting in three of the four domains listed above being done by outsiders, and Apple focusing mostly on the Activity area with its own products and features. Those third-party contributions are, of course, enabled by tools provided by Apple, mostly in the form of SDKs and APIs which outsiders can use to build software and integrate into Apple’s various systems.

An Increasingly Comprehensive Play in Medical Too

That’s especially true in a fifth domain, which isn’t as visible in that home tab of the Health app, but is nonetheless important to Apple’s efforts in this area, and I’ve called that medical for the sake of distinguishing it from the other domains. The table below illustrates the roles played by Apple’s own first party products, its tools for third parties, and then the products and services provided by third parties in all this work, with areas that will change or have augmented features or functionality this year highlighted in red:

As you can see, at this point the combination of Apple’s own products and features and those provided by third parties is pretty comprehensive, and you can hopefully also see the number of areas where new features in watchOS 4 are enabling new functionality, either in the built-in apps and hardware or through third parties.

What’s New This Year

I want to drill down briefly on some of the things that are new this year, because they got such short shrift at WWDC but some of them are pretty notable. Here they are in bullet point form:

  • Enhancements to the Workouts app: following the new hardware from last year which introduced GPS and water resistance to the Watch, watchOS 4 adds additional functionality, including more sophisticated tracking of swimming workouts, optimized tracking for High Intensity Interval Training (HIIT), easier switching between workout types and general usability improvements.
  • Changes to the Activity and Breathe apps: each of these apps is getting some subtler changes, with the Activity app getting some smarter coaching which is more personalized than the current more generic reminders and prompts, with some really clever stuff coming here; and the Breathe app getting better explanations of why you might want to take a break and do some deep breathing, in the form of described health benefits.
  • GymKit and connected fitness machines: I saw a demo of the new integration with fitness machines, and this is going to be a big deal for anyone who does gym workouts, which the Watch naturally can’t track as well using GPS or motion sensors. The integration here is very clever, with data flowing both ways, meaning that data from the Watch can show up on the much larger, always-on screen on a treadmill or exercise bike alongside the data it captures itself. I think there’s an opportunity here for deeper integration with iOS devices to replace the corded connection some machines offer today, for things like projecting notifications or videos onto the gym machine, but the fitness-focused integration Apple is starting with here is a great start. Getting these machines quickly adopted in gyms will be the key, and I understand Apple will be talking more about how this is going to happen later in the year.
  • Core Bluetooth on the Watch for more devices: Apple has enabled Core Bluetooth on the Watch for heart monitoring chest straps from the beginning, but not for a broader range of devices. In watchOS 4, it’s opening up Core Bluetooth to other devices too, enabling other body sensors and even medical devices like continuous glucose monitors, as well as connected fitness equipment like tennis racquets or golf clubs (a rough sketch of what this looks like for a developer follows below).
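To make that last point concrete, here is a minimal, hypothetical sketch of how a watchOS 4 app might scan for a standard Bluetooth heart-rate sensor via Core Bluetooth. The class and property names are illustrative, this is not Apple sample code, and error handling is omitted:

```swift
import CoreBluetooth

// Illustrative only: scan for and connect to a Bluetooth LE heart-rate sensor
// (standard Heart Rate service, UUID 0x180D) from a watchOS 4 app.
final class HeartRateScanner: NSObject, CBCentralManagerDelegate {
    private let heartRateService = CBUUID(string: "180D")
    private var central: CBCentralManager!
    private var sensor: CBPeripheral?

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    // Core Bluetooth requires the central to reach .poweredOn before scanning.
    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: [heartRateService], options: nil)
    }

    func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any], rssi RSSI: NSNumber) {
        sensor = peripheral          // keep a strong reference while connecting
        central.stopScan()
        central.connect(peripheral, options: nil)
    }
}
```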

The Medical Domain is Coming Into its Own

Beyond the things Apple itself is doing, I got to see quite a few apps and devices from third parties in my meetings this week, and one of the things that impressed me most was the innovation being done in what I labeled the medical domain above. Between the original HealthKit and the additions since in ResearchKit and CareKit, Apple is enabling some really interesting work by doctors, device vendors, and medical facilities which leverages Apple devices to do things that would otherwise have been impossible or a lot more difficult. Some examples include:

  • The Propeller Health asthma inhaler sensor and accompanying app, which automatically track when the inhaler is used and also invite users to track their symptoms and environmental conditions manually, pulling in third party data. The solution is designed to help increase “adherence” (the faithfulness with which patients adhere to a treatment plan such as using a preventative inhaler daily) and understanding of what triggers symptoms for better management. These products have been available through medical professionals but are now going direct to consumers as well.
  • A WebMD Pregnancy app which includes a ResearchKit component that is allowing researchers to learn a lot about pregnant women and their symptoms, which are surprisingly under-researched, especially in remote and rural areas far away from where most studies are conducted. The app is going to be most useful with women who have access to regular blood pressure and other vital signs measurement, but is already generating a much higher participation rate from rural areas.
  • A Sharp Health app for helping eye surgery patients get ready for and recover from their procedures. The app helps ensure that patients don’t have surgeries canceled because they forget about necessary steps to take beforehand like stopping blood thinners, and again helps with adherence after surgery, as well as giving them options for talking to medical professionals if they have questions during their recovery. The basic model here could certainly be applied to other procedures by other health systems too.

More Work to be Done

As I said earlier, it’s starting to feel like Apple has an increasingly comprehensive set of hardware, apps, tools for third parties and a growing ecosystem of apps and devices from others in this health domain. But in quite a few of these areas, it feels like we’re still just scratching the surface of what can be done, especially in the medical field, where things still tend to move very slowly and where comprehensive electronic patient records are still more of a dream than a reality. But Apple is helping here by providing tools that professionals and companies with the appropriate medical pedigrees and qualifications can tap into, while focusing on what it does best.

One question I had for the Apple folks I talked to was how it decides which domains to play in itself versus leaving them to third parties – for example, it’s added some sleep-related functionality such as the Bedtime feature, but still doesn’t do its own sleep tracking, and doesn’t really have much of a first-party play in nutrition tracking either. The answer I got was the classic Apple one: Apple tends to participate directly in a market only where it feels like it can do something unique and different. For now, that means there are plenty of areas where others are better qualified and equipped to make a difference and provide the features and functionality users need. Discovery of these in the App Store and elsewhere is going to be key for enabling users of Apple’s ecosystem to make the most of all this, and that’s an area where the App Store changes Apple announced at WWDC should help.

Apple is never going to be done in this area, and neither are its partners or its competitors. There’s lots of work still to be done by all these players in a field that I suspect is going to receive increasing attention from the tech industry over the coming years, even as politicians argue over the best ways to manage the funding of healthcare and the structure of insurance plans that will pay for much of this. I’m hopeful that we’ll see much faster change and greater benefits coming on the technology side, and this week I saw promising signs in that direction.

It is not Women in Tech, it is Women in Business We Should talk About

I have been thinking about writing something on women in tech and what we have been witnessing over the past few months, but I had resisted thus far. Last week, however, as a guest on the DownloadFM podcast, I was asked my opinion about the many stories we have read in the press concerning childish CEO behavior and continued allegations of sexual harassment, from Uber to 500 Startups, and I could no longer shy away. After all, I am in tech, I am a woman, and I have an opinion on the topic.

We expect more from Men in Tech

Women face discrimination, chauvinism, and harassment in pretty much any business they are in. For some reason, however, I think the disbelief around some of the stories that have emerged in tech comes from assuming that men in tech would be different, evolved, better. Better than the men who run Wall Street and better than the men on Capitol Hill. That hope is buoyed by the fact that men in tech are by and large well-educated and well-travelled, and they are entrusted with building our future. Men in tech are also by and large white and entitled, and often have poor social skills when it comes to women. Of course, there are exceptions, but they are, alas, exceptions.

You start to believe it is You, not Them

I have been a tech analyst for 17 years, and while I have seen more women in tech, I still get excited when there is a line for the ladies’ bathroom at a tech conference. I still pay attention to how long it takes for a woman to be on stage at those tech conferences. And while it seems that all the big corporations have increased the number of women on stage, if you pay attention, you notice that most of those women are performing demos and are not upper management.

When I got pregnant with my daughter, female as well as male colleagues told me that my priorities would change and I would not work as hard. I was expecting it from my male colleagues, but it was disappointing to hear from my fellow female colleagues that I was expected to want to do less. The implication, of course, was that if I did not feel that way, I was a bad mother.

On many occasions, I was told I was emotional; I was asked if it was that time of the month; I was told to grow a pair. In meetings, I have been interrupted and talked over by endless male colleagues, mistaken for my colleague’s secretary, and outright ignored after making the mistake of serving coffee to meeting guests. At the start of the smartphone market, I was handed pink phones with a lipstick mirror. I’d love to ask Walt Mossberg if he ever reviewed one of those! On Twitter, after complimenting an actor’s launch of a tech product, I was told I was “throwing my knickers” at him. I have been the token woman on tech panels, and I was invited as a guest on a radio show because “the audience responds better to women talking tech.” And the list goes on.

Things like this happen all the time to many women. They happen so often that you start to think it is the norm, or that you are reading it wrong and taking it personally. Whether you think it is wrong or not becomes irrelevant though when you consider how hard you worked to get to where you are and how much further you want to go. So, you ignore it, you smile, and move on. You do what Irish reporter Caitriona Perry did in the Oval Office a few weeks ago.

 

Avoiding Discrimination 3.0

If things have not changed up to now, why is it important that they do? Why does it matter so much that men in tech understand enough is enough? Because of what is going to happen when everybody in the room looks alike and behaves the same way. And of course, this applies to gender as well as to race, religion, and politics.

We are at a time when we are training machines to think like us. What a scary thought when it comes to women in business. What will happen when machines consider physical and psychological traits based on the beliefs that dominate society today? What if men who claim they did not know it is not normal to make advances in work situations train computers to think it is normal too? Will women be denied roles a priori based on the belief that “it’s much more likely to be more talking” if too many women are part of the board? Are we really building a better society if we move from paying a woman by the hour for sexual favors to buying an AI-enabled doll that will respond to its master just the way a male engineer designed it? What will happen if self-driving cars are taught that a woman is more dispensable than a man in life-and-death situations?

We can rejoice at having female emojis with more professions, and we should. We should continue to foster STEM among female students, but know that just because they can do the job does not mean they will be given the opportunity to do it. Let’s lean on the strong female role models we have. Let’s be supportive. Let’s have each other’s back. A smart woman said recently that we should not just be happy to be in the room where it happens. We should be sitting at the table making it happen. So, let’s do that: let’s stop thinking it is us, let’s stop thinking it is normal, and let’s get a seat at the table.

The Path to AR for the Masses Will Go Through the Smartphone

Apple’s ARKit has been getting a lot of attention, and for good reason. In fact, if you want to keep a close eye on all the really interesting, cool, and innovative things developers are already trying with ARKit, follow the Twitter account called Made with ARkit. It has been finding and tweeting a number of really interesting applications and experiments as developers get to know Apple’s ARKit. Two observations about ARKit have stood out to me.

The first is the extremely high praise it has been getting, even from many of Apple’s harshest critics. I’ve seen folks with large followings on Twitter who have been very vocal about their dissatisfaction with Apple’s tools, new OS features, and more, sing Apple’s praises over ARKit. The consensus, not just from developers but even from Apple’s harshest technical and developer critics, appears to be that Apple nailed ARKit and knocked this one out of the park.

The second observation is how excited many developers seem to be about ARKit and the tools themselves. I can’t keep track of the number of tweets from developers remarking on how easy, quick, and natural a fit with their existing development workflow adding ARKit features to their apps has been. It has only been a few weeks, and already a number of big and small developers are going full speed to bring a plethora of apps, with experiences never seen before, to the iPhone this fall.
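To give a sense of why developers describe it that way, here is a minimal, hypothetical sketch of adding a world-tracking ARKit session to an existing view controller. The class and property names are illustrative rather than drawn from any particular app:

```swift
import UIKit
import ARKit

// Hypothetical minimal example: an existing view controller that adds an
// ARKit world-tracking session to an ARSCNView. Names are illustrative only.
final class ARDemoViewController: UIViewController {
    private let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking requires an A9 chip or later; bail out gracefully otherwise.
        guard ARWorldTrackingConfiguration.isSupported else { return }
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal   // detect horizontal surfaces
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```

A handful of lines like these is broadly the starting point for the kinds of demos circulating on Twitter: camera passthrough with tracked horizontal planes onto which digital objects can be placed.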

As much as we have been studying and tracking augmented reality, even I did not expect it to move this fast, not just in how quickly developers are embracing it but in the incredibly compelling use cases we are seeing them build into their apps, ones I can actually see real humans using and finding value in. We are, no doubt, about to see a whole new app development era that turns our phones into windows to view and engage with digital objects and the physical world at the same time.

It should come as no surprise that this new app development era will take place on the smartphone first. This is a device everyone already has, and it includes the core technology, from processor and GPU to image sensors and more. These experiences have always felt like a natural extension to existing apps for things like commerce, travel, education, games, etc. The key has always been getting a critical mass of developers to take hold of the technology, make it their own, and experiment in new ways. This is what we are on the cusp of watching happen.

It should also come as no surprise that this AR rush of apps and development will happen on iOS first, and in some cases there will be many iOS-only AR experiences. Apple has not only the best and most valuable customers, willing to buy and engage with these new apps, but also the most robust and creative developer community of any platform. Apple also has a unique hardware advantage in its control over hardware fragmentation. Apple can guarantee developers a smaller set of hardware variables to develop for, in things like CPU, GPU, camera, and camera sensor, which allows a much larger active installed base of devices to take advantage of these innovations. ARKit apps will be supported on all A9 and A10 devices. This gives developers hundreds of millions of devices that can access their apps from day one. No other platform on the planet can offer this massive reach to developers. You can be sure they will maximize it in every way possible.

I also feel consumers are ready; as I pointed out in this article, AR apps and experiences will sneak up on people. This goes beyond something as simple as Pokemon Go and strikes at the deeper reality that many people experience some form of augmented reality today and just don’t realize it. Interestingly, I came across a survey which asked people what types of apps they had used in the last 30 days, and 3.9% said an augmented reality app. If we counted just Snapchat and Pokemon Go alone, that number would be significantly larger than 3.9%, which makes my point: most people don’t realize or associate what they are using with augmented reality. I also don’t think consumers are going to hunt for AR apps on the basis of AR itself. Rather, I think people will discover new things their apps can do, and AR will add value to many existing experiences they have today. Of course, there will be some new games and app types, but it will be the app experience and the features AR enables that drive consumers to download an app, not the fact that it classifies as “augmented reality.”

Lastly, and I thought this was a controversial opinion until I tweeted it and a bunch of folks responded saying it wasn’t: I think Android is really going to struggle with augmented reality for a while. We can use Project Tango as an example, and I am not surprised developers didn’t embrace it en masse. What is going to hit Android hard here is the hardware fragmentation that exists on the platform. It is going to take a few years, at least, for Android devices to have a critical mass of hardware capable of even remotely decent AR experiences. This means developers won’t waste resources on the platform for some time, since they won’t have a large enough potential customer base to go after. Big apps like Yelp, Facebook, Snapchat, etc., will bring these features to their apps, but many devices won’t support the more cutting-edge features. The bottom line is that Android’s tough development environment, a consequence of the thousands of device and hardware configurations, is going to make it a difficult AR platform. This opens the door somewhat for Microsoft and Windows as a second platform for AR development, but minus a mobile phone OS, that could be limited to more semi-mobile or fixed AR experiences. The point remains that Microsoft will be a more developer-friendly platform for AR than Android in the short term, so it will be interesting to see how it moves that forward and attracts software developers for Windows mixed reality/augmented reality experiences.

My Test Drive in a Self-Driving Car

One of the most fascinating areas of research we get to do these days is to look at the technology behind self-driving cars and try and make sense of this new thrust in automated vehicles.

Like most of us researching this field, we now believe that self-driving cars will, over time, drastically reshape the way we use automobiles and move more and more people to either some type of ride-hailing transportation model or actual ownership of a self-driving car themselves.

Although this transition may take as many as 20-25 years to move the majority of people to using these types of automated vehicles for their personal transportation, it really is just a matter of time before it happens.

At the moment, this concept is pretty radical to most people, and most are highly reluctant today to turn over the driving of a car they are in to an automated robot driver. Of course, self-driving cars are not actually ready for prime time, even if the technology to deliver them is on the horizon. Most car companies believe they can have fleets of vehicles ready for many major cities to use in an on-call model by 2020-2021, in which a person can call up a self-driving car at will and it picks them up and takes them to their destination. The reality is this is only 3-4 years away. And they also tell me that people will be able to buy fully automated vehicles for their own use as early as 2022-2024.

Like most people who have had control of the wheel of a car all their lives, I too was reluctant to go on a test drive in a self-driving car, even though I wanted to experience what it is like, understand how it works, and get a grasp of its ultimate potential. The opportunity recently came up for me to do this type of test drive as part of my work with the State of Hawaii and its current Governor, David Ige. I first got involved with helping Hawaii in the late 1990s when the governor at the time, Benjamin Cayetano, asked me to help work on a program to entice tech companies to Hawaii. Under his leadership, Hawaii passed a special law to give tax incentives to tech companies who would set up offices in the Islands, with the hope of getting more IT students from the islands employed at home instead of having them go to the mainland for jobs. The program was only mildly successful and unfortunately did not meet the real objectives they had hoped for.

During that time I got to meet and work with David Ige, who was a state senator at the time and, as an electrical engineer, was very helpful in getting this bill passed. He is now the Governor of Hawaii, and over the years he and I have had various conversations about what is hot in tech and Silicon Valley. Since he became governor, I have visited him at his office at least once a year to talk about the world of technology and things I believe will impact the State of Hawaii. In my meeting with him last March, I shared what was happening in the area of self-driving cars, something he and his transportation folks were already looking at closely. During our talk, I suggested that the next time he came out to Silicon Valley, he and I visit some of the major players creating the brains behind self-driving cars, as well as Google, a major force in promoting and creating self-driving car technology and designs.

So in early April, during a scheduled trip to San Francisco, he carved out an afternoon and we went to visit Nvidia and Google to get an update on where things stand with automated vehicles. My key objective was to give the Governor a better idea of what is happening in this area and get him thinking about creating a plan for the State of Hawaii to allow testing of self-driving cars soon, as well as to start work on the state and local laws that will eventually be needed to govern self-driving cars in the Hawaiian Islands.

It was during our visit with Google’s Waymo group that he and I were given a test drive in a Waymo vehicle and got a chance to experience a self-driving car in action. This was fascinating and enlightening, and it made clear to us that the technology to deliver automated vehicles is much closer to reality than many believe. In our test drive, there was a person in the driver’s seat who just pushed the button to start the car and set it in motion. They had put in all of the driving details before we got to the car and, once started, the car took off on the designated route. During that time the driver never touched the steering wheel, brakes, or accelerator, and the car navigated every traffic light accurately, stopped for pedestrians in crosswalks automatically, and even stopped quickly when a cyclist cut in front of us.

In the right seat was another person with a laptop showing us what the car was seeing. The view was what the cameras and sensors saw and how the car was using them to navigate the road ahead, and it made clear that this vehicle was pretty much all-seeing and all-knowing, sensing every lane line, stoplight, and moving object in a full 360 degrees around us as we cruised the streets of Mountain View, CA.

Taking a test drive in a self-driving vehicle and seeing not only how it works but also how flawlessly the technology behind it performed more than convinced me that the technology itself is ready to deliver on the promise of an automated vehicle sooner rather than later.

But it also made me understand that, besides the regulatory issues that have to be solved at the federal, state, and city levels, as well as the many other things that have to be done at the technology level before we get these types of self-driving vehicles on our streets, convincing people to trust a self-driving car to ferry them around may be a tough sell. I received my license when I was 16 years old and have driven cars and motorcycles ever since. They are a very familiar form of transportation for me, and after decades of practice, I consider myself an accomplished driver. I suspect that for most people over 30, driving has become second nature, and being in control is something we like about the driving experience.

Of course, the fact that we can’t control the actions of others is why self-driving vehicles make so much sense. As I saw in the Waymo example, the technology employed in an automated vehicle has 360 degrees of sight as well as sensors that could anticipate the cyclist I mentioned above and stop well in advance of hitting this person. In essence, this automated car is much smarter than a human driver: it can act faster, with greater knowledge of the car’s surroundings, and respond quickly to almost any situation it encounters.

I still believe it will take a lot of convincing before most drivers are willing to give up control of their vehicles and fully trust a self-driving car. If you are a technology early adopter, as I am, then perhaps you will be willing to jump into a self-driving vehicle and let it take you away. In fact, it will be the early adopters who first let a self-driving car serve as their robot chauffeur. And for seniors and those with conditions that keep them from driving, a self-driving car would be a godsend at any age, giving them the flexibility to go anywhere they want once these types of cars hit the road.

However, even if these cars are on the road in fleet service by 2020 and available to purchase by 2022-2024, I think it may take another 10-20+ years before we see a true mass market for self-driving vehicles. Even though the technology will be ready, I sense it is going to take the public much more time to come to trust these automated vehicles and take what will be a leap of faith: trusting them to cart us around safely.

Has the Tablet’s Window of Opportunity Closed?

I’ve been testing Apple’s latest iPad Pro 10.5-inch tablet. It’s a very good piece of hardware, and when iOS 11 moves from beta to full release later this year, the software will represent a significant leap forward, too. With the launch of this product, Apple jumpstarted the debate about whether an iPad can replace a Mac or PC. But as good as the iPad Pro is, I can’t help but think that all the hand-wringing about tablets versus notebooks is just misplaced angst. Today’s users have already chosen their platform, and future generations will likely choose neither, opting instead for increasingly powerful smartphones that will usher in brand new ways of computing.

iPad Pro: Accelerated Iteration
Apple launched the first iPad in 2010, and just seven years later this new iPad Pro represents a stunning amount of product evolution. The A10X Fusion chip offers processing power on par with some PC CPUs. The 10.5-inch screen includes new True Tone and ProMotion technologies. The first calibrates the screen’s colors based on the ambient light conditions. The second ramps up or down the screen refresh rate based on the content and also makes using the optional Apple Pencil feel even more natural than before. The optional Smart Keyboard case makes it possible to bang through typing chores much faster than tapping on glass. As with previous iterations, the iPad Pro continues to offer plenty of battery life, and none of these new features diminish that. My unit includes LTE, which means unlike every PC or Mac I’ve ever owned, the iPad Pro is always connected.

The new features coming in iOS 11 are too numerous to list, but many of them are focused on making the iPad more productive. I’m running the public beta, and capabilities such as a viewable file system, support for drag and drop, and improved multitasking mean that I can accomplish more things than ever before on the iPad. But for me, it still can’t replace my notebook.

I’m a long-time tablet fan, and I use an iPad every day, usually after work, to consume content. I’m hooked, and will likely use a tablet for the rest of my days in some capacity. There are clearly many others like me, but I do wonder if the confluence of events back in 2010/2011 that caused many of us to pick up a tablet—namely a PC market that saw innovation slow to a crawl and a smartphone market made up of products with sub 4-inch screens—has now passed. With both of these challenges now addressed, where does this leave the tablet in terms of grabbing new users?

What’s Next?
Apple CEO Tim Cook has long argued that an iPad is the best computer for most people because it is less complex and therefore easier to use than even a Mac. It’s a compelling idea, but one whose window of opportunity may have already closed. Many long-time PC and Mac users may love their iPads for consuming content, but ultimately even the new iOS 11 represents too many restrictions for people who have lived with the freedom of a full desktop OS. For these folks, the tablet is additive at best. And in emerging markets where the PC isn’t well-entrenched, people have already chosen the smartphone as their primary computing device. It has a large screen, plenty of compute power, and it’s always connected.

Ultimately, I expect the smartphone—or some future iteration of it—to replace both the PC and the tablet. In the near term that means phones will likely take on more desktop-like capabilities when needed, but longer term it means more fundamental changes. Eventually, augmented reality technologies will mean we’re no longer tapping on glass or staring at 5-, 10-, or 15-inch screens. Some think standalone AR devices will replace smartphones, but I tend to think the smartphone will still power most of these experiences. In fact, I’d argue that at some point smartphone screens may even begin to shrink, before they disappear altogether, replaced by accessories worn on the wrist, ears, and eyes, that serve up a wide range of augmented operating environments and experiences. In fact, Apple will likely lead this charge with iterations of today’s Apple Watch, AirPods, and its long-rumored glasses.

Of course, not everyone will be interested in embracing the smartphone as the one device to rule them all. Which means there will still be a place in the market for notebooks and tablets for years to come. And so Apple’s focus on making the iPad more capable today certainly isn’t a wasted effort. However, it is hard not to see the smartphone as the ultimate computing platform of the future.

The Voice Speaker Tipping Point

With reports this week that Samsung is readying a Bixby-powered voice speaker for the home, and an announcement from Alibaba that its entry in the category will be launching next month, it feels as though we’re reaching a tipping point in the market pioneered by the Amazon Echo. It seems as though pretty soon every major platform and device vendor will have an entrant in the market, signaling a new phase in its development. But this market isn’t quite like other markets that have gone before.

A Tipping Point in Voice Speakers

Amazon, which arguably created the voice speaker market with its Echo device in late 2014, had the market largely to itself for a good two years. Then, Google entered the market with its Home device late last year, and this year saw a slew of announcements at CES, mostly of Amazon Alexa-powered speakers, with an announcement last month by Apple and this week by Alibaba, among others. Things certainly seem to be picking up steam.

Apple’s HomePod should be with us later this year, while Tencent has said it’s working on something in this space, Lenovo’s Smart Assistant was announced at CES but hasn’t become available yet, multiple speakers from Microsoft partners including HP and Harman Kardon are on the way, and Samsung is reportedly working on a speaker powered by its Bixby voice interface. On top of all those, there are quite a few others from smaller companies.

A Different Kind of Market

Most markets in consumer technology go through multiple phases, often pioneered by one or two companies who prove out the opportunity, followed by a rushing in of new players as the opportunity becomes obvious to others, and an eventual thinning and consolidation of the market as the winners begin to emerge. In the last few years, the rushing in phase has been characterized by an influx of low-cost Chinese competitors in markets as diverse as smartwatches, drones, virtual reality headsets, fitness trackers, and more.

That hasn’t really happened in quite the same way in the voice speaker market, for one obvious reason: this isn’t just another hardware category where free, off-the-shelf software gets you an instant global presence. Even though Amazon has opened up its Alexa platform to others, and we’ve seen a number of devices beyond speakers launched which incorporate it, that platform is still severely limited geographically. The Alexa Voice Service which device vendors can use is so far only available for the UK, US, and Germany. And of course Amazon as a brand may be present in many markets, but it is only really popular in fewer than a dozen countries worldwide.

The Google Assistant so far only works in English, though support for other languages is coming shortly, but even once those roll out much of the world will be left without a voice assistant platform that speaks its language. Apple’s Siri, at least on iOS, supports many more languages, but it’s not yet clear which languages HomePod will support, and of course Siri isn’t a licensable platform.

Localization Beyond Language

But language isn’t the only localization challenge with voice assistants. These assistants need to understand local accents and idioms, know the right conversions for locally-used measurements, be familiar with television shows, movie stars, and sports figures in each country, and so on. And they need to integrate with relevant local entertainment, information, and other services. That makes expanding into other markets particularly challenging, and it’s yet another reason why most successful voice assistants will be part of broader ecosystems coming from big companies like Amazon, Google, Microsoft, and Apple.

However, that means the broader opportunity for voice speakers is nothing like as large as for other recently hot consumer electronics categories, with the long tail of cheap Chinese vendors in particular likely to remain largely absent. It’s possible that, with the entry of players like Alibaba into voice speakers and Tencent and Baidu into voice assistants, we’ll eventually see some expansion into lower-cost tiers. But this is likely to remain a highly regionalized market, to a far greater extent than any other recent consumer electronics category.

That’s important because there’s already a false narrative around a global market in voice speakers. Several of the news outlets that covered Alibaba’s announcement this week said it was a competitor to Google Home and Amazon Echo, but of course since those devices don’t work in China and Alibaba’s won’t work outside China, they’ll never actually go head to head.

Business Models Will Vary Too

The other interesting thing about the voice speaker market is that, for at least some of the players, it will be a means to an end rather than a lucrative business in its own right. It’s already clear, for example, that Amazon sees the Echo family and the Alexa platform as an opportunity to sell more stuff on Amazon.com, while Google plans to use advertising to create additional revenue streams on the somewhat cheaper Home. Apple, meanwhile, will take its usual tack of monetizing the complete package of hardware and software, though it will likely see some uplift in services like Apple Music off the back of HomePod sales as well.

In China, meanwhile, we’ll likely see these and other business models play out, with Alibaba’s device named after one of its popular online stores, and Baidu’s and probably Tencent’s efforts likely to be more ad-focused. All of this will lead to different pricing strategies for the hardware itself, with the early Chinese examples hitting price points roughly half those of the two early leaders in the US market, and Apple in turn pricing its premium speaker at roughly double those devices.

This is going to continue to be a fascinating market to watch unfold, one that won’t necessarily follow any of the established patterns from other recent hot devices. It will be more regionalized – even balkanized – and more varied in the business models than other device categories. And as a result we’ll likely see several major players taking leading positions in different regions around the world, rather than global winners as in smartphones, tablets, or PCs. Over time, we’ll certainly see the usual thinning and consolidation as some winners do emerge and smaller players fail to gain traction, but in the meantime it feels like we’re going to see lots more new entrants and interesting devices and business models.

AMD and NVIDIA Target Miners with Specific Hardware, Longer Production Times

The cryptocurrency mining rush is in a delicate state. The values of Bitcoin, Ethereum, and other smaller currencies have stalled out of the rocket-like trajectory they were on last month and settled into a slower, more moderate cycle of growth. Last week I wrote a story that warned of a pending backfire for those betting heavily on the hardware portion of the mining craze, and I stand by my assessment of the risk that AMD and NVIDIA must address as we prepare for the stabilization of mining difficulty that will make GPU-based mining inefficient.

The early wave of sales spikes on graphics hardware happened at the previous pricing models, but both AMD and NVIDIA are attempting to improve their return on cryptocurrency sales by raising GPU prices to partners in line with current market pricing. Previously, only the consumer-facing resellers were seeing the advantages of the higher pricing, and it was only a matter of time before NVIDIA and AMD took their share. While in theory this might affect the MSRP for these parts in the GeForce and Radeon lines, in practice the current elevated prices will simply remain. Expect NVIDIA and AMD to roll back these temporary price hikes when demand for the cards dies down.

In the last couple of days however, both AMD and NVIDIA add-in card partners began listing and selling mining-specific cards that separate themselves through reduced feature sets and lower pricing. NVIDIA is offering both GP106 and GP104 based hardware, equivalent to the mid-range GTX 1060 and high-end GTX 1070/1080 gaming cards, though without the branding to indicate it. Partners like ASUS, EVGA, MSI, and others are being very careful to NOT call these products by the equivalent GeForce brands, instead using something equivalent to “ASUS MINING-P106-6G”. To a seasoned miner, the name gives enough information to estimate the performance and value of the card but tells customers looking for gaming hardware that this one is off limits. Why? Many of these are being sold without display output connections, making them less expensive but nearly unusable for any purpose other than compute-based cryptocurrency mining.
AMD has partners offering similar options, some with and some without display output connectivity. The first wave is based on the Radeon RX 400-series of GPUs rather than the current RX 500 products.

These new offerings allow both AMD and NVIDIA to take advantage of the mining market to sell an otherwise untenable product. For AMD, after the launch of its RX 500-series of cards in April, any inventory of the RX 400-series needed to be sold at steep discounts or risk being held in warehouses for months. By targeting these products to mining directly, where they are still among the most power- and dollar-efficient for the workload, AMD can revive the product line without sacrificing as much on price.

In other cases, for both AMD and NVIDIA, the ability to sell headless graphics boards (those without display connectivity) offers the chance to sell GPUs that might otherwise have been sent to recycling. As silicon is binned at the production facility, any GPU without fully operational display engines would be useless to sell to a gamer but can operate as part of a cryptocurrency mining farm without issue. This means better margins, more sales, and an overall more efficient product line moving forward.

Producing mining-specific cards should also benefit AMD and NVIDIA in the longer view, assuming they can make and sell enough for it to be effective. Because headless GPUs are not useful to the gaming community, they cannot be a part of the flood of products into the resale market to impact the sales of legitimate, newer gaming hardware from either party. This dampens the threat to GPU sales in the post-mining bubble, but only to the degree that AMD and NVIDIA are successful in seeding this hardware to the cryptocurrency audience.

The quantity of these parts is the biggest question that remains. Initial reports from partners indicate that only a few thousand are ready to sell, and mostly in the APAC market where the biggest farms tend to be located. But I am told that both vendors plan to ramp up this segment rather quickly, hoping to catch as much of the cryptocurrency wave as possible. AMD in particular has extended its Polaris GPU production through Q1 of 2018, a full quarter past original expectations. This is partially due to the outlook for the company’s upcoming high-end Vega architecture but also is a result of the expected demand for GPU-based mining hardware this year. AMD appears to be betting heavily on the mining craze to continue for the foreseeable future.

Sprint-Cable Deal Would Be Mixed Picture for Industry, Consumers

It was reported this week Comcast and Charter are in discussions with Sprint regarding an expanded MVNO arrangement, or a possible equity stake/outright acquisition of the company. Wall Street has weighed in on the cost synergies of a deal, plus the possible stock impact on the various stakeholders. But what would this mean for the industry, and consumers?

The biggest beneficiaries of the deal would be Sprint and the cable companies. For Sprint, this could be a win-win. A tie-up with cable would provide needed capital and, potentially, valuable infrastructure (pole sites, backhaul) for its small-cell-centric network buildout. For cable, this would allow them to go ‘all in’ on wireless in a way that the current deal with Verizon doesn’t. Plus, if the joint network assets are effectively combined, this could really position the two entities to offer the wireless/broadband ‘network of the future’, combining DOCSIS/fiber, wireless towers (macro cells), small cells, and Wi-Fi.

The impact on the industry and consumers would be mixed. One negative is that the current imbalance in the level of competition between broadband and wireless would be maintained. In the current U.S. broadband internet (BBI) market, some 50% of urban and suburban areas have only one ISP option for 25 Mbps or better internet. This deal would further cement that concentration. Alternatively, a Sprint/T-Mobile combination could offer, in part, a competitive broadband alternative in a 5G world, and could possibly push Verizon and Charter together, plus other cable/telco deals that might result in a 2-3 player near national BBI landscape.

It’s a mixed bag for wireless, too. I have argued that consolidation from four national wireless operators to three would be good for the industry and consumers, especially given the capital needed to keep up with capacity demand and build 5G. It is very difficult to envision four national 5G networks, given the number of small cells that would be required.

A Sprint-Cable tie up would lead to a cascade of deal activity. DISH becomes the power broker, as a potential spectrum supplier to Verizon and/or acquirer of T-Mobile. This might also hasten further consolidation in the cable industry. I would also not rule out one of the big Internet players doing something, possibly Google or Amazon.

What is the impact on consumers? Well, the wireless industry isn’t all that healthy right now. All the major operators except T-Mobile are pretty battered and bruised from the ‘unlimited’ price war. Sprint is committing increasingly irrational acts to win subscribers, while cutting costs and under-investing in its network. With continued data traffic growth and a wave of investment needed to build out newly acquired spectrum and ultimately 5G, the industry needs to be on a healthier financial footing. A combination of T-Mobile and Sprint would be better for the industry and consumers, long term, than today’s four player battle royal. Actual cable company skin in the game increases the likelihood of a viable four player market. And the other deals that a Cable-Sprint deal could trigger might usher in more competition in broadband.

We also need to take the longer-term view here. Historically, mobile and broadband internet have been on their own separate islands. Now, with small cells, Wi-Fi, 5G, and more abundant spectrum, this starts to become more of a giant Venn diagram. Testament to this is the number of fixed wireless access pilots, deployments, and trials planned for the next year or so by Verizon and AT&T. If successful, 5G could become a viable broadband alternative. Between the results of those deployments and the various industry M&A scenarios, we’ll know a lot more about the future shape of the cable-telco-wireless space by this time next year.

The iPhone for the Next Ten Years

Given that this week marks the tenth anniversary of the iPhone going on sale, there’s lots of navel-gazing about the impact the iPhone has had on the industry (including my own take on Monday for subscribers). However, what I want to do today is think about which products in the market today might have a comparable impact to the iPhone over the next ten years.

I put this question to my Twitter followers, and got a whole range of interesting results, including:

  • Tesla (both cars and solar shingles)
  • Oculus Rift
  • Crispr
  • and the Nvidia DGX-1 for AI and machine learning!

Those are all fascinating answers, including a couple I never would have included in my own analysis here. But I have a different set of three possible products in mind, and I’ll talk about each of them below. As a reminder, what defined the impact of the iPhone was that it was a single product from a single company, and yet that product never achieved majority market share, but still managed to transform not just its own industry (smartphones) but both created and transformed others as well. So that’s the bar that any worthy successor has to clear.

Amazon Echo

To my mind, one of the products that has the best claims to this title over the next ten years is the Amazon Echo. Like the iPhone, it has essentially created a new category which really didn’t exist in the same way previously, and has captured the public imagination in ways few would have predicted. It’s done so with a new interface (much as the iPhone used its multi-touch interface as a key selling point) and has created value beyond Amazon’s own contributions through “Skills” or apps and integrations with other companies. In the process, it’s created a market that now also includes Google and will shortly include Apple, and that also includes many smaller manufacturers and products.

Apple Watch and AirPods

Although it might seem funny to include another Apple product (or two) in this analysis, these two feel emblematic enough of two emerging wearables categories to include them here as well. The Apple Watch is by far the most successful smartwatch out there, while AirPods promise to create a new category around the ears some have called “hearables”. More broadly, though, they’re part of a trend we’ll see in the coming years in which the functions of the smartphone will be increasingly delegated to other peripheral devices, whether merely as input and output devices in the short term or as powerful processors in their own right. These devices will over the next ten years increasingly take on tasks that smartphones have themselves taken over from other devices over the past ten years.

Microsoft HoloLens

I hesitate to include this device on this list, mostly because it’s far from being a mainstream product today and therefore isn’t really in the same category as the iPhone. But it’s perhaps the most high-profile example we have today of an AR headset, and that category as a whole does feel like it will be very important over the coming years in defining new interfaces, creating new markets, and generating tons of new value. More likely, though, it will be Magic Leap, Apple, or some other company which eventually brings a mass-market AR headset to market and truly creates a new category. For now, as I’ve written previously, AR will be dominated by the smartphone, but much of the work that’s done on smartphone AR will eventually be applicable to headset AR too. Much more than the Oculus Rift, which focuses on VR and therefore a smaller long-term addressable market, AR headsets feel like they’ll be a really important category ten years hence, even if the HoloLens doesn’t yet capture what that market will look like.

Google is MIA

One thing that struck me here is that no Google device is on the list – both Google and Microsoft have recently pushed into hardware, and while Microsoft’s HoloLens made my list with the caveats above, nothing Google has made yet has been anything other than just another entrant in an existing category. On the other hand, cloud services and the AI and machine learning that powers much of the next generation of those services will have a significant role over the next ten years, though no single product or service will have a massive impact.

Two other answers

However, ultimately, I think there are two other answers that are more compelling than any of the three I’ve just listed, and they are “the iPhone” and “none”. The reality is that the iPhone turned the smartphone into the biggest consumer electronics category the world has ever seen or is likely to see. The smartphone is going to become essentially ubiquitous around the world over the next few years, and no other product can hope to match that ubiquity, at least during the ten-year time horizon we’re talking about here. Voice speakers are a fascinating new category, and will grow significantly, but won’t be in a majority of homes for many years, and it’s smartphones that will continue to provide ubiquity for voice assistants. Accessories like smartwatches and bluetooth earpieces are just that – accessories to smartphones – and though they will take over smartphone functions as I described above, they will continue to meet the needs of subsets of smartphone users and be heavily tied to smartphones for the foreseeable future. Lastly, AR will be big in time, but again it’s through smartphones that the technology will have its broadest impact, while headsets serve a much smaller market even ten years from now.

As such, the iPhone and the smartphone market it inaugurated will continue to be the most influential over the next ten years, just as they were over the past ten. And no single new product in the market today will exert a comparable influence over the industry over the next ten years, even though we’ll see some fascinating new user interfaces, product categories, and changes in the way we all use technology and interact with each other and the world around us. As I’ve long argued, though, just as Apple shouldn’t shy away from new product categories because they can’t match the iPhone’s scale, neither should any other player in the market be cowed by the impossibility of matching the smartphone’s impact on the world. There are plenty of worthy places in today’s technology landscape to put effort and investment which will pay off handsomely in the coming years, and I’m looking forward to all the innovation that’s yet to come.

iPad Pro: You do You!

If you insist on looking at iOS 11 + iPad Pro = PC, you might miss the opportunity for this combo to live up to its full potential. I know that, for many, PCs and Macs are synonymous with work and productivity, and therefore my suggestion to start looking at the iPad Pro differently is lost on them. Yet, I promise you, there is a difference between wanting to replicate what you have been doing on a PC and wanting to understand whether the iPad Pro can fit your workflow, or even whether it could help your workflow change to better fit your needs.

I have been using a 9.7” iPad Pro as my main “out of office” device since its launch. I do everything on it that I do on my Mac or PC; some things are easier and some are a little more painful, but by and large, it serves me well.

I upgraded to a 10.5” iPad Pro a couple of weeks ago and it has been business as usual. I enjoy the extra screen real estate, though I struggled a little to adjust to the larger keyboard, as my fingers had a lot of muscle memory in them that was generating typos. I did not use the Pencil more, despite the fact that, thanks to the new sleeve, I was not forgetting it at home as often as I used to.

After 24 hours with iOS 11 and the iPad Pro, it became apparent to me that the range of things I could do had grown, and so had the depth I could reach. These are not necessarily tasks I was performing on my Mac or PC, and when they are, they are implemented in a slightly different way because the premise on iPad is touch-first.

Let’s take a Step Back

Before I moved to the iPad Pro, I had to embrace the cloud. This step was crucial in empowering me to use the best device for the job at any given time. When I travel, mobility trumps everything else. Going through the little bit of pain that a smaller screen and keyboard imply is well worth the advantages of cellular connectivity, instant-on, all-day battery, and the ability to dump everything into one purse.

What does a normal day at the office entail for me? Well usually I engage in most of the following: reading articles, reports, papers and books, writing, social media interactions, listening and recording podcasts, email, messaging, data analysis and creating or reviewing presentations.

I could perform all of those tasks on an iPad Pro as well as on a MacBook or a PC. What differed was which task was best executed on each device. Anything touch-first was better on my iPad Pro or my Surface Pro, as was anything that supported pencil or inking. The MacBook Pro and Surface were slightly better with Office apps, but mainly because of the larger screen and the better keyboard. The iPad Pro still offered a better balance of work and play, thanks to the larger ecosystem and better apps, and partly because the Surface is held back by Windows 10 conventions that make it walk and talk too much like a PC.

iOS 11 Brings Richness to the iPad Pro

This is not a review of iOS 11 so I will not list all that is new with it but I will point to the features and capabilities that iOS 11 offers that struck me as changing the way I work.

Files
Adds freedom to my cloud-first workflow, allowing me to live in a multi-cloud-provider environment, which was possible before but not without pain.

The New Dock, Slide Over, and Split View
Make for a faster, richer multitasking environment that you appreciate when you must always keep an eye on social media, or when you are creating charts or sifting through data while writing a report.

Drag and Drop
This is possibly the best example of a feature that, despite sharing the same name as on the Mac, is made a zillion times better by touch. It turns something that is cumbersome to do with a mouse into something super intuitive.

Instant Markup, Instant Notes, Scan and Sign
Despite still preferring the Surface Pen I finally see myself being able to integrate Pencil in my workflow. I read many reports and I used to print them out and annotate them, highlight them and then take pictures of them so that I would not file them somewhere safe and never see them again. All those steps are now condensed for a much more efficient and equally productive experience. In this case, it is not about being able to do something I was doing on my Mac. It is, instead, the ability to fully digitize a workflow. It also allows Apple to catch up with inking on Surface – and I specify Surface as I have not found another Windows 10 2in1 that offers the same richness of experience.

QuickType Keyboard
I am still not a fan of the physical iPad Pro keyboard. I do not like the texture of the material and the lack of backlight limits the usefulness on planes and in bed, sadly two places where I often work! Because of that, my default has always been the digital keyboard and the new update makes it a breeze to touch type on iPad Pro both for speed and accuracy.

Screen Capture and Screen Record
We have already started to experiment with the video record function to share an interesting chart with some live commentary. This is something we could have done in the past but required specialized apps and a pretty convoluted process. Screen Capture coupled with Instant Markup also offers a new way to interact with content that Samsung Galaxy Note users have already been addicted to.

These are all new features that will make my work on the iPad not just more efficient but more pleasant, because it better fits me. I am sure this will result in more time spent on the iPad when I am in the office, not just when I am on the go.

The Best Tool for the Job

If you think about other technology innovations, there is always a degree of compromise, at least for a given period. Think about the clarity of a voice call on a fixed phone vs. a DECT phone and then a cellular one. We were happy to sacrifice sound quality, first for the freedom to walk around the home and later to always have a phone with us. The same can be said about fixed broadband vs. mobile broadband.

I feel though that with computing we are getting to the point of not having to compromise as long as we do not let habits hold us back and we feel empowered to reinvent our workflows. Millennials and Gen Z have the advantage of not suffering from the limitations muscle memory imposes, but there is hope for us old dogs too.

Worrying about the installed base has destroyed companies such as Nokia and BlackBerry and has held back Microsoft. Apple has had the advantage of a very loyal and forgiving installed base of users who gave it the benefit of the doubt when it started to experiment with computing. Microsoft has stepped up and, with Surface, has been able to deliver a richer experience that comes from a deeper integration of software, hardware, and apps. Yet, while the goals of these two giants seem very much aligned, I cannot help but wonder if Microsoft’s decision to go all in with Windows 10 will always hold it back somewhat vs. an Apple that has chosen a two-pronged approach to computing.

For me, the Surface Pro today is the best productivity device on the market, but it is held back from being a true creativity device by Windows 10. As a user who wants to be both creative and productive, the iPad Pro is the choice for me today. I am, however, keeping my eye on Windows 10 S as Microsoft’s opportunity to create a two-pronged strategy that frees it of the legacy ball and chain.

Business Realities vs. Tech Dreams

Never underestimate politics.

No, not the governmental type, but the kind that silently (or not so silently) impacts the day-to-day decisions made in businesses of all sizes, and personal relationships of all stripes.

Even in the seemingly distinct world of technology purchasing, there is often a surprisingly large, though not always obvious, set of key influences that come from decidedly non-technical sources and perspectives.

In fact, one of the more interesting things you realize, the more time you spend in the tech industry, is that good technology is far from a guarantee for product or market success. Conversely, while there are certainly exceptions, a large percentage of product or even complete business failures comes from factors that have little to do with the technologies involved.

Business realities, organizational politics, industry preferences, existing (or legacy) hardware, software, and even people, as well as many other factors that you might not think would have an influence on buying decisions, often are way more important than the technology itself. Unfortunately, there seem to be quite a few people in tech who don’t recognize this, and a lot of them only learn it the hard way.

From great startup ideas to innovative product incarnations from existing players, the number of new products that are thrown out into the world with the thought that the technology is good enough to stand out on its own is still surprisingly high. While I can certainly appreciate this nearly slavish devotion to the disruptive potential that a great technology can have, it’s still kind of shocking how many ideas get funded or supported with little practical prospect of success.

In part, this speaks to the staggering amount of money being lavished on tech-focused entrepreneurs these days thanks to the influence that technology companies are having even on very traditional industries. From the influence of IoT in manufacturing or process industries, to the rewriting of the rules for something as basic as retail groceries, the reach of the tech industry and people involved with it has grown surprisingly wide. As a result, there’s an enormous amount of money being tossed towards tech initiatives, but some of it appears to be done without much thought. Put another way, there sure seems to be a lot of stupid money in tech.

Of course, another reason is that accurately predicting major tech trends has proven to be a challenging exercise for most everyone. For every app store concept blazing a trail of new business opportunities, there’s a lot of 3D TV-like concepts strewn across the side of the road. Given that reality, it certainly makes sense to hedge your bets across a wide range of product and technology concepts to make sure you don’t miss a big new opportunity.

At the same time, companies (and investors) need to spend more time thinking through the tangential, historical, political, social, and yes, personal impacts of a new product or technology before they bring it to market. Arguably, there should be even more time spent on these non-technical aspects than the technical ones, but few companies are willing to make the effort or do the necessary research to really understand these potentially crippling issues.

With enterprise IT-focused products, for example, if a new offering has the potential to improve efficiencies for a given process or department but does so in a way that potentially eliminates the jobs of people in that department, it often doesn’t matter how conceptually cool the technology is because it’s going to hit resistance from existing IT personnel. In fact, some of the biggest challenges in trying to deploy ground-breaking new technologies in businesses are people problems (i.e., political), not technology ones.

In the case of a hot new technology like IoT, it’s not uncommon to find different groups with a particular vested interest within an organization getting into “turf wars” when a new product or technology consolidates previously distinct business segments or departments. Gone are the days when the only part of a business that buys tech-related products is the IT person or department—the lower cost and ease of use of many new tech products and services have democratized their reach—so the potential for these kinds of technological land grabs grows every day.

In the consumer world, the influence of “legacy” products, tech “fashion”, and other non-technical factors can be much bigger than many realize when it comes to consumer purchase choices. Whether it’s the desire or need to work with products that people already own, or a predilection (or disdain) for particular brands, these other non-tech issues are even more important to consumers than they are to business technology buyers.

The bottom line is that the tech purchase process for both businesses and consumers is far from the ivory tower, purely rational set of comparisons that many in the tech industry presume it to be. And, as tech further extends its influence across a wider range of our personal and professional lives, that separation from a simple rational analysis is likely to grow.

Great technology will always be important, but seeing that technology and its potential impact in the right context, and understanding how it may, or may not, fit into existing environments will be an increasingly important factor in determining the ultimate success or failure of many new ventures.

How the iPhone impacted Five Major Industries

On June 29th, Apple will celebrate the 10th anniversary of shipping the iPhone. Although the iPhone was introduced at Macworld in January of 2007, it did not actually ship until the latter part of June of that year. I was lucky enough to get a preview of the iPhone the day before it was introduced at Macworld, when Apple Senior VP of Marketing Phil Schiller put the iPhone on a coffee table and asked me what I saw.
I told him I saw a piece of glass in a metal case. He told me that is what Apple wants you to see. In off mode, that is exactly what it is. But once turned on, that is where the magic is. Apple sees itself as a software company first and creates devices, like Macs, MacBooks, iPods, iPhones, Apple TV, and the Apple Watch, to run its innovative software.

Before the iPhone was released, there was a lot of hype around it; it was even nicknamed the “Jesus phone,” as some felt it would be miraculous. At the time, none of us believed it could live up to the hype, but to our surprise, this time the hype was correct: the iPhone turned out to be a powerful new technology that has impacted the lives of hundreds of millions of people around the world. For almost all of them, it has changed the way they communicate, work, learn, and play.

But perhaps the most surprising thing about the iPhone, besides how it has become the most important technology most of us have with us all of the time, is the impact it has had on five major industries.

The first industry it has upended is the PC market. Until the iPhone shipped, the industry was selling around 400 million PCs a year. But as the iPhone, and smartphones in general, have become critical tools for information, used for productivity, voice, texting, and pleasure, the PC has become less important to many people. Until the mobile revolution that came with the iPhone, the only way people could get onto the Internet was from a PC or laptop. Today, thanks to the iPhone, the iPad, and the Android equivalents that basically copied Apple’s designs, people have many more options to make the connections they need no matter where they are. Consequently, the PC industry now ships only about 275-290 million PCs a year, and this has driven a level of industry consolidation that is now concentrated mainly around Lenovo, HP, Dell, Acer, and Apple. What Apple did that really impacted the PC market is that it put a PC in your pocket.

The second industry that was impacted was telecom. Before the iPhone, the business models of AT&T, Verizon, and most of the original telcos were built around voice. Yes, VoIP became popular by 2000 and had already started pushing them toward digital voice instead of traditional landline delivery, but the iPhone pretty much forced them out of the traditional voice business altogether. Just try to find a payphone today compared to the millions of payphones that were in place in 2007.

Now, all of the telcos are data communications companies with totally different business models compared to what they had in 2007. All of them have added information services, entertainment services, and the like to their digital communications business, and all are now conduits for supplying data services of various types to their customers.

The third industry the iPhone turned on its head is the movie and TV industry. For most of my life, in order to watch a movie I had to go to a movie theater, and to watch a TV show I had to sit in front of my television at home. But the iPhone created a mobile platform for video delivery, and since 2007, every major movie and TV studio has been forced to expand its distribution methods to include streaming services for both fixed devices like a TV and mobile devices. It was the millions of iPhones out in the field capable of letting people watch video anytime and anywhere that forced these studios to move in this direction. It also fueled the growth of video services like YouTube, while Netflix, Hulu, and others have become video powerhouses for which at least 50% of content is viewed on some type of mobile device.

The fourth industry the iPhone impacted has been the gaming industry. Before 2007, most games were delivered on a game console, a PC, or a dedicated handheld device like Nintendo’s handheld game players. But the iPhone expanded the market for games, and now almost every game, unless it is graphically intensive or optimized for dedicated consoles or high-end PC gaming systems, is available on the iPhone or some type of mobile platform like a tablet.

As a result, the gaming market, which in 2007 was a rather narrow market, has now expanded to one that allows hundreds of millions to play games on their iPhone or smartphone equivalent, something that was not possible in 2007.

Fifth, it has also impacted the health industry. Today one can use an iPhone to monitor various health issues, access health information, connect with health professionals, and even get health advice anytime and anywhere. Only recently have we started to see how a smartphone impacts the health industry, and we will see its role expand as this industry embraces the smartphone for outpatient care.

But perhaps the biggest impact it has had is on Apple itself. Before the iPhone, Apple was known as Apple Computer. Today it is Apple Inc, a company that makes much more than a computer. And the iPhone accounts for over 60% of Apple’s total revenue now and is bringing in record revenue each year. Apple is on track to potentially be the first trillion dollar company and is already the most valuable company on the planet.

Looking back over these 10 years, the hype before the launch of the iPhone actually underestimated what it could do and its eventual impact on industries and individuals. And with Apple on track to define and grow AR on mobile, the company seems ready to make the iPhone even more important to our digital lifestyles, and it will most likely impact other industries in ways we have not even thought of today.

Surface Laptop and Microsoft’s Hardware Long Game

I’ve been testing Microsoft’s recently launched Surface Laptop, and it’s an extremely well-designed piece of hardware. Microsoft seems to have obsessed over every detail in its first laptop, save perhaps the shipping OS (Windows 10S), and the result is a product that an awful lot of people are going to like. The Surface Laptop is also notable because even at its relatively high starting price of $999 it has the potential to drive shipment volumes well beyond what Microsoft has seen before, and it represents yet another step forward in the company’s slow-but-steady move towards becoming a hardware heavyweight.

World-Class Hardware
Most reviewers are infatuated with the Alcantara-covered keyboard, but the key design element for me is the touch-enabled 13.5-inch, 3:2 aspect ratio screen. It mimics the ratio of Microsoft’s Surface Pro and Surface Book, and I like it because it offers more vertical screen real estate than your typical widescreen (16:9) notebook. It’s not an OLED, but it is still beautiful, with a 2256 by 1504 resolution and 201 pixels per inch.

Microsoft has been honing its out-of-box experience since the first Surface tablets shipped, and here again, the company brings a top-notch experience augmented by Cortana during setup. Using just my voice I was able to get through much, but not all, of the initial setup. It’s a smart way to remind people that Cortana is on Surface, and getting more useful all the time.
Fit and finish is quite good, the Alcantara does feel nice (longevity TBD), and the touchpad is large and highly responsive. Microsoft’s decision to forego USB Type-C ports feels uncharacteristically backward looking but is undoubtedly rooted in user feedback. Finally, the Windows Hello face sign-in camera on the notebook is amazingly fast. Once you get used to signing into your notebook this way, even a lightning-fast fingerprint scan seems old school.

Windows 10S: The Challenge of Subtraction
While I have very few complaints about the Surface Laptop hardware, I’m afraid my take on Windows 10S is less charitable. Some have called this a pared-down version of Windows 10, but that’s not exactly accurate. It’s the same Windows 10, just more restricted. I fully understand what Microsoft is trying to accomplish here, and respect it. By only allowing us to install apps from the Windows Store, Microsoft says it can ensure a fast, stable operating system that won’t face the inevitable slowdown that occurs when you install (and uninstall) legacy Windows apps. The problem is that many of the legacy apps that longtime Windows users depend upon will never make it into the Windows Store. The inability to run a third-party browser, combined with my company’s use of proprietary software, makes Windows 10S a non-starter for me. Happily, Microsoft lets Surface Laptop users switch to Windows 10 Pro for free (at least through the end of the year). I’ll be making that switch immediately. Near term, Windows 10S might make sense on low-cost education hardware competing with Chromebooks. Long term, to find mainstream acceptance, Windows 10S will require a much more robust offering within the Windows Store.

High-Value Vs. High Volume
With the launch of the Surface Laptop Microsoft now has a very well-rounded hardware lineup. It joins a newly refreshed Surface Pro, the high-powered Surface Book, and the creator-focused Surface Studio desktop. With the launch of each new piece of hardware, Microsoft has further burnished its reputation for shipping high-end, well-designed hardware. Rather than focusing on selling high volumes, the Surface team is clearly focusing on selling high-value. And by focusing first on the nascent detachable space, Microsoft built a sizeable hardware business without taking share away from partners such as Dell, HP, and Lenovo, who only entered the space after Microsoft helped establish it. However, looking at IDC’s first quarter 2017 detachable numbers brings to light an interesting detail: While Microsoft’s share of the category has dropped as the market has grown, it still has by far the highest average selling price (ASP) among top vendors at nearly $1,200. So its 15.1% unit market share drives a healthy 27.7% of the detachable market’s total revenues. (Apple’s iPad Pro is number one in units with 32.5% of the market and 38.6% of revenues; Samsung is third with 10.3% of the unit market share but just 7.8% of the revenues). Meanwhile, nobody expects Surface Studio to ship tens of millions of units, but with a price range of $2,999-$4,199, it’s certainly going to drive some enviable ASPs.
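To make the unit-share versus revenue-share math concrete, here is a quick back-of-the-envelope sketch in Python using the IDC figures quoted above. The market-average ASP it derives, and the per-vendor ASPs it backs out for Apple and Samsung, are implied values computed for illustration, not numbers IDC reported.

    # Back-of-the-envelope: how unit share plus ASP translates into revenue share,
    # using the Q1 2017 detachable figures quoted above. The market-average ASP
    # below is implied by those figures, not reported by IDC.

    ms_unit_share = 0.151      # Microsoft's share of detachable units
    ms_asp = 1200.0            # Microsoft's approximate ASP
    ms_revenue_share = 0.277   # Microsoft's share of detachable revenue

    # Revenue share = unit share * vendor ASP / market-average ASP, so:
    implied_market_asp = ms_unit_share * ms_asp / ms_revenue_share
    print(f"Implied market-average ASP: ${implied_market_asp:,.0f}")  # roughly $650

    # The same relationship suggests what ASPs would produce the other
    # vendors' reported revenue shares (purely illustrative estimates).
    for vendor, unit_share, revenue_share in [
        ("Apple iPad Pro", 0.325, 0.386),
        ("Samsung", 0.103, 0.078),
    ]:
        implied_asp = revenue_share / unit_share * implied_market_asp
        print(f"{vendor}: implied ASP of about ${implied_asp:,.0f}")

The asymmetry is the whole point: a vendor selling well above the market-average ASP converts a modest unit share into a much larger slice of revenue.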

Windows 10S challenges aside, Surface Laptop could drive decent unit volumes for Microsoft, especially if the company successfully utilizes its retail stores. And while the starting price may be $999, few will settle for this entry-level product, which means ASPs will be higher (my test system sells for $1,299; a maxed-out system is $2,199). Among the top five notebook vendors in Q1 2017, only Apple had a notebook ASP north of a grand ($1,560), while the rest landed in the high $500s to low $700s. Volume is the name of the game for most of these players. But market watchers and competitors should pay close attention to how well Surface Laptop does over the next 18-24 months. Microsoft may have just fielded a laptop line that will eventually grab a small piece of the overall notebook market, but an outsized chunk of the revenue pie.

How the Cryptocurrency Gold Rush Could Backfire on NVIDIA and AMD

The most recent cryptocurrency mining phase is having a direct impact on various markets, most notably on the GPU product lines from NVIDIA and AMD. Without going into the details of what a cryptocurrency is or how it is created and distributed on a shared network, you only need to understand that it is a highly speculative market, a gold rush accelerated and made profitable by the fact that mining runs efficiently on graphics cards usually intended for the PC gaming market. Potential investors need only purchase basic PC components and as many GPUs as they can afford to begin a mining operation with the intent of turning a profit.

As we look at the sales channels today, AMD Radeon graphics cards from the current and previous GPU generations are nearly impossible to find in stock, and when you do come across them, they are priced well above the expected MSRP. This trend has caused the likes of the Radeon RX 580, RX 570, RX 480, and RX 470 to essentially disappear from online and retail shelves. The impact hit AMD products first because its architecture is slightly better suited to the coin mining task while remaining power efficient (power being the secondary cost of the mining process). But as the well dries up around the Radeon products, users are turning their attention to NVIDIA GeForce cards from the Pascal-based 10-series product line, and we are already seeing the resulting low inventory and spiking prices for them as well.

Positive Impacts

For AMD and NVIDIA, as well as the add-in card partners that build products based on each company’s GPU technology, the coin mining epidemic is a boon for sales. Inventory that might have sat on store shelves for weeks or months now flies off them as soon as it is put out or listed online, and reports of channel employee-driven side sales are rampant. From the perspective of this chain (GPU vendor, card vendor, and reseller), a sale of a card is never seen as a negative. Products are moving from manufacturers to stores and on to customers, which is the goal of this business from the outset. Cryptocurrency has kept the AMD Radeon brand selling even when its product stack might not be as competitive with NVIDIA as it would like.

This trend of GPU sales for coin mining is not going unnoticed by the market either. Just today a prominent securities fund moved NVIDIA’s stock to “underweight” after speaking with add-in card vendors about stronger than expected Q2 sales. AMD’s stock has seen similar improvement and all relevant indicators show continued GPU sales increases through the next fiscal quarter.

Negative Impacts

With all that is going right for AMD and NVIDIA because of this repurposed use of current graphics card product lines, there is a significant risk at play for all involved. Browse any gaming forum or subreddit and you’ll find just as many people unhappy with the cryptocurrency craze as you will people happy with its potential for profit. The PC gamers of the world who simply want to buy the most cost-effective product for their own machines are no longer able to do so, with inventory snapped up the instant it shows up. And when they can find a card for sale, it is at significantly higher prices. A look at Amazon.com today for Radeon RX 580 cards shows starting prices at the $499 mark and stretching as high as $699. This product launched with an expected MSRP of just $199-$239, meaning current prices are more than double the launch price at the low end and roughly triple at the top.

As AMD was the first target of this most recent coin mining boom, the Radeon brand is seeing a migration of its gaming ecosystem to NVIDIA and the GeForce brand. A gamer who decides a $250 card is in their budget for a new PC will find that the Radeon RX 580 is no longer available to them. The GeForce GTX 1060, with similar performance levels and price points, is on the next (virtual) shelf over, so that becomes the de facto selection. This brings the consumer into NVIDIA’s entire ecosystem: using software like GeForce Experience, looking at drivers, game optimizations, and free game codes, and inviting research into GeForce-specific technology like G-Sync. Radeon has not lost a sale this generation (as the graphics card that consumer would have bought was instead purchased for mining), but it may have lost a long-term customer to its competitor.

Even if the above problem fades as NVIDIA cards also become harder to find, NVIDIA has the advantage of offering current generation, higher cost products as an option to PC gamers. If a user has a budget of $250 and finds that both the GeForce and Radeon options are gone to the crypto-craze, NVIDIA has GeForce GPUs like the GTX 1070 and GTX 1080 that are higher priced, but more likely to be at their expected price point (for now). AMD has been stagnant at the high end for quite some time, leaving the Radeon RX 580 as the highest performing current generation product.

Alienating the gaming audience that maintains both Radeon and GeForce from year to year is a risky venture, but one that appears to be impacting AMD more than NVIDIA, for now.

Other potential pitfalls of this cryptocurrency market come into play when the inevitable bubble reaches its peak. All mining gets more difficult over time, on the order of months, which makes mining coins much less profitable and requires significantly more upfront investment to turn a profit. The craze surrounding mining is driven in large part by the many “small” miners, those who run 10-30 cards in their homes. Once the dollar figures start dropping and the hassle and cost of upkeep become a strain, these users will (and have in the past) halt operations.
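The squeeze described above can be sketched with a simple profitability model. The numbers below (rig hashrate, power draw, electricity rate, coin issuance, and coin price) are placeholder assumptions chosen only to show the shape of the trend: a miner’s revenue is roughly their share of the network hashrate times the value of coins issued per day, while the power bill stays fixed, so profit shrinks as more hashrate piles onto the network.

    # Illustrative only: placeholder numbers, not measurements or a forecast.
    def daily_profit(rig_hashrate, network_hashrate, coins_per_day, coin_price,
                     rig_watts, power_price_kwh):
        # Revenue: the rig's share of the network hashrate, times the coins
        # issued per day, times the coin's price.
        revenue = (rig_hashrate / network_hashrate) * coins_per_day * coin_price
        # Cost: the rig's power draw over 24 hours at the local electricity rate.
        power_cost = (rig_watts / 1000.0) * 24 * power_price_kwh
        return revenue - power_cost

    # A hypothetical six-card rig: 180 MH/s, ~900 W at the wall, $0.12/kWh power,
    # with coin issuance and price held constant while network hashrate grows.
    for network_hashrate in (20e12, 40e12, 80e12):
        profit = daily_profit(180e6, network_hashrate, 30_000, 300.0, 900, 0.12)
        print(f"Network at {network_hashrate/1e12:.0f} TH/s -> "
              f"daily profit of ${profit:,.2f}")

Under these assumptions, every doubling of network hashrate roughly halves the rig’s revenue while the electricity bill stays the same, which is why the “small” miners are the first to shut down.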

This has several dangers for AMD and NVIDIA. First, inventory that may be trying to “catch up” to cryptocurrency-driven sales rates could be caught in the line of fire, leaving both GPU vendors and their partners holding product they cannot sell. Second, the massive amount of hardware used for mining will show up on resale markets like eBay, Amazon, and enthusiast forums. Miners no longer interested in cryptocurrency will be competing to sell the RX 580s they have amassed as quickly as possible, dropping the value of the product significantly. If AMD or NVIDIA is rolling out a new generation of product at that time, new product sales will be directly impacted as slightly older hardware at a great value is suddenly available to that eager gaming audience.

As for a more direct financial risk, both companies’ stocks risk corrections when this mining bubble breaks down.

The disappointing part of this situation is that neither AMD nor NVIDIA can do anything to prevent the fallout from occurring. They could verbally request that miners leave products for gamers, but that would obviously stop nothing. A price hike would only hurt the gaming community more, as miners are clearly willing to invest in GPUs when they are used for profit. And trying to limit mining performance with firmware or driver changes would be thwarted by an audience of highly intelligent mining groups with re-flashes and workarounds.

The rumors of both vendors offering mining-specific hardware appear to be true: selling headless graphics cards (those without display connectors) is perfect for crypto mining while making the cards unusable for gaming. This also allows NVIDIA and AMD to use previously wasted GPUs that might have had a fault in the display engine, for example. But it would not be enough of a jump in inventory to free up standard cards for gamers. If anything, the mining community would simply swallow that supply as well.

The cryptocurrency market may not be a bubble, but the GPU-based mining operations that exist today certainly are. And the long-term impact on both AMD and NVIDIA will be a negative one. For today, all parties involved will enjoy high sell-through, increased ASPs, and happy investors. But previous instances of this trend make the writing on the wall clear: there will be fallout. The only questions are how much it will impact future products and which GPU vendor is capable of balancing current benefits against long-term detriment.

Retailers Play a Key Role in the Success of Smart Homes

Connected home products are grabbing floor space and early tech adopters’ attention. Sales are growing, and big brands are investing more and more. But moving from early tech adopters to the mainstream will not just be about lower prices. A better shopping experience is a must when consumers are still confused about what works with what and the overall benefits of a connected home.

Tech-savvy consumers know what they want. They have researched the product category, read the tech reviews, asked friends, and they are happy to purchase online. Tech-savvy buyers are also glad to go through any pain the setup of a device might bring. They see the pain as part of the process of being early tech users. It’s their duty to pave the way for the masses.

Mainstream consumers, on the other hand, want a pain-free setup and, most of all, a worry-free purchase experience. In our research into early connected home adoption, mainstream consumers expressed the need to have someone to go to in a store and the peace of mind that, if something went wrong, they could bring the device back to the store and talk to a human. In our focus groups, consumers seemed to prefer home-improvement stores to electronics stores, mainly because that is how they see these connected devices. A connected bulb is still a bulb!

Sadly though, if you go to a Home Depot or Lowe’s, you are left facing a bunch of connected products lumped together on a shelf with very little information on what they do, let alone the experience they can deliver.

It’s about the Experience, not the Specs

Whenever I play mystery shopper, I am faced with a high degree of ignorance on the topic of smart accessories. Most sales assistants know the specs and what is spelled out on the box, but unless you find someone who went through their own setup at home, it is rare to hear about an experience. Yet I find that when you can envision what a particular device can do for you, the sale is much easier.

Last week I moderated a panel on ambient computing at the Target Open House in San Francisco, and I was pleased to see how it had evolved since I first visited it after its grand opening over a year ago. The space gives potential buyers the ability to see products in a large room called the Playground, as well as walk through a living room, a bedroom, and a garage to experience some of them in a home context. Target has 500 stores across the country that have smart products displayed in context.

While, as you might expect, the experience is still quite showroom-like, it does attempt to deliver an experience. What I liked is how Target focused on guests’ personality and preferences rather than on the products. So, for instance, if you are a sports enthusiast or a music lover, they show how your living room can be optimized for your ultimate viewing or listening experience. I think this is interesting as it attempts to put the consumer first rather than the product. In other words, it is about helping you find the products that deliver what you want, instead of telling you about products and letting you discover how they fit into your life once you get them home.

A few months ago, I spent some time in a model home outfitted with HomeKit-compatible products. Needless to say, the experience was pretty compelling, as there is nothing more convincing than sitting on what could be your own sofa and opening the door to a guest, lowering the blinds to get the perfect light to watch TV, and setting the temperature in the room. Over time this will become the norm for buyers of new homes; I expect you will be able to pick a Siri, Google Assistant, Cortana, or Alexa home. For now, however, not every vendor in the market can have a real-life home to welcome potential buyers, so store experiences are important. Your average consumer is also not necessarily going to attend a home show where many of these solutions have been showcased thus far.

Interestingly, setting up experience rooms is how large TVs and projectors are displayed in electronics stores. If you walk into a Best Buy, you will quickly find the room with the cinema chairs and the projector, or the large-screen TV that disappears behind a portrait above the fireplace, or the speakers disguised as rocks for your patio. Showcasing video and audio solutions in a real-life setting has been done for years, yet showcasing a connected home is not something retailers are rushing into, and I think that is because the opportunity is more limited for them, at least for now. It might all boil down to a simple question: how many connected devices do I need to sell to equal the sale of a $7,000 video projector?

Smart Experience Showcases Can Help Retailers and Brands

In this early stage of the connected home, it is not just consumers who need help in buying. Brands too need help in selling: information on what message resonates with consumers, what features close the deal, what the job to be done is. Retailers can help with that information when they set up a smart environment. Target Open House, for instance, has sensors that collect information on foot traffic, product views and likes, and touches on digital screens. Information about sales and direct feedback is shared with the team of experts who work in the house, and it is used to decide what products should be displayed in the Playground area as well as what may make sense to sell at Target stores nationwide. Some of the insights are also shared with the companies whose products are on the shelf, to help them understand how guests are experiencing their products.

Big data is such a trend in tech right now that retailers should start talking more about what kind of data they are prepared to share with brands. This can be a competitive advantage in securing product exclusives and co-marketing spend.

A Platform for Smaller Brands

The connected home space is benefiting from Amazon, Google, and Microsoft opening up Alexa, Google Assistant, and Cortana, respectively, to be integrated in different ways into apps and devices. While apps have an easy go-to-market through app stores, most device manufacturers still need a distribution channel, whether online or in store. Kickstarter and Indiegogo can help startups get to market, but once there, getting noticed might be harder than they thought.

Target Open House offers startups a stage through its Garage space, where a dozen products at a time are showcased before they get to market. Some of the products that guests are particularly excited about, and that offer a somewhat unique proposition, are then moved to the Playground area and onto Target’s shelves.

Other stores should follow in Target’s footsteps and offer a stage for startups, especially local ones. A community feel always speaks to consumers; look at how popular farmers markets and farm-to-table restaurants are!

A Connected Home is not built on One Device Alone

Connected homes, in the true sense of home automation, are complicated concepts that will take years to develop fully. They are also going to be quite different from one home to another. Some consumers might like to be in a single-brand home; others will want to pick best-of-breed brands in the many areas they decide to connect. Experiencing that home will matter to everyone, but especially to those who will pick and mix. This is why experiencing, in the best way one can, how technology changes your home is important. While consumers today think about it in terms of home improvement, I believe home design will also play a key role in shaping the connected home. Maybe over time Pottery Barn, rather than Home Depot, is where consumers will turn.

The Power of Hidden Tech

The tech world is dominated by some of the most powerful brands in the world. Companies like Apple, Amazon, Google, Facebook, Netflix, Intel, Samsung and others are featured in the mainstream and business media as much as, if not more than, the industrial giants of old. In fact, they’ve become common household names.

They’ve earned their solid reputations through successful products, hard work, and their ability to deliver the kinds of financial results that have made them the darlings of the investment community too.

As impressive and powerful as this group may be, however, they certainly aren’t the only companies in tech doing important work. Though it’s easy to forget, there’s an enormous number of lesser-known tech players that are helping to enable the amazing tech-driven advances that we all enjoy.

At the core, there is an entire range of companies creating the semiconductor chips that sit at the heart not only of our connected devices, but also of the servers and other infrastructure that enable the cloud-based services to which we’ve all become accustomed. Companies that offer the designs and intellectual property used in those chips, most notably UK-based ARM, but also Synopsys and Imagination Technologies, play an extremely important, but often overlooked, role in driving the modern architectures behind everything from IoT to VR and AI.

Another often ignored link in the chain is test and measurement technologies. Lesser-known companies like National Instruments are helping drive the components, core technologies, and final products for everything from 5G radios to autonomous cars to industrial IoT and much more.

In semiconductor chips and other components, you have big names like Qualcomm and Nvidia, but there is an enormous range of lesser-known companies building key parts for all kinds of devices. From Texas Instruments (TI) and Renesas in automotive, to Silicon Labs for home networking, to South Korea-based LG Philips Display and Taiwan-based AUO for displays, to Synaptics for fingerprint readers, there’s a huge ecosystem of critical component suppliers.

Even some of the bigger names in semiconductors are branching off into new areas for which they aren’t commonly known. Later today, for example, AMD will be formally unveiling the details of its Epyc server CPU, the first credible threat to Intel’s dominance in the data center in about 10 years. Not to be outdone, Intel is making significant new investments in AI silicon with Nervana and Mobileye for connected cars. Qualcomm’s audio division—part of their little-known acquisition of Cambridge Silicon Radio (CSR) a few years back—just unveiled a complete suite of components and reference designs for smart speakers, like Amazon’s Echo.

In addition to hardware, there is, of course, a huge number of lesser-known software players. Companies like VMWare and Citrix continue to drive cloud-based computing and more efficient use of local data centers through server and application virtualization and other critical technologies. Application development and delivery in the enterprise and in the cloud is being enabled by Docker, a company that offers the ability to split applications into multiple pieces called containers that can be virtualized, replicated, and much more.

Vendors like Ubuntu are not only enabling user-friendly Linux-based desktops for developers and other enthusiasts, they are also offering powerful Microsoft OS alternatives for servers. In the case of software-defined storage and hyperconverged infrastructure (HCI) server appliances, companies like Nutanix, Pivot3, and others are enabling entirely new software-defined data centers that promise to revolutionize how computing power is created and delivered from public, private, and hybrid clouds.

Though they will likely never get the kind of recognition that the big brand tech players do, the products, technologies, and contributions of these and thousands more lesser-known tech companies play an incredibly critical role in the tech world. By driving many of the key behind-the-scenes developments, these types of companies provide the efficient, safe, and effective tech products and services that have enabled the bigger brands to become such an essential part of our daily lives.

Technology and Human Augmentation

One of the core premises of our research is to understand technology from a deeper human level. We too often get caught up in the technology itself and may lose sight of the basic human needs or desires technology is serving. With all the talk of Artificial Intelligence, Augmented Reality, and any number of other buzzwords, I sense the human angle is again being lost while we chase technological advancements for the sake of the technology rather than for the sake of the human.

To frame my perspective, I think it is helpful to use the idea of human augmentation as a basis for our understanding of how technology serves humans and will always do so. The core definition of augment is to make something greater by adding to it. Using this framework from a historical perspective, we can observe how nearly every human technological invention was designed to augment a fundamental weakness of human beings. Tools were invented to augment our hands so we can build faster, bigger, more complex things. Cars were invented to augment the limits on the distance humans can travel. Planes were invented to augment humans’ inability to fly. The telephone was invented to augment the limitations of human communication. Nearly every example of technological innovation we can think of had something to do with extending or making greater some aspect of a human limitation or weakness. This was true of historical innovation, and it will be true of future innovation as well. Everything we invent in the future will find a home augmenting some shortcoming of our human bodies. Technology, at its best, will extend human capabilities and allow us to do things we could not do before.

While we can analyze many different angles in which technology will augment our human abilities, there is one I think may be one of the more compelling things to augment—our memory.

Memory Augmentation
My family and I took a recent vacation to Maui. It is always nice to get out of the bubble of Silicon Valley for a more natural atmosphere to observe human behavior and technology. Going to a place where most people are on vacation provides an even deeper atmospheric layer to observe.

On vacation, I saw how critical and transformative the smartphone camera has been when it comes to memory augmentation. I’ve long thought that one of technology’s greatest values to humans is in assisting with the capture of memories. For sure, this has been the single driving motivation behind most people’s purchases of digital cameras and video cameras through the years. With most people in developed markets now owning a memory capture device, and comparable apps on their smartphones to enhance these memories, observing memory augmentation is a frequent activity.

It was fascinating to see the lengths people on vacation would go to with their phones, drones (I was surprised how many drones I saw), GoPros, waterproof smartphone cases, and more to capture and preserve their memories.

I saw people climb trees, brave cliffs, and hike extreme conditions with their phones to get a unique selfie. Fly their drones overhead as they jumped off waterfalls. Put their phones in waterproof cases to get pics of kids snorkeling. And, obviously, lots of uses for GoPros to capture unique photos and videos of undersea creatures and experiences.

As is often the case, most of the memories captured are meant to be shared on social media, but the point remains: these pervasive capture devices enable us to create and capture memories we would most likely forget, or have a hard time recalling, if left to our own memory.

I’ve argued before that the camera sensor is, and will remain for some time, one of the most important parts of our mobile computing capabilities. The desire to preserve or capture a unique memory will remain a deeply emotional and powerful motivator for humans.

Allowing technology to take this idea a step further, we have things like Apple Photos and Google Photos, which look over our memories and make short videos, not just augmenting but automating our memory creation process. As machine learning gets even better, these technologies will make creating memories from moments even easier.

As technology continues to augment more and more of our human capabilities, my hope is that the technological tool or process involved will fade so deeply into the background that it nearly disappears. This way we can get the most out of our time, whether at work, school, play, or vacation, and spend less time fiddling with technology. Ultimately, we will be able to do more with technology but also spend less time with the technology itself and more time doing the things we love.

Podcast: Microsoft Surface Laptop, Windows 10S, iPad Pro, Amazon and Whole Foods

This week’s Tech.pinions podcast features Carolina Milanesi, Tom Mainelli and Bob O’Donnell discussing Microsoft’s new Surface Laptop and Windows 10S, the Apple iPad Pro and some of Tim Cook’s comments in his Bloomberg interview, and Amazon’s purchase of Whole Foods.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Five Internet Companies That Need Better Consumer-Facing Customer Service

A year ago, when Google announced an aggressive push into the consumer hardware business, I wrote that the company needs a better consumer-facing customer service infrastructure. The column was published in Recode and received quite a bit of attention.

I’ve been thinking about some other consumer-oriented Internet companies and brands that also need to improve their customer service. My bias is toward actually being able to talk to a human being, in real-time, by phone or via live chat. Because sometimes, in certain situations, the miasma of e-mail, help forums, Zendesk and the like, just doesn’t cut it. A common approach of many Internet companies is to shift the burden of customer support to the customers themselves, which means that Mary from Kentucky might be telling you how to connect your bank to Mint.

Companies that do this well — Amazon, Apple, Netflix, and even some of the cellular operators such as T-Mobile — have higher levels of customer satisfaction and loyalty. Some have shown marked improvement (Dell, Microsoft), while others, such as some of the airlines, have started to use Twitter fairly effectively, especially during times of high call volume.

So, here are five companies in the B2C realm that need to make improvements in their customer service infrastructure.

Mint. If you read the help forums, the tens of millions of people who use this web-based personal finance management service have a love-hate relationship with the company. Mint has email and the Zendeskian support site, but there is no way to actually talk to a human being at Mint. The types of problems and questions that come up – the bank won’t connect, wacky duplicate entries, transactions that suddenly get lost – require an immediate, real-time discussion, not a multi-threaded email exchange that can stretch out over several days. Curiously, Mint is owned by Intuit (Quicken, TurboTax), so there’s no shortage of customer support infrastructure there. Perhaps they can dispatch some of the army of folks who staff the support lines at TurboTax during the “off-season”!

Uber and Lyft. If you ever have a problem with one of these popular ride-sharing services, you might be wistful for that cranky local taxi dispatcher you used to call when the cab didn’t show up. Because unless it’s a real emergency, there is basically no way to contact a human customer support person at Uber or Lyft. If one uses these services with some frequency, there will inevitably, at some point, be an issue with an incorrect fare, being charged for a canceled trip, etc. If there’s ever an actual dispute, web/email is the only recourse – some nameless person (maybe even a robot?) is judge and jury, and there’s little opportunity for any back and forth. There are some situations where one needs to be able to talk to a person to provide some background and context. Uber and Lyft should do better here.

Airbnb. Sensing a theme? The vaunted ‘sharing economy’ operates lean and mean when it comes to customer support. Now, it is possible to contact Airbnb when there’s an emergency. But if there are any other issues or questions, as a guest or a host, there are lots of hoops to jump through in order to talk to a person. Airbnb does have a number to call, but it is hard to find on their website. My personal experience has been that hold times can be very long, with customer support generally outside the U.S. and reps not adequately trained or equipped to deal with contextual situations. This isn’t like calling your cable company to do a modem reset; each situation is unique.

Airbnb handles some 500,000 stays daily…situations are bound to come up. Even though @AirbnbHelp can be very effective, when one is in a foreign place, it would be good to know that there’s the ability to call a person at Airbnb and get help in real time.

Another frustration is that Airbnb does not provide the ability for a guest to contact a host until a reservation is actually booked, other than through its internal messaging system. Again, there are situations and contexts during the ‘reservation inquiry phase’ where electronic, asynchronous communication just doesn’t cut it. Airbnb has said it withholds contact information due to privacy concerns, but I’d imagine another reason is that Airbnb doesn’t want the guest and host to ‘go around’ its system in order to avoid fees. If a host is willing to provide their phone number to a potential guest, shouldn’t they be able to?

LinkedIn. This is a bit more of a B2B site, but I think the issue of customer support still applies. LinkedIn does not offer any phone-based support, and chat support is uneven and unpredictable. E-mail support is through the dreaded “web form with drop-down options”, which, again, puts the onus on the customer and lacks the ability to provide context. Now, the issues might not be of the ‘urgent’ B2C variety as with Uber or Airbnb, but LinkedIn is a large and fairly complex site, and getting any help figuring out how to best use it or answering FAQs can be an unwieldy and time-consuming process.

Facebook. Whether it’s help using the site, posting an ad, or dealing with a more urgent issue such as customer privacy or an emergency-type situation, it is difficult, if not impossible, to talk to a human being at Facebook. The company has a very extensive Help Center, with literally hundreds of forms, and a very active Facebook community. And I understand that with some 2 billion users across many types of services, high-touch customer support might be a huge challenge to undertake. But there are a few types of situations, specifically with regard to privacy or other types of emergencies, where it would be good to know that one can get help from someone at Facebook, and quickly. I did a little research and found situations where, for example, a Facebook user was reporting unauthorized use of photos of their child, and they were told to ‘fill out a form’ by someone on the ‘Facebook Help Team’. Not very reassuring.

Now, folks might complain about the high cost of cellular or cable service, but at least you can call them for tech support…or argue about a bill!