Shipments and Market Share Matter, Even if Companies Say Otherwise

on May 24, 2019
Reading Time: 4 minutes

Apple recently stopped reporting quarterly hardware volumes in its earnings calls. Amazon has, famously, never reported its hardware numbers. Nor has Microsoft (for Surface). In fact, many companies don’t publicly state their hardware shipments, and more than a few suggest the unit number is less important than the revenue number. Obviously, revenue is hugely important, but the world still wants to know: How many did you ship?
Unit volumes are important because they help drive an industry-wide scorecard. We sum them up, and it tells us if the market is growing, flat, or declining. And it gives us important information about the status of the players inside that market and their relative position against the competition. Companies use the numbers to plan their businesses, their marketing, and even their employee bonuses.

Market research companies capture shipment volumes through different methods. At IDC, we use a very resource-intensive one that involves dozens of people across the world. It’s not a perfect system, and we occasionally make mistakes (when we do, we work to correct them). There’s been a fair amount of chatter about our numbers lately, and I thought it might be instructive to talk about our process.

Top-Down Process
IDC tracks new product shipments into the channel. Most of IDC’s tracker products publish quarterly, but the process of collection is a year-round job that we approach from the top down and the bottom up. Let’s start with the top down. Each quarter IDC reaches out to the companies we cover, and we ask for worldwide/regional/country guidance. Our worldwide team collects these numbers and distributes them to the dozens of regional and country analysts around the world. A remarkably large number of companies participate in this process, as they see the value in a third party collecting and disseminating these numbers. We look at these numbers as the starting point, not the finished product. As they say: Trust, but verify.

The process we use to verify is also the one we use to capture shipments for vendors that don’t guide us or report their numbers through earnings calls. This is a multi-pronged approach that includes our world-class supply-side team, our worldwide tracker team, and communication with IDC’s various analysts tracking component shipments.

IDC’s supply-side team resides in Taiwan, but they spend a great deal of time in China. They are in constant contact with component vendors and ODMs that are building the devices for the major vendors. Their relationships here have taken years to build and require frequent face-to-face meetings. The top-line numbers they collect, which include details such as which ODMs are building for which OEMs, deliver a critical fact-checking data point for our trackers and help us move closer to a market total that includes smaller players (Others) that we don’t track individually.
Meanwhile, the worldwide tracker team is acquiring numerous import/export records from countries around the world. These files are expensive, big, and messy, and our team spends weeks cleaning them to get at their valuable data, which can include details such as SKU-level data and even carrier-destination for smartphones. This data is then passed along to the local analysts.

Finally, IDC’s various component-tracking analysts are collecting their information about processors, storage, memory, and more. These inputs—which obviously lag shipments of finished products—represent a third top-down data point that we use to triangulate on an industry total.
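
For readers who like to see the mechanics, here is a minimal sketch of what “triangulating on an industry total” can look like when reduced to arithmetic. The sources, figures, and inverse-variance weighting are illustrative assumptions only; our actual reconciliation is an analyst-driven process, not a single formula.

```python
# Purely illustrative: reconciling independent top-down estimates into one
# market total via inverse-variance weighting. All numbers are made up.
estimates = {
    # source: (units in millions, assumed uncertainty as a std dev)
    "vendor_guidance":   (71.0, 1.0),
    "supply_side_odm":   (69.5, 2.0),
    "import_export":     (70.8, 1.5),
    "component_tracker": (73.0, 3.0),  # lags finished goods, so least precise
}

weights = {src: 1 / sd ** 2 for src, (_, sd) in estimates.items()}
total = sum(w * estimates[src][0] for src, w in weights.items()) / sum(weights.values())
print(f"Triangulated market total: {total:.1f}M units")

# A large spread between sources is a signal to dig deeper, not average away.
values = [units for units, _ in estimates.values()]
assert max(values) - min(values) < 5.0, "sources diverge; escalate to analysts"
```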

Bottom-Up Process
While the top-down processes are in motion, our regional- and country-level analysts are conducting a bottom-up approach. One of the key steps is to reach out to the regional contacts of the vendors to ask for guidance. These calls help both IDC and the vendors track down possible internal errors in shipment distribution.

In parallel, dozens of local analysts are also accessing localized distribution data. Access to this data varies widely by country. In some places it’s a deep well of important information, in other places it’s very basic, and in some places it’s simply not available.

Concurrently, the local analysts are having ongoing discussions with the channel. Like distribution data, the level of inputs here can vary widely. In some places, channel relationships drive a great deal of very detailed information. In other places, the channel plays it close to the vest, and the analyst is forced to do more basic checks. In the end, the channel piece is an important part of the overall process.

Bringing It All Together
The various top-down and bottom-up processes culminate with a mad dash to input data, cross-check that data across inputs, fix mistakes, make new ones, fix those, and then QA the finished product. All to publish, typically, about eight weeks after the end of the quarter. Two weeks later, the same teams update their market forecasts. Another monumental effort, driven by a whole different set of processes.

Is the process perfect? Far from it. Do we make mistakes? Yes, but we try to acknowledge them and correct them. Different firms use different methods, but we feel ours is a good one. Sometimes that means we diverge from the pack in terms of a company’s shipments in a given quarter. If you see us doing so, it’s because we feel our process—and the information we’ve collected—has led us to a different conclusion. I should note that this process is becoming increasingly important as the secondary market for products such as high-end smartphones heats up, and a few companies drive real revenue through the sales of refurbished phones. IDC attempts to track these units in our installed base, but we work to keep secondary phone shipments out of our shipment numbers.

If a company says revenues or margin matter more than shipments, that’s not an unreasonable position to take, especially in a slowing or declining market. However, you can bet that, behind the scenes, that company is still looking closely at shipment volumes and market share. In the end, markets need shipment data to track the health of their industry and the relative position of the players inside it.

Make Digital Transformation about Your Business, not about Millennials

on May 23, 2019
Reading Time: 4 minutes

Millennials might be where your digital transformation journey starts, and their imminent control of the workforce might even put some pressure on your timing. Ultimately though, digital transformation should come from a more profound desire to look at your business processes and make them better, more efficient, more user-friendly.

Every presentation I see about digital transformation talks about talent shortage, which drives a highly competitive employment market and a stronger need to retain talent when you find it.

But what about the current workforce? A recent Gallup study showed that 85% of employees are not engaged or are actively disengaged at work. If you are interested in knowing how much that costs in lost productivity, Gallup estimates a whopping $7 trillion! Eighteen percent of the employees are actively disengaged in their work and workplace, while 67% are “not engaged.” This means that the majority of the current workforce is indifferent to the organization they work in.

While the Gallup report goes on to talk about how performance reviews and better management can help change this, I would argue that digital transformation could alleviate, if not eradicate, such apathy at work.

Engagement Makes for Successful Consumer Brands…

Millennials are not the only employees who care about their job. They’re not the only employees who want to collaborate, feel rewarded for the work they do, and expect to have the right tool for the job. Gen Xers want many of the same things. And I would guess baby boomers did, too.

The big difference between millennials and Gen Zers and previous generations is that technology is not foreign to them. And this is not just about devices; it is about applications as well. Over the years, people have talked about the consumerization of IT in many different ways, but it is fascinating to me that the core of what a consumer business focuses on has never been brought to the enterprise. And that core is the drive for engagement.

If you talk to any consumer brand, engagement is what they strive for. If they have an engaged audience, they have an audience that will very likely be loyal and will generate revenue for them. The same can be true in an enterprise, where the final user of technology is indeed a consumer. So, why has that rarely been a focus in the enterprise? Maybe it is because someone’s job has never been seen as an engagement opportunity that can be turned on and off like a tap. But how can you have high productivity with disengagement? And if you don’t have productivity, how can you run a successful business and have loyal employees? Gallup clearly shows how much money disengagement will cost you, but the impact goes deeper than that.

Multiple factors drive disengagement, but lack of the right tools, lack of data, lack of understanding of the business imperatives, and work processes that get in the way rather than facilitate someone’s task are probably the worst offenders.

…So Why Is Consumerization Bad in the Enterprise?

Consumerization of IT has always had a somewhat derogatory connotation. For many IT managers, consumerization meant providing devices and applications that were not designed for enterprises and therefore not as capable or as sophisticated and certainly not as secure as the tools that an enterprise would choose.

The reality, though, is quite different.

When you look at critical devices that have been successful with consumers, such as smartphones, there is not a lot of difference between a smartphone I use for work and one I use personally. Long gone are the days when we carried two phones, one for work and one for personal use. Security has been baked in at an acceptable level in many smartphone models because what consumers do today requires the kind of security that enterprises also demand.

When it comes to apps, consumerization takes on a different meaning. It’s not just about security; it is about putting the user first and designing something that is above all user-friendly, because that user-friendliness will drive engagement. Design, however, has never been a priority for IT departments, which is why, at the Citrix Synergy event in Atlanta this week, it was fascinating for me to listen to how the new intelligent digital Workspace is delivering two essential components for driving successful digital transformation.

First, Workspace builds on your existing infrastructure but adds support for micro-apps to a streamlined landing page. This will allow enterprises to look at current workflows and all the applications used to complete a task, intelligently streamline those processes, and identify predictable steps in an efficient workflow. You can see how, when you add data-driven intelligence to this concept, enterprises could deliver to employees a set of workflows that are, in reality, best practices catered to their specific needs. Imagine the impact that this approach could have on a new-employee onboarding program, for instance. This idea takes a page out of the consumer book, where more and more services and apps are using AI to deliver a personalized experience, something that employees will come to expect in their work environment too.

The second key component Citrix is able to deliver is Citrix Analytics: specifically, being able to measure the performance of an app and the infrastructure around it, as well as score employees’ user experience. This is a critical move in shifting the way enterprises should think about the return on investment of initiatives aimed at improving employees’ ToMo (total motivation) and business efficiencies. More often than not, enterprises want to measure return on investment by measuring productivity improvements based on old parameters that are a misfit for the new tools. Mobile is an excellent example of this struggle. When the smartphone era started, IT struggled to measure the return on investment that a smartphone deployment would have on employees. Soft targets, such as employee satisfaction coming from being able to complete a task while away on business or being able to start the day on the long commute to the office, were hard to measure. It took years before enterprises began to see the value of the higher engagement that mobile offered: higher customer satisfaction, higher employee satisfaction driven by flexible hours and remote working, and the list goes on. Shifting the burden of the return on investment onto the tool rather than the employee will help ensure that enterprises do not just pay lip service to transformation but really focus on improving workflows.

Employees are consumers of technology as well as customers of the IT department. The sooner enterprises start seeing them in that light, the easier it will be to put them first, driving their engagement at work and making them the best evangelists for their brand.

The Evolution of the PC

on May 22, 2019
Reading Time: 4 minutes

The PC form factor is not dead. It has proven quite resilient. Research study after research study we have done at Creative Strategies has demonstrated that the PC is the device the majority of the market comes back to for many of their primary workflows. This is not to say work can’t be done on a smartphone or a tablet, but the PC is still the central hub of work for the masses. This resiliency of the PC form factor is leading to a number of new innovations and evolutions as consumers look for new hardware that fits their central workflows.

There have not been many considerable leaps in PC innovation in the past ten years. The PC industry has tried to make the PC more tablet-like, but the next frontier will be making the PC more like the smartphone. Interestingly, in some recent research, we asked consumers which features of their smartphones they wish they had on their PC/Mac.

I highlighted things like instant-on, all-day battery life, face authentication, and connectivity for a few specific reasons. Not only were these answer choices the top ones consumers would like to see on their PC and Mac, but they also speak to different needs and wants of the consumer. Face authentication, for example, speaks to the increased security consumers want to see on their PC, which is now becoming common on all modern smartphones. Instant-on has been a function of our smartphones for years, while most consumers still need to wait seconds, sometimes minutes, for their PC to boot up and be ready to use. A smartphone generally gets all-day battery life, while most consumers experience less than 10-12 hours of battery life on their PCs.

Connectivity was the feature I was a bit surprised ranked as high as it did in our research. While I have personally been quite bullish on having a continuous connection to the Internet in the PC form factor, demand has felt moderate so far. Talk to anyone with an always-connected iPad, and they will sing the praises of the convenience of never having to worry about an Internet connection. Having used a connected iPad for as long as they have existed, I continually find that my workflows change when I’m mobile, and I’ll choose to do a work task on the iPad instead of my phone simply because I know I have a connection. This was something we were curious to test in our study, so we asked consumers which device they would choose, their PC/Mac or their smartphone, to do certain tasks.

In an era where we debate how many jobs the smartphone takes from the PC, the reality is the PC is still better at many core work-related tasks. You see this show up in our research where, when given a choice, things like email, working on documents, and even watching videos are all everyday tasks consumers prefer to do on PCs. Again, all of these are possible on smartphones, but the PC is the right tool for the job.

PC Evolution
The evolution of the PC is happening because it is the right tool for many jobs. This is why one of the most interesting things happening in the PC industry is the rich segmentation we see developing. There is no one-size-fits-all PC form factor design but rather a wide range of notebook and desktop designs to fit the needs of changing market demands. This is a reason I’m glad Intel is seeing competition from Qualcomm, and that the Arm architecture is finally becoming relevant to PC designs that focus on highly mobile consumers.

We have written about and analyzed Qualcomm’s Always On, Always Connected PCs quite a bit here on Tech.pinions, and with two years of product designs under their belts, each generation has seen improvements. Continual evolution in the PC sector demands competition among the underlying PC architectures that power them. Intel’s x86 architecture has dominated the PC industry, but Intel has always struggled to bring extremely low-power products to market. Low power, better battery life, instant-on, etc., are the staple features of smartphones powered by Arm, and the Arm architecture is well positioned to bring these features to new notebook designs over the coming years.

Many of our writers here at Tech.pinions have had a chance to work with products from Lenovo and HP running Arm/Qualcomm solutions, and we have all been impressed with the incredible battery life they offer. This gives me hope that consumers will see more of the value of devices that have true all-day battery life, are always connected, and have zero wait time to start being productive.

With Computex around the corner, expect to see many new designs of PCs that challenge the conventional wisdom of how a PC looks and functions. PC hardware makers are working to bring new innovations to market to fit the need of a dynamic and quickly changing PC category. The big trend you will see is how many of these new PC designs are starting to include many features that have been the standard in smartphones.

Evolution and innovation in the PC category require a new architectural approach with regard to processors. Intel knows this, and Arm and its partners like Qualcomm know this; healthy competition beyond x86 is good for the industry and consumers.

If you would like to see the full report/white paper we co-published with Arm, here is the direct link to the data and my commentary on the results of the research.

Citrix Advances the Intelligent Workspace

on May 21, 2019
Reading Time: 3 minutes

One of the frustrating conundrums of modern work is that even though there has never been a better, nor broader, range of tools available to get our jobs done, it’s still tough for many people to keep up. Thanks to a host of modern, cloud-based SaaS (software as a service) applications such as Microsoft Office 365, Google G Suite, Salesforce, Slack, Workday, and others, as well as an impressive variety of devices upon which we can complete our tasks, it really is a great time to be an active member of the workforce. In theory at least.

In reality, this abundance of different cloud services, along with the inevitable group of custom and/or older applications that most organizations run, often makes employees feel significantly less productive than they believe they could, or should, be. From a dizzying array of application choices with overlapping functionality, to complicated, multi-step procedures, to overwhelming amounts of incoming information, too many workers spend a significant part of their days simply maintaining their communications and basic tasks. As a result, little time is left to do the productive, and typically more satisfying, work people were actually hired to do.

Numerous companies have attempted to tackle this growing challenge in different ways over the years and at this year’s Synergy event in Atlanta, Citrix unveiled the latest additions to their answer: Citrix Workspace intelligent experience. If you haven’t seen it before, Workspace essentially provides an organized view of your available applications, documents, messaging, tasks and more. Importantly, it provides a consistent and synchronized view of all this information across devices and platforms, meaning you get the same basic interface whether you’re using a Windows PC, a Mac, a Chromebook, an iOS-based phone or tablet, or an Android-based phone or tablet. Behind the scenes, Citrix-powered infrastructure does the hard work of virtualizing applications to make them function consistently across platforms, screen sizes, network connection speeds, and more.

The new interface of Citrix Workspace provides something akin to a News Feed that organizes your critical tasks and lets you easily see what you need to get done. More importantly, thanks to the company’s acquisition of Sapho last year, the new intelligent version of Workspace incorporates a number of macro-like “micro apps” that can work within, and even across, different applications to accomplish common tasks. So, for example, if you need to submit an expense report or IT help request, these micro apps can turn tedious, multi-step procedures into a single click or two. Citrix is also providing a simple drag-and-drop tool for companies to create their own custom workflow micro apps across (or within) both their legacy and modern applications.
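
To make the micro-app idea concrete, here is a rough sketch of what one of these workflows amounts to under the hood: a small, task-shaped action composed from calls into larger systems of record. The endpoints, field names, and approve_expense function are hypothetical stand-ins, not Citrix’s actual API; in practice, customers would assemble this visually with the drag-and-drop tool rather than in code.

```python
# Illustrative sketch of a "micro app": one task-shaped workflow composed
# from calls into bigger systems. All URLs and fields are hypothetical.
import requests

EXPENSE_API = "https://erp.example.com/api/expenses"  # assumed ERP endpoint
NOTIFY_API = "https://chat.example.com/api/messages"  # assumed chat endpoint

def approve_expense(report_id: str, manager_token: str) -> None:
    """Collapse a multi-step ERP approval into a single action surfaced in
    the employee's feed: approve, then confirm back to the requester."""
    r = requests.post(
        f"{EXPENSE_API}/{report_id}/approve",
        headers={"Authorization": f"Bearer {manager_token}"},
        timeout=10,
    )
    r.raise_for_status()
    requester = r.json()["requester"]  # assumed response field
    requests.post(
        NOTIFY_API,
        json={"to": requester, "text": f"Expense report {report_id} approved."},
        timeout=10,
    ).raise_for_status()
```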

Citrix also announced extended collaborations with Microsoft and a new set of services built in conjunction with Google. On the Microsoft side, Citrix debuted support for Citrix-managed desktops and Windows Virtual Desktops on Azure, among other things. Both of these provide the kind of incremental, but still important, extensions to the overall Citrix ecosystem that make their software and services more compatible and more seamlessly integrated into a wider variety of customer environments. The collaboration with Google brings significantly more integration with Google’s products and services into the Citrix environment, including the ability to run Workspace on Google Cloud. In addition, for Workspace, Citrix announced support for G Suite and Google Identity services (allowing for single, authenticated log-ins). Other capabilities now supported on Google’s Cloud Platform include the ability to run Machine Creation Services for provisioning Citrix-based VDI (virtual desktop infrastructure) workloads.

All of these efforts point to a general strategy of building tools that can let companies, and their employees, rein in the chaos of modern work environments and function in an easier and more productive manner where they can get real work done—regardless of the types of environments and applications they currently run. Looking ahead, it’s easy to see how these kinds of capabilities can be further enhanced with AI, as procedures and workflows are learned and automated, as well as voice-based input, to drive more natural interactions. Ultimately, the goal is to drive a true work assistant-powered environment that can ease the tedium of necessary, but time-consuming, tasks and enable people to fully leverage the impressive range of applications and devices they now have at their fingertips.

Is There a Market for Foldable PCs?

on May 20, 2019
Reading Time: 3 minutes

Last week, I attended a Lenovo customer event in Orlando, FL where they introduced the first foldable PC. I had a chance to play with it, and it is a very solid product for what is deemed a prototype.

It was developed by Lenovo’s Yamato, Japan team, the group that created the stellar ThinkPad line of laptops. ThinkPads are extremely well made and a top seller for Lenovo. I have visited the Yamato lab and am very aware of the group’s skills and the quality of the products that come from it. Prototypes are normally mere shells of what they will eventually be, but this one looked close enough to ship, which is to say it was well made and very sturdy already.

There are some things that still need to be done at the hardware level before Lenovo ships its foldable PC, and the kind of software needed to really make this new PC design sing and dance is still a ways out. Anytime we get a breakthrough product, you can expect prices to be very high at first. This will be a premium product: executive jewelry for tech big shots, focused on highly mobile pros who want something that is lightweight and highly portable and, when opened, gives them a 13-inch screen.

This is the first really new design in laptops since 2-in-1s were introduced 10 years ago. One cautionary note is that even today, 2-in-1s are not big sellers and never became the big hit that Intel and Microsoft hoped they would be.

A foldable PC may hit a nerve with some highly mobile workers who can afford one, but if history is our guide, these new form factors may remain niche products rather than ever gaining mainstream mobile computing status.

That said, the Lenovo foldable PC is so well designed that, as the first major brand to bring one to the market, Lenovo could have a hit in two areas. First, it will be able to ride this great design toward securing its position as one of the most innovative companies in the PC business.

Second, if they put strong marketing behind it, their foldable PC could help set the tone for other PC makers to follow suit and create innovative designs of their own that might help popularize this new PC form factor.

Notebook and laptop clamshell designs have been pretty static since they were introduced in 1985. Laptops have become thinner, lighter, and more powerful, but the clamshell design itself has stayed pretty steady since it debuted.

Microsoft’s Surface portables brought the tablet-PC combination to the market, shook up notebook designs, and helped expand the concept of 2-in-1s that are now made by all PC vendors. While not big sellers, they did shake up the laptop market with a new form factor, and some people swear by them and use products like the Surface as their primary laptop computer.

Now a foldable PC has been introduced into the portable computing genre, and Lenovo and the other PC vendors working on similar products hope it can gain traction as a new portable computing design that hits a nerve for some users. While they are excited about these new foldable PCs, they know full well that they will never be as popular as traditional clamshells.

As one who has tracked the PC market since its inception, I personally love the various experimentation that has been done on laptops over the years. We have had 3D-based laptops, laptops with side bars that hold speakers, and many others that tried to push the design of laptops in new directions.

Yet, consumers continue to vote for the traditional clamshell designs, and have made them the workhorse for productivity, education and for all types of consumer use. So, any new form factor trying to break the hold clamshells have on mainstream users will have an uphill battle.

Podcast: Lenovo Foldable PC, Samsung and Verizon 5G, Microsoft Minecraft Earth AR

on May 18, 2019
Reading Time: 1 minute

This week’s Tech.pinions podcast features Ben Bajarin, Carolina Milanesi and Bob O’Donnell discussing the release of Lenovo’s ThinkPad X1 foldable PC, analyzing the impact of the US debut of 5G mobile service on Verizon with the Samsung Galaxy S10 5G smartphone, and chatting about Microsoft’s upcoming mobile version of Minecraft in augmented reality.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

An End to ‘Wi-Fi Purgatory’ Is In Sight

on May 16, 2019
Reading Time: 3 minutes

Little has been written or publicly discussed about one of the most vexing consumer frustrations in wireless. I call it ‘Wi-Fi Purgatory’. It’s what happens when your phone hangs on to or attaches to Wi-Fi even when the signal is poor, and doesn’t ‘switch’ over to a good cellular signal, which results in a very slow or non-existent connection. The phone just stays stuck on Wi-Fi, rendering it useless for data and voice connectivity. Even worse, battery drain accelerates as the phone strives to connect. The only solution seems to be to manually disable the Wi-Fi connection in order to be able to use the cellular network. And of course, this means the user also has to remember to re-enable Wi-Fi the next time they’re in the home or office.

This Wi-Fi Purgatory issue seems to be more prevalent among iPhone owners than Android users. And one of the big culprits is ‘Cable Wi-Fi’ — the millions of Xfinity/Spectrum/Cox hotspots that are not only in the home but spread around all sorts of indoor and outdoor locations. The cable companies have known about this problem for years and have done very little about it.

The main cause of the issue is that the current Wi-Fi architecture uses a scheme called ‘listen before talk’, which leads to inefficiencies and latency when moving from access point (AP) to AP, or between LTE and Wi-Fi. The phone sort of ‘half connects’ to the network, resulting in this ‘purgatory’ issue, which to the user looks like hanging on to a bad AP or not properly switching to cellular.

There is potential for this problem to get even worse, with the proliferation of small cells, rollout of 5G in the mmWave bands, and new 802.11ax (Wi-Fi 6) APs that will have even greater range. In theory, it will be even more challenging for a mobile device to distinguish between licensed and unlicensed services as the distinction between Wi-Fi and cellular narrows. After all, mmWave is more like a “super hot-spot” from the outside in, while more sophisticated APs and services such as CBRS have many of the same characteristics, but from the inside out.

Fortunately, there is help on the way. The architecture of Wi-Fi 6, which is scheduled to be rolled out over the next year (see my piece in Fierce Wireless on Wi-Fi evolution), uses the same MAC and physical layer approach as cellular. There is also a feature called deterministic scheduling, which allows the radio to be used on a particular schedule, rather than the randomized schedule used by previous generations of Wi-Fi. 3GPP-based cellular technologies are also deterministic, which means that Wi-Fi and cellular will be more harmonized. Scheduled, rather than randomized, access also allows for lower latency and a greater density of devices. This combination of capabilities and improvements should help with the ‘purgatory’ issue.
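
A toy simulation helps illustrate why scheduled access matters. In the sketch below, a handful of radios share a channel either by picking transmission slots at random (a crude stand-in for ‘listen before talk’ contention) or by being assigned fixed slots deterministically. The parameters are arbitrary, and real 802.11 and 3GPP behavior is far more involved.

```python
# Toy model: N radios sharing SLOTS transmission slots. Random access wastes
# time on collisions and retries; deterministic scheduling wastes none.
import random

N, SLOTS, TRIALS = 8, 16, 10_000

def contention_delay() -> int:
    """Count retries when each radio picks a slot at random: any pick that
    collides with an earlier radio forces a backoff and a new pick."""
    retries, taken = 0, set()
    for _ in range(N):
        pick = random.randrange(SLOTS)
        while pick in taken:          # collision -> back off and try again
            retries += 1
            pick = random.randrange(SLOTS)
        taken.add(pick)
    return retries

avg = sum(contention_delay() for _ in range(TRIALS)) / TRIALS
print(f"avg retries per cycle, random access:       {avg:.2f}")
print("avg retries per cycle, deterministic slots: 0.00")
```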

The industry also needs to play a role here. One way to do so is to embrace approaches such as Passpoint, an industry solution that streamlines network access in Wi-Fi hotspots and eliminates the need for users to find and authenticate a network each time they connect. Another key feature of Passpoint is that service providers can set policies that optimize whether users connect to Wi-Fi or LTE/5G. Passpoint has been around since 2012, but it has gained renewed momentum lately, for example being part of AT&T’s recent agreement with Boingo.

Three things need to happen in order for there to be visible progress on this issue. First, we need a critical mass of Wi-Fi 6 infrastructure and CPE to be deployed, which will take a couple of years. Second, more service providers — across the cellular, cable, and Wi-Fi ecosystem — need to adopt industry solutions such as Passpoint and other aspects of Hotspot 2.0. Third, we need both hardware and software upgrades at more venues, such as airports. Cable companies, which operate tens of millions of hotspots between them, need to upgrade their ‘Cable Wi-Fi’ infrastructure, and also, simply, pay some attention to this issue.

There’s also work required in the iOS and Android camps, both natively on devices and in software. I haven’t seen a lot of tuning or adjustments with regard to the order in which SSIDs are prioritized on devices. With all the improvements coming down the pike, the ‘old order’ of how this was done will require a re-look.

We’re at a fork in the road on this Wi-Fi Purgatory issue. With the narrowing of the delta between fixed and wireless networks, and between licensed and unlicensed, complexity will only increase. Which means things could get worse. Fortunately, key vendors and organizations such as the Wi-Fi Alliance have been developing solutions to address the issue. Service providers, device OEMs, and venue owners must also do their part.

Are We Paranoid about Digital Assistants in the Home?

on May 15, 2019
Reading Time: 5 minutes

Over the past few weeks, there have been different stories around digital assistants and eavesdropping, and about Alexa in particular. The microphones and cameras that are proliferating in our homes have been a concern since the inception of the connected home. Our concern with cameras and microphones does not start there, though. People have been paranoid about them for years and went through the practice of putting tape over their cameras and silencing their microphones. So much so that in 2019, the camera’s physical shutter has become an applauded feature of many enterprise notebooks.

When it comes to the home, our level of concern grows. Trying to think logically as to why that is the case might not be the best way to find an answer.

It Is not about Technology

I think that Amazon has been quite mindful, from the very beginning, about the level of trust that putting a device like an Echo in the home requires. Alexa’s blue lights were undoubtedly designed to increase our comfort level by signaling two quite subtle things: Alexa was hearing us, and Alexa was listening to us. Although they seem the same, these are two separate things: Alexa must hear your voice all the time, but she listens to what you say, to act on your command, only after she hears the wake word.

What I think is less clear to most consumers is how Alexa and the cloud communicate, and the link between what you say, what Alexa hears, and what Alexa then hands over to the cloud to be processed. Lacking an answer, people continue to believe that Alexa listens all the time, and that, therefore, Amazon does too!
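
As a purely illustrative sketch of that link, the flow looks roughly like this: audio is “heard” into a short local buffer that is constantly overwritten, and nothing leaves the device until a small on-device model spots the wake word. Every function and parameter here is a hypothetical stand-in; Amazon’s actual pipeline is not public.

```python
# Illustrative only: the "hear locally, stream after the wake word" pattern.
from collections import deque

ring = deque(maxlen=50)  # ~1 second of audio frames, constantly overwritten
awake = False

def detect_wakeword(frames):  # stand-in: a real device runs a tiny model here
    return any(f == "alexa" for f in frames)

def end_of_utterance(frame):  # stand-in for voice-activity detection
    return frame == "<silence>"

def stream_to_cloud(frame):   # stand-in for the encrypted upload
    print("uploading:", frame)

def on_audio_frame(frame):
    """Called for every captured frame. 'Hearing' is local and transient;
    'listening' (cloud upload) starts only after the wake word."""
    global awake
    ring.append(frame)
    if not awake:
        awake = detect_wakeword(ring)  # light the blue ring when True
    else:
        stream_to_cloud(frame)         # only now does audio leave the home
        if end_of_utterance(frame):
            awake = False              # stop streaming, resume overwriting

for f in ["tv noise", "alexa", "set", "a", "timer", "<silence>", "tv noise"]:
    on_audio_frame(f)
```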

This renewed attention to what digital assistants can hear in our homes was first driven by news that a group of Amazon employees and contractors is tasked with listening to Alexa’s recordings to transcribe them, annotate them, and then feed them into the model to make Alexa smarter. Amazon responded to the article by explaining that only a small number of recordings are annotated and that employees do not have direct access to information that can identify the user or their account. All information is treated with high confidentiality, with the use of multi-factor authentication and encryption. Amazon also gives users the option not to have their recordings used to improve Alexa.

Using humans is not a practice limited to Amazon; both Apple and Google use similar methods. Apple reviews recordings without personally identifiable information and stores them for six months tied to a random identifier. Google accesses some audio from its assistant, but it is not associated with any personally identifiable information, the audio is distorted, and the data is randomized.

Last week, Geoffrey Fowler wrote an article for the Washington Post discussing the findings of his recordings investigation. Every user can access those recordings through the Alexa app under the privacy settings, so I went to take a look at mine. I found that there were three categories of recordings:

  • Clear commands: Alexa set a timer, Alexa stop, Alexa play…
  • Unknown: where I usually found TV recordings that were mostly unintelligible
  • Audio not intended for Alexa: by far the most interesting category, as it involved random conversations around the house

This last category is the one that I am sure most readers who use Echo devices would be creeped out about. Yet, in the review of my entire history, I did not find any snippets longer than a few seconds, certainly not enough to be meaningful. The most exciting revelation of this exercise was how many timers a human being can actually set thanks to Alexa, with the implicit question of how this task was performed before we had Alexa!

It Might Be Meaningless Info, but It Is Still Info

In my case, the information was all meaningless, but to be honest, I do not think it is the value of the information that should determine who gets to listen to it or gets to use it. It is fascinating, however, that in a study we ran at Creative Strategies in 2017, only 5% of the 800 American respondents said that privacy mattered in connected home devices. This compared to 60% who said it mattered in smartphones and 30% who said PCs. If you think about it logically, there is, on average, much more sensitive information on a smartphone or PC, which fully explains our results.

So why the concern now? Because it is our home, and we think our home is private by default; we even say “in the privacy of my own home.” Feeling that this is no longer the case because of technology is one compromise too many for some consumers. I think the same can be said about the concerns around the human component. Remember when you had to make calls through an operator who could listen to all your conversations? I don’t, because I was not alive then, but plenty of old movies have scenes like that. And what about every time you take a Lyft or an Uber, and the driver knows your name and address? Or every time you have an inquiry with a service provider or a government official and have to share your social security number? All of these are exchanges of sensitive information. The difference is that all those exchanges are informed.

Transparency and Controls

At the end of the day, as is the case with all the data we generate, be it from social media, the smart home, the smart office, smartphones, or PCs, we want to be aware that information is being collected. On top of that, we want to be able to decide whether we are happy with it, and we also want to be able to change our minds about it. And finally, we want to make sure our data is secure and not misused.

Most brands give a good level of control to users. You can decide not to store your recordings and not to help improve their assistant. I also believe that as smart speaker, and smart home, penetration grows and moves beyond early adopters, a more precise explanation of what happens when you initiate a request with Alexa, the most used assistant in the home, would be beneficial. I know some people might want to suggest Alexa could add more warnings so people are more aware of her presence, but there is a delicate balance Alexa needs to strike between making herself known and becoming a nuisance. I do wonder if Alexa having an “incognito” mode, like the one Google announced last week for Search and Maps, could help her case.

I am sure we will see more experimentation in this area. We need to remember that for Amazon, a higher level of trust means higher engagement with Alexa, which in turn drives more revenue. So even if you are more skeptical than I am about Amazon’s intent, I think you would agree with me that it does not make business sense not to strive for a high level of trust.

Lastly, I also cannot help but think that what happened with Facebook has impacted consumers’ trust across the board and has put other brands under the microscope. The reality is that Amazon, Google, Apple and every other brand that “sells” you a smart device or a smart solution will need data about you to create such a thing. Data is what powers AI, so you first must decide if your data is a currency you are willing to spend to live a smart life and then you must decide who is worthy of it.

Next Major Step in AI: On-Device Google Assistant

on May 14, 2019
Reading Time: 4 minutes

The ability to have a smartphone respond to things you say has captivated people since the first demos of Siri on an iPhone over 7 years ago. Even the thought of an intelligent response to a spoken request was so science fiction-like that people were willing to forgive some pretty high levels of inaccuracy—at least for a little while.

Thankfully, things progressed on the voice-based computing and personal assistant front with the successful launch of Amazon’s Alexa-powered Echo smart speakers, and the Google Assistant found on Android devices, as well as Google (now Nest) Home smart speakers. All of a sudden, devices were accurately responding to our simple commands and providing us with an entirely new way of interacting with both our devices and the vast troves of information available on the web.

The accuracy of those improved digital assistants came with a hidden cost, however, as the recent revelations of recordings made by Amazon Alexa-based devices have laid bare. Our personal information, or even complete conversations from within the privacy of our homes, were being uploaded to the cloud for other systems, or even people, to analyze, interpret, and respond to. Essentially, responding to our requests or properly interpreting what we meant required the enormous computing resources of cloud-based data centers, full of powerful servers running large, complicated neural network models.

Different companies used different resources for different reasons, but regardless, in order to get access to the power of voice-based digital assistants, you had to be willing to give up some degree of privacy, no matter which one you used. It was a classic trade-off of convenience versus confidentiality. Until now.

As Google demonstrated at their recent I/O developer conference, they now have the ability to run the Google Assistant almost entirely on the smartphone itself. The implications of this are enormous, not just from a privacy perspective (although that’s certainly huge), but from a performance and responsiveness angle as well. While connections to LTE networks and the cloud are certainly fast, they can’t compete with local computing resources. As a result, Google reported up to a 10x gain in responsiveness to spoken commands.

In the real world, that not only translates to faster answers, but to a significantly more intuitive means of interacting with the assistant that more closely mimics what it’s like to speak with another human being. Plus, the ability to run natural language recognition models locally on the smartphone opens up the possibility of longer multi-part conversations. Instead of consisting of awkward silences and stilted responses, as they typically do now, these multi-turn conversations can take on a more natural, real-time flow. While this may sound subtle, the difference in real-world experience literally shifts from something you have to endure to something you enjoy doing, and that can translate to significant increases in usage and improvements in overall engagement.

In addition, as hinted at earlier, the impact on privacy can be profound. Instead of having to upload your verbal input to the cloud, it can be analyzed, interpreted, and reacted to on the device, keeping your personal data private, as it should be. As Google pointed out, they are using a technique called federated learning that takes some of your data and sends it to the cloud in an anonymized form in order to be combined with others’ data and improve the accuracy of its models. Once those models are improved, they can then be sent back down to the local device, so that the overall accuracy and effectiveness of the on-device AI will improve over time.
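
For the technically curious, here is a minimal sketch of the federated averaging idea behind that technique: each device improves the model on its own private data, and only the updated weights, never the raw input, are sent back to be averaged. The toy linear model and all names are illustrative assumptions; Google’s production system adds anonymization, secure aggregation, and much more.

```python
# Minimal federated averaging (FedAvg) sketch on a toy linear model.
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """One on-device step: improve the model on private data. Only the
    updated weights ever leave the device, not the raw samples."""
    x, y = local_data
    grad = x.T @ (x @ global_weights - y) / len(y)  # squared-error gradient
    return global_weights - lr * grad

def federated_round(global_weights, devices):
    """Server step: average the weights returned by participating devices."""
    return np.mean([local_update(global_weights, d) for d in devices], axis=0)

# Three devices, each holding private data the server never sees.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    x = rng.normal(size=(20, 2))
    devices.append((x, x @ true_w + rng.normal(scale=0.1, size=20)))

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, devices)
print(w)  # approaches [2.0, -1.0] without centralizing any raw data
```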

Given what a huge improvement this is over cloud-based assistants, it’s more than reasonable to wonder why it didn’t get done before. The main reason is that the algorithms and datasets necessary to do this work used to be enormous and could only run with the large amounts of computing infrastructure available in the cloud. In addition, in order to create its models in the first place, Google needed a large body of data to build models that can accurately respond to people’s requests. Recently, however, Google has been able to shrink its models down to a size that can run comfortably even on lower-end Android devices with relatively limited storage.
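
One standard technique for this kind of shrinking is post-training quantization, which stores weights as 8-bit integers instead of 32-bit floats for roughly a 4x size reduction. The snippet below shows the generic TensorFlow Lite version of the idea; the "my_saved_model" path is a placeholder, and Google has not detailed the exact compression used for the on-device Assistant.

```python
# Post-training quantization with the standard TensorFlow Lite converter.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
tflite_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)  # compact model, ready for on-device inference
```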

On the smartphone hardware side, not only have we seen the continued Moore’s Law-driven increases in computing power that we’ve enjoyed on computing devices for nearly 50 years, but companies like Qualcomm have brought AI-specific accelerator hardware into a larger body of mainstream smartphones. Inside most of the company’s Snapdragon series of chips is the little-known Hexagon DSP (digital signal processor), a component that is ideally suited to run the kinds of AI-based models necessary to enable on-device voice assistants (as well as computational photography and other cool computer vision-based applications). Qualcomm has worked alongside Google to develop a number of software hooks to neural networks via the Android Neural Networks API (NNAPI), which allows these models to run faster and with more power efficiency on devices that include the necessary hardware. (To be clear, AI algorithms can and do run on other hardware inside smartphones—including both the CPU and GPU—but they run more efficiently on devices that have the extra hardware capabilities.)

The net-net of all these developments is a decidedly large step forward in consumer-facing AI. In fact, Google is calling this Assistant 2.0 and is expected to make it available this fall with the release of the upcoming Q version of Android. It will incorporate not just the voice-based enhancements, but computer vision AI applications, via the smartphone camera and Google Lens, that can be done on device as well.

Even with these important advances, many people may view this next generation assistant as a more subtle improvement than the technologies might suggest. The reality is that many of the steps necessary to take us from the frustrating, early days of voice-based digital assistants, to the truly useful, contextually intelligent helpers that we’re all still hoping for are going to be difficult to notice on their own, or even in simple combinations. Achieving the science fiction-style interactions that the first voice-based command tools seemed to imply is going to be a long, difficult path. As with the growth of children, the day-to-day changes are easy to miss, but after a few years, the advancements are, and will be, unmistakable.

More Tariffs are Bound to Impact Tech in Multiple New Ways

on May 13, 2019
Reading Time: 4 minutes

Last week, President Trump enacted Level 3 tariffs on goods made in China, to the tune of $200 billion. Here is the list of what will be charged in the new tariffs.

The list includes a huge number of food items as well as tariffs on hundreds of materials like zinc oxide, nickel ore, titanium ores, silver ores, some types of silicon, and other materials that go into all types of toys and tech products.

There has been much talk about how this round of tariffs would impact companies like Apple, Dell, HP, Lenovo, and other major tech companies that create millions of smartphones, laptops, printers, etc.

While tariffs on some of the components used in these tech products could impact their cost, I am told by ODMs that, for the most part, this round of tariffs will have minimal impact on these products. One exception is servers. There are some things on this list that could add cost to servers created in China, but at the moment, it is still too hard to determine how much this new round of tariffs will truly impact server costs. As you can imagine, figuring out additional material costs due to tariffs is a painstaking process, and it may be a week or so before we get a real idea of how much these tariffs will add to the cost of some toys and tech products.

Sometime on Monday, May 13, 2019, the White House will release a list of what would be Level 4 tariffs, which would amount to another $325 billion of products. Although it is not clear as of this writing what will be in this new round, suppliers I talk to in Asia tell me they have been warned that a Level 4 round will most likely include some finished goods too, including laptops, tablets, smartphones, and printers.

As you can imagine, this has the tech vendors that create these types of products watching very closely. Some economists point out that the fourth level of tariffs, including things like finished tech goods, is more of a bargaining ploy by the US to try and get concessions from China. On the other hand, Chinese delegates left Washington last Friday night without a deal, and Trump and his team have given them another four weeks to resolve this tariff stalemate. Lobbyists for the tech vendors have been warning Trump and the White House that tariffs on tech products that have become fundamental to our daily lives would have a dramatic impact on the companies creating these products, as well as the economy.

A Bloomberg article posted after the Level 3 round of tariffs went live stated the following:

“This week’s tariff move is likely to have significant short-term consequences for retailers and other U.S. businesses reliant on imports from China. But extending it to all trade would increase the economic and political stakes even further for Trump and American companies.

Such a step would see price increases on smartphones, laptops and other consumer goods — the kind that Trump’s advisers have been eager to avoid, out of concern for the fallout. It would likely provoke further retaliation, and some economists are predicting it could even tip the U.S. economy into recession just as Trump faces re-election in 2020.”

This fourth round of tariffs is what Tim Cook, Michael Dell, and other tech leaders have been deeply worried about, and in Tim Cook’s case, he has personally lobbied against them. The one thing in favor of Apple, Dell, HP, and other US companies is that Trump sees them as showcase companies, and this is one of the reasons they have avoided any real impact from tariffs so far.

But Trump and the White House are running out of things to charge tariffs against, and should a China deal not go forward even after Level 3 tariffs have been levied, it may be impossible for the big tech giants to avoid being caught in this next round.

Another thing that could impact Apple and other tech companies is if China decides to retaliate by placing tariffs on US-based products coming into China. As the WSJ points out, Apple’s China business would come under this type of tariff retaliation, and it could impact a business that is already struggling.

Whether we go to a Level 4 tariff round or not, the big tech companies already see the writing on the wall when it comes to China. As I stated in a recent Think.Tank column, China has a 100-year plan in which it wants more control of its destiny, and its own manufacturing facilities could be turned inward.

So, many of the big tech companies are now starting to look outside of China, to countries like Vietnam, Malaysia, Thailand, Cambodia, India, and Mexico, to invest in new manufacturing facilities to offset any potential issues with Chinese manufacturing capabilities in the future. Indeed, at least one major OEM will have moved a significant part of its manufacturing or assembly of notebooks out of China by late this year.

Of course, there will be a lot of political jockeying in the next four weeks and these companies, along with most of America, are hoping for some resolution that keeps level 4 tariffs from ever seeing the light of the day.

But if the US and China cannot come to a resolution soon and Level 4 tariffs do kick in later this year, you can probably expect to pay significantly more for laptops, smartphones, and printers as early as Q4.

Podcast: Microsoft Build 2019, Google I/O 2019

on May 11, 2019
Reading Time: 1 minute

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the announcements from the big developer conferences hosted by Microsoft and Google, including advancements in smart assistants, browser privacy, and more.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Google Tries Again With Pixel 3A

on May 10, 2019
Reading Time: 3 minutes

What a long, strange trip it’s been for Google’s branded smartphone, the Pixel. After three bites at the apple with premium-priced products, achieving decidedly mixed product results (and outright poor market results), the company is trying again with new, mid-priced phones. The new $400 Pixel 3A and $479 Pixel 3A XL look compelling, but can they change Google’s fortunes in the market?

Google’s Hardware Journey
For many years, Google worked with vendor partners to bring to market Nexus phones that highlighted what it saw as the best Android experiences. While die-hard Android fans loved the various Nexus products over the years, the phones never shipped in large volumes. In 2011, Google bought Motorola for $12.5B but continued to work with other vendors to ship Nexus phones. Three years later, in 2014, Google sold Moto to Lenovo for $2.9B. In 2016, Google launched the first Pixel phone as a premium-priced offering, shipping the 5-inch Pixel ($649) and Pixel XL ($769).

According to IDC’s Mobile Phone Tracker, Google shipped about 1.8 million units that first year (about 0.1% of the 1.5 billion-unit smartphone market). For reference, first-place Samsung shipped 311.4 million smartphone units that year; Apple shipped 215.4 million. Of course, it’s not entirely fair to compare Google’s volumes to those of Samsung and Apple, two well-established players in the space. More to the point, Google launched Pixel in a limited number of countries. However, when Google executives talked about the Pixel, they clearly saw it as a competitor to the flagship phones of Samsung, Apple, and others. Yet while the company talked a good game about the Pixel, it didn’t do any of the things necessary for real success: distribution, marketing, and carrier outreach were all limited. Most industry watchers assumed this was because the company was taking a slow, measured approach to the market.

In late 2017, Google launched the Pixel 2 ($649) and Pixel 2 XL ($749). The new phones met with a similar market reaction, and the company also ran into issues with screen quality on the XL. While many reviewers began to credit Google for the innovative work it was doing with the Pixel camera, the company continued to limit the countries where Pixel shipped. It also largely stuck with its direct-sales model; for example, it worked with just one carrier in the U.S. (Verizon). Total worldwide sales for the Pixel, Pixel 2, and Pixel 2 XL reached just 3.5M units in 2017, representing 0.2% of the worldwide market.

In late 2018, Google launched the Pixel 3 ($799) and Pixel 3 XL ($899), doubling down on the camera features and bringing to market the amazing Night Sight feature for low-light photography. The new phones once again shipped at premium prices, and Google continued to limit distribution and marketing. Despite the largely positive reviews, the company’s volumes in 2018 grew to just 4.6M units, or 0.3% of the worldwide market. In the first quarter of 2019, a tough market all around, Pixel volumes declined year over year.

Once More, With Feeling
This week, Google announced the Pixel 3A and 3A XL. To reach the new, lower prices, Google made some pretty dramatic hardware changes, including lower-end Qualcomm processors and plastic bodies. However, as most reviewers have noted, the new phones still offer Google’s top-of-the-line camera features, which are driven largely by software, not hardware. From a device perspective, the new Pixels look compelling at their price points.

Perhaps more importantly, Google seems intent on actually moving some units this time out. It’s working with more carriers—all four majors in the U.S.—and I’m already seeing some marketing around the phones, which is good. It’s an interesting strategy, if not a completely original one. Lenovo-owned Motorola has been pushing this angle for several years, with mixed to positive results. The challenge is that the price band where the new products land—between $400 and $500—represents just over 5.5% of the worldwide market, and it’s not growing. Moreover, the volumes are even lower in the U.S. Speaking broadly, the high end and the low end are where all of the smartphone action has been. We’ll soon know if Google’s move to the middle is a smart one, or simply another misfire.

I continue to be somewhat perplexed by Google’s hardware ambitions. Many have suggested that the company stays in the hardware market to drive best-of-breed devices, to experiment with the intersection of hardware and software, and to keep its OEM partners on their toes. However, in 2019, with the broader smartphone market slowing or declining in many regions, this seems like folly.

Through the launch of the Pixel 3, it seemed the company might be following a playbook similar to Microsoft’s with its Surface portfolio, shooting to grab a sizeable chunk of the premium market. However, its marketing and distribution have guaranteed this won’t happen. And now the company is entering the shrinking mid-range space, and it’s hard to see why it would do this unless it were shooting for increased unit volumes. While it is working with more carriers in countries such as the U.S., it doesn’t appear Google is expanding distribution into more countries (at least yet).

In the end, it will be interesting to see how the new Pixel 3A phones do in the market. Regardless of the unit volumes through the rest of 2019, however, I can’t help but continue to wonder what Google’s ultimate goal is with the Pixel. Sometimes I wonder whether it knows the answer itself.

Privacy is Complicated

on May 9, 2019
Reading Time: 5 minutes

It has been a busy couple of weeks for privacy, and I am sure it will continue to be so for a while. We started last week with Mark Zuckerberg at F8 saying: “The future is private.” Then on Monday, at Microsoft Build, Satya Nadella said: “Privacy is a human right,” echoing the words that Apple’s Tim Cook has been using for quite some time. On Tuesday, at Google I/O, Sundar Pichai said that “privacy and security are for everyone, not just a few.” These are different pledges to offer more privacy and security to users, and they come with varying degrees of delivery.

Business Models and Core Competencies

Why all this focus on privacy now? We have to thank Facebook for it. Privacy has always been important, but the escalation we have seen, both in the amount of time tech companies spend addressing this topic and in the scrutiny they are under from governments and regulators (the two are, of course, intertwined), started with the Cambridge Analytica debacle.

The different responses tech companies have on privacy depend heavily on their business models. It is the business model that created a split between Microsoft and Apple, which monetize their products, on one side, and Google and Facebook, which monetize through advertising, on the other. But this split seemed to become less clear this week as Google’s focus on privacy materialized in concrete steps to give users more control over their data.

Pichai said on stage: “We always want to do more for users but do it with less data over time.” If you are skeptical about this, you can look at Google’s track record and see that they have changed their ways over time. In 2014, Google stopped tracking email content for ad targeting in the student version of Gmail. In 2016, they stopped scanning emails in Gmail before they hit the inbox, and in 2017 they stopped doing so altogether. Of course, it would be disingenuous not to point out that the first two changes were in response to lawsuits, but the last one was driven by a business model change that came from Google moving into the enterprise with G Suite.

Another point of confidence on my part comes from the core competence Google has in AI. Pichai spoke about how AI plays a role in enhancing users’ privacy and then moved on to talk about federated learning. While federated learning is important, and something Google first talked about in 2017, I ultimately think what makes more of a difference is that Google’s AI models have benefited from vast amounts of data and have learned what matters and what does not. They have also learned how to use data more efficiently. Put this way, it sounds a little less altruistic than depicted on stage, but the benefit to the consumer remains.
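For readers unfamiliar with the term, federated learning trains a shared model across many devices while the raw data stays on each device; only model updates travel to the server, where they are averaged into a new global model. Below is a minimal sketch of that federated-averaging idea. It is illustrative only: the linear model, the simulated clients, and all parameters are invented for this example and say nothing about Google’s actual implementation.

# Minimal sketch of federated averaging (FedAvg): each client trains on
# its own data locally, and only model weights, never raw data, go to
# the server, which averages them into a new global model.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's training pass on data that never leaves the device.
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of the squared-error loss
        w -= lr * grad
    return w

def federated_round(global_weights, client_datasets):
    # The server sends weights out, then averages the updated weights it gets back.
    updates = [local_update(global_weights, X, y) for X, y in client_datasets]
    return np.mean(updates, axis=0)

# Three simulated devices, each holding private data from the same underlying task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches true_w without any client ever sharing raw data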

The Rat and the Dent

One criticism leveled at Google after Tuesday’s keynote is that the added focus on privacy puts the burden on users rather than the responsibility on the services and technology Google offers. It is up to the user to go and change the settings across Google’s apps so that their data is not tracked or stored. Credit to Google: it did make finding and changing those settings easier than it is today. But, even so, most users will not bother.

Most users won’t bother changing settings for two reasons. First, consumers see the value that Google having their data brings to their experience. Pichai even said it on stage: “data makes your experience better.” The other reason relates to something Microsoft CVP Julia Liuson earlier in the week called the rat-and-the-dent syndrome. If you have a dent on your phone, you are unlikely to do anything about it, although you might complain about it all the time. But if you find a rat in your home, you will do something right away. I think lack of privacy is, for many consumers, a dent, not a rat. They complain about it, but when given a chance to do something about it, they will likely pass on the opportunity, especially if it impacts their convenience.

It goes without saying that this dent syndrome favors Google: the company provides the tools that make it compliant with regulations, but it is unlikely to see an impact on the level of data consumers share. Of course, the dent can quickly turn into a rat the moment you are caught doing something wrong, and doing so intentionally, as the backlash against Facebook has clearly illustrated.

Competitive Advantage

From a pure marketing perspective, it is clear to me that talking about privacy as a competitive advantage is going to get more complicated. The conversation might have to shift from “we care about your privacy” to explaining why a company’s business model allows it to put privacy first for its customers, but also how that same business model makes it impossible to deliver that level of privacy to everybody. This is at the core of what Pichai wrote in his New York Times op-ed this week:

“Privacy cannot be a luxury good offered only to people who can afford to buy premium products and services. Privacy must be equally available to everyone in the world.”

A statement that was aimed at Apple, the same way his keynote comment was:

“So far, we’ve talked about building a more helpful Google; it’s equally important to us that we do this for everyone, for everyone is a core philosophy for us at Google. That’s why from the earliest days, search works the same whether you’re a professor at Stanford or a student in rural Indonesia. It’s why we build affordable laptops for classrooms everywhere. And it’s why we care about the experience on low-cost phones in countries where users are just starting to come online, with the same passion as we do with premium phones.”

But helpful Google also fits well with the company’s business model. Putting advertising aside for a moment, it is evident that if you are in the services business, you want to reach as many users as possible with your solutions, which is what Google is doing. It will be interesting to see what changes come from Apple’s move into services, not so much on privacy (Apple has already made clear it will not track what you read or watch through its services) but on device reach.

Business models also get caught up in the “doing good, being helpful, advocating for the people” marketing message. Microsoft was the first brand to strongly advocate for ethical AI and technology that empowers all people. Google this week used similar talking points:

“And it goes beyond our products and services. It’s why we offer free training and tools to grow with Google, helping people grow their skills, find jobs, and build their businesses. And it’s how we develop our technology, ensuring the responsible development of AI, privacy, and security that works for everyone, and products that are accessible at their core. Let’s start with building AI for everyone.”

Yet both companies have faced criticism for providing their technology to governments and helping with surveillance of the very people they want their technology to help.

It is indeed complicated. When it comes to privacy, security, and ethics, between the black and white of right and wrong there seem to be many shades of gray that companies can use to position their business. Marketing aside, however, consumers’ decisions on whom they trust will be driven by both rational and irrational components. The intent companies demonstrate in putting users first, or in slipping on their promises, as well as the value consumers get from the technology and services these companies provide, will both play a role in whom they trust.

Microsoft Bot Frameworks Enable Custom Voice Assistants

on May 7, 2019
Reading Time: 3 minutes

If you’ve read much about the concept of digital transformation, you’ve probably heard the idea that all companies are becoming tech companies because of the overwhelming influence that technology is having on industries of all types. On one hand, this makes sense, because technologies like cloud computing, AI, and more are enabling a level of involvement with customers and partners that simply wasn’t possible without them. On the other hand, the practical realities are that most companies don’t have the in-house technical expertise to deliver on that promise. This is particularly true with regard to the more advanced technologies—like voice-based digital assistants—that today’s tech industry leaders are starting to offer.

At the company’s Build developer conference in Seattle, Microsoft announced important efforts designed to bridge the gap by democratizing the use of technologies like AI and natural language processing in “non-tech-native” companies. The company delivered AI and cloud computing-related enhancements at this year’s conference that ranged from autonomous systems to IoT and beyond. The ones I believe will have the most important long-term impact, however, are those that allow regular organizations and IT people who aren’t data scientists to take advantage of these powerful new technologies. Features like IntelliCode, which is built into the latest 16.1 release of the Visual Studio programming application, ease the process of creating AI-enhanced apps, and the new enhancements to the company’s Azure chatbot frameworks look particularly interesting.

Voice-based personal assistants like Amazon’s Alexa, Google’s Assistant, Apple’s Siri, and Microsoft’s Cortana have had a particularly profound impact on the consumer market, but many businesses immediately recognized that they had important potential applications in business as well. To that end, we’ve seen initiatives like Alexa for Business get unveiled, as well as a number of efforts to create automated chatbots for online support and other applications. One of the challenges that companies who want to use these capabilities face, however, is that the “partner” provider, such as Amazon in the case of Alexa for Business, becomes the means by which customers interact with their products. So, for example, a carmaker that integrates Alexa into one of its automobiles would lose the direct interaction between its brand and its customers. Needless to say, while that’s great for Amazon, it’s not ideal for the car manufacturer.

To overcome that gap, Microsoft announced a number of enhancements to its bot frameworks that essentially allow customers to put their own brand back into the conversation while leveraging Microsoft’s natural language processing and conversational AI tools in the background. In essence, Microsoft is serving as a platform provider and allowing partner companies like BMW and Coca-Cola to “white label” these technologies in their own products. So, for example, BMW can offer its customers a voice assistant in its new high-end cars, but the entire experience is BMW-branded and BMW-controlled.
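To make the white-label idea more concrete, here is a minimal sketch of a partner-branded bot built with Microsoft’s Bot Framework SDK for Python (the botbuilder-core package). The brand name and canned replies are hypothetical; a production assistant would route utterances through Azure’s language-understanding services rather than simple keyword checks.

# Minimal sketch of a "white label" assistant on the Bot Framework SDK:
# Microsoft's tools handle the conversational plumbing, but every reply
# carries the partner's brand. The brand and replies below are invented.
from botbuilder.core import ActivityHandler, MessageFactory, TurnContext

BRAND = "ExampleMotors"  # hypothetical partner brand, not Microsoft-branded

class BrandedAssistant(ActivityHandler):
    # The framework calls this for every message a customer sends.
    async def on_message_activity(self, turn_context: TurnContext):
        text = (turn_context.activity.text or "").lower()
        # Toy keyword routing; a real bot would call a language service here.
        if "range" in text:
            reply = f"{BRAND} Assistant: you have plenty of range for that trip."
        else:
            reply = f"{BRAND} Assistant: how can I help with your car today?"
        await turn_context.send_activity(MessageFactory.text(reply))

In a real deployment, a handler like this would sit behind a web endpoint that the Bot Framework connector (or a custom voice channel in the car) calls, so the customer only ever sees the partner’s brand.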

This voice-based customization strategy looks to be a particularly smart move for Microsoft, because the company has had little success in getting people to regularly use its Cortana assistant. At this year’s Build, Microsoft did demonstrate an impressive business-focused application of Cortana that showed it taking some important steps toward becoming a truly useful personal assistant (as opposed to a gimmicky reminder-setting tool). Thanks to new technology it acquired when it purchased a company called Semantic Machines last year, Cortana is adding the ability to switch between contexts, take specific actions, and generally respond in a significantly more intuitive manner.

But the beauty of Microsoft’s platform-style approach to conversational AI is that while Cortana-branded features will have these capabilities first, eventually so too will other customers who want to integrate a responsive, effective voice-based UI into their products and services, in the context of their own brand. Frankly, it’s a classic example of Microsoft thinking more about the platform opportunities these types of services can enable than about the Microsoft-branded functionality that something like Cortana can provide.

As AI pervades all types of applications, industries, and products, it just makes sense that we’re going to end up with a very diverse set of voice-based assistants across a wide range of products. While Alexa- or Google Assistant-branded assistants will certainly be appealing for certain users and devices (and could end up with a larger individual share of the total market than, say, Cortana or any specific application of Microsoft technology), the combined opportunity for voice assistants is likely going to be so large that individual offerings won’t matter as much. That larger opportunity is what Microsoft appears to be targeting with these new efforts, and while only time will tell, it certainly seems to be a strong strategic move on the company’s part.

We’re still in the very early days of voice-based UIs and other AI-powered capabilities, but it feels like we’re starting to get a better glimpse into how these markets are evolving (and how non-tech companies can take advantage of these capabilities and customize them for their own purposes). The view ahead looks exciting.

The Booming Boomer Market for Apple Watch

on May 6, 2019
Reading Time: 4 minutes

In 1997, while on one of my trips to Japan, I spent some time with local executives who were in the wireless business. During one of our dinners, we started talking about the culture in Japan and how elders are revered there. I asked one of them about the role technology played in the lives of the elderly, and they told me a fascinating story about how Wi-Fi was used in the care of aging parents.

It turns out that most of the elderly in Japan still observe an age-old tradition of having tea around 4:00 PM each day. This tea ceremony happens like clockwork. At the same time, their children, now grown with families of their own or working long hours as full-time salarymen, wanted a way to check in on their parents daily to see if they were doing OK. Remember, this was in the days before smartphones, when cellular was not yet a broadly available and accepted technology.

Knowing that these elders would have tea each day at 4:00 PM, they worked with some makers of teapots to add Wi-Fi and motion sensors to them and created an algorithm so that the first time the parent lifted the teapot each day, it would send a message to their grown children’s PCs to alert them. That way the children knew the parents were having tea, which translated into them being relatively OK.
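The logic behind that alert is simple enough to sketch in a few lines. Here is an illustrative version in Python; the motion-sensor hook and the notification channel are hypothetical stand-ins for the real teapot hardware and its Wi-Fi messaging.

# Illustrative sketch of the "first lift of the day" teapot alert. The
# sensor wiring and delivery channel are invented; a real device would
# hook its motion-sensor driver and Wi-Fi stack into these two points.
from datetime import date

class TeapotMonitor:
    def __init__(self, notify):
        self.notify = notify        # callable that delivers the alert
        self.last_alert_day = None  # date of the most recent alert sent

    def on_motion(self):
        # Called by the (hypothetical) motion sensor when the pot is lifted.
        today = date.today()
        if self.last_alert_day != today:  # alert only on the first lift each day
            self.last_alert_day = today
            self.notify(f"Teapot lifted on {today}: your parent is having tea.")

# Demo: wire the monitor to any delivery channel; print stands in for a
# message to the grown children's PC.
monitor = TeapotMonitor(notify=print)
monitor.on_motion()  # first lift of the day: alert sent
monitor.on_motion()  # later lifts the same day: suppressed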

Today, wireless eldercare is already a big market, from using Find My Friends-style apps to determine an aging parent’s location, to giving parents technology that can send instant alerts if they have fallen, need to contact a relative, or need to call 911. And, of course, children can simply call them to see how they are doing. There are now many ways for grown children to keep in touch with parents as needed.

But one of the under-reported technologies being used by elders is the Apple Watch, and more specifically, grown children buying them for their parents to encourage them to use it to monitor their health. This is quietly becoming a significant market for Apple.

Although I cannot find any reports or numbers that tell us how many Apple Watches grown children are buying for their parents, I hear a lot of anecdotal feedback on this. And it makes sense.

Gen Xers and millennials are busy with their careers and families and have parents in their mid-to-late 60s or early 70s who are beginning to deal with health issues they did not have when they were younger. This younger generation has become more health-conscious, is more in tune with using things like the Apple Watch and Fitbit to monitor their health, and wants their parents to do the same.

In my case, I wear the Dexcom continuous glucose monitor and can share my blood sugar readings with key family members 24/7. My biggest fear as a person with diabetes is low blood sugar, which saps my energy and can be very dangerous if it gets too low. Sometimes I can’t feel my blood sugar going lower, but my Dexcom monitor knows and sends designated family members and me an alert. More than once, my phone was in silent mode and I could not feel the alarm, but one of my family members saw the warning on their Apple Watch and called me to make sure I took something to bring my blood sugar up to a safer level.

With the various health apps that monitor a person’s health and the ability to share real-time health data with family members, the Apple Watch is becoming much more valuable in the care of aging parents. I believe the majority of grown adults buying Apple Watches to help aging parents monitor their health and keep moving do so out of real concern. But in talking to some who have bought Apple Watches and health-monitoring wearables for aging parents, they have admitted that part of the motivation is guilt over not being near their parents to check up on them in person, or over being so busy that, even when close by, they are not proactive about connecting with them more often.

However, all those I have spoken with who have bought Apple Watches for their parents say they have a real concern for their parents’ health and are glad to have a wearable technology that can monitor that health and summon immediate help if needed. They also like that it motivates their parents to move and exercise.

There are a lot of wireless monitoring services for healthcare that use Wi-Fi or cellular for location tracking. One of the more interesting ones comes from GTX Corp, which manufactures the GPS SmartSole®. This sole can be slipped into a loved one’s shoes and can monitor their location 24/7.

Another innovative one comes from TruSense, which integrates with technologies like the Echo Dot and includes a motion sensor, contact sensor, smart outlet, and hub that all work together to provide real-time data for caregivers.

But the Apple Watch, which can also be used for location tracking, has a dedicated focus on health monitoring and is increasingly becoming the kind of product that grown children buy for their aging parents, not only to track their health but to encourage them to move and be more active.

While it is difficult to get numbers on how many Apple Watches and Fitbits are bought by grown children for their aging parents with an eye toward helping them deal with health issues and stay in closer touch, you can see how attractive this segment of the market is for Apple and others. While none of these companies have created ads for this market segment yet, it would be a good one for Apple and others to target, as the aging population will reach 47 million in 2020 (https://www.urban.org/policy-centers/cross-center-initiatives/program-retirement-policy/projects/data-warehouse/what-future-holds/us-population-aging).

My parents had health issues as they got older. I was traveling so much that I was highly negligent in keeping in touch with them and making sure they were doing well. If they were alive today, I would be the first to buy them an Apple Watch to help them monitor their health. Today’s technology has advanced so much that using the Apple Watch and other fitness wearables to monitor aging parents’ health is more than possible, and I believe this will become a significant market segment for makers of health- and location-tracking wearables to target.