Black History Month: Another Reminder Tech Needs Diversity

It should come as no surprise that tech is a big part of my life, not just my job. As such, many of the books around the house, the podcasts I listen to, and the documentaries I watch are tech-related. If you read my article ‘What “Hidden Figures” Can Teach Us about AI’ or follow me on Twitter, you also know I have a 9-year-old daughter who is mixed race. So, as a mom, I always try to make sure my girl has role models who share her gender and ethnic background. When it comes to tech, however, finding the names of black leaders is still not that easy.

Let’s Look at the Numbers

The most recent Apple Inclusion and Diversity Report shows black employees make up 9% of the current workforce and 13% of new hires. When looking at leadership, however, that 9% drops to 3%, a number that has not changed since 2014. At Google, black employees represent only 2% of both the overall staff and the leadership. At Microsoft, 3.7% of employees are black and only 2.1% are in leadership positions. Amazon’s workforce is only 5% black (I could not find any information on how black employees are represented in leadership roles). At Facebook, black employees represent 2% of overall employees and 3% of senior leadership. Twitter, which just last week lost its diversity chief, recently shared diversity numbers showing the percentage of black employees in its workforce had remained the same as in 2015: 3%. This was after a very public diversity pledge.

It’s hard for me to look at these numbers and feel encouraged about how inclusive tech really is and what opportunity my daughter will have in it.

The Diversity Wheel

The Diversity Wheel was created by Marilyn Loden in the 1990s to better understand how group-based differences contribute to people’s social identities. There have been several iterations of the diversity wheel but the most common is made of three circles:

  1. Internal Dimensions – age, gender, physical ability, and race. These dimensions are usually the most permanent and they are also the most visible.
  2. External Dimensions – marital status, work experience, income.
  3. Organizational Dimensions – management status, work location, work field.

The latter two circles represent dimensions acquired over time and can also change over time.

Educational background is one of the external dimensions that contributes to people’s social identity. A recent report by Georgetown University said that, while the number of African-Americans going to college has never been higher, African-American college students are more likely to pursue majors that lead to low-paying jobs. Law and public policy is the top major for African-Americans with a Bachelor’s Degree. The highest paying major among African-Americans is in health and medical administration. The second lowest paying major among African-Americans is in human services and community organization, with median earnings of $39,000. African-Americans account for only eight percent of general engineering majors, seven percent of mathematics majors, and five percent of computer science majors. Even those who do major in high-paying fields typically choose the lowest paying major within them. For example, the majority of black women in STEM study biology, the lowest paying of the science disciplines. Among engineers, most black men study civil engineering, the lowest paying in that sector.

A very interesting point the report also raises is that African-Americans who have strong community-based values enter into college majors that reflect those values. Despite comprising just 12 percent of the population, African-Americans are 20 percent of all community organizers.

Incorporating elements of community service into careers in tech, business, and STEM will increase their appeal to African-American students and will be a way for tech to be more visible in those communities. This can become a virtuous circle of evangelization but it needs to start with black students seeing the opportunity first.

Diversity is the Nation’s Unfinished Business

So how do you break the cycle? How can my little girl be inspired to be in tech if she does not see enough people like her, not just in tech, but being successful in tech? Chief Diversity Officer at Case Western Reserve University, Dr. Marilyn Sanders Mobley, refers to diversity as “the nation’s unfinished business”. When it comes to tech, that is certainly the case.

The recent focus on immigration has had many commenting on how diverse Silicon Valley is. You only need to stroll through Mountain View to bump into Chinese, Koreans, Europeans, and Indians. But this only means Silicon Valley is international, not diverse. Dr. Sanders Mobley says you cannot address what you cannot acknowledge and it starts with acknowledging blind spots. Here is the first one: internationalism and diversity are not one and the same.

Another important point Dr. Sanders Mobley highlights is that, when it comes to fostering diversity in the workplace, there is a need for affinity and employee resource groups. Not everybody will use them or need them but they are necessary to provide a sense of belonging.

So it starts with empowering students to enter the workplace aiming for better paying jobs, aiming for management and leadership positions, and then creating a work environment that fosters a sense of belonging. Kimberly Bryant’s effort with Black Girls Code is a great example of how to plant the seed with kids, in this case girls, right at the time when they are starting to think about what they want to be when they grow up.

Black students are underrepresented in tech education. That, however, is not the ultimate issue, as there are still more black students graduating than there are currently working in tech. How is that possible? Mostly because the recruiting process is broken. Silicon Valley often looks within itself. Employee referral programs are very common and recruiters, who often do not have any coding or engineering expertise, tend to rely on Ivy League universities and large tech names like Google and Apple as a measure of a candidate’s ability. Then there is hiring bias. Blind resumes like the ones Blendoor offers help make a candidate visible to the recruiter but do not necessarily guarantee an interview, let alone a job.

Widening the pipeline, changing recruiting techniques and increasing awareness of bias will all help to solve what is the ultimate issue in attracting a diverse workforce: nobody wants to be a tick in the box of a diversity report. It is hard to attract a diverse workforce when the current mix of the company is predominantly white and male. It is even harder for a black kid to think he or she can be the next Steve Jobs.

Modern Workplaces Still More Vision Than Reality

We’ve all seen the images. Happy young employees, working productively in open-air workspaces, easily collaborating with co-workers and outside colleagues all over the world, using persistent chat tools like Slack to keep on top of all their latest projects and other efforts. It sounds great, and in a few places in Silicon Valley, things do work that way—at least in theory.

But at most companies in the US (and likely the rest of the world), well, not so much. It’s not that companies aren’t looking at or starting to use some of these new communication and collaboration technologies. Some are, but deployment levels remain low, at less than 30% overall, and employee habits haven’t really changed in many places.

Such are the results from the latest study on workplace trends completed by TECHnalysis Research. The study is based on a survey of 1,001 US-based working adults aged 18-74 at medium (100-999 employees) and large (1,000+ employees) companies across a range of industries. The survey goal was to understand how the modern workplace is evolving in terms of how and where people work, as well as the hardware, software, services and capabilities that employees expect from their employers.

I wrote about some of the surprising results regarding work habits and locations in a previous column, “The Workplace of the Future,” but for this column I’m going to focus on some of the big-picture implications of the research, as well as some technology-specific trends.

The key takeaway is that both technologies and habits rooted in the 20th century are keeping the 21st century vision of the modern workplace from becoming reality. For example, despite the appearance of modern communications and collaboration tools, it’s the “old school” methods of emails, phone calls and texts that make up 75% of all communications with co-workers. There are certainly some differences based on the age of the employee, but even for workers under 45, the number is 71% (emails and voice calls make up 58% for that age group).

From a device perspective, the most common tool by far is not a smartphone, but a company-owned desktop PC, which is used for just under half (48%) of all device-related work. (For the record, personally owned smartphones are only used for 7.5% of total work on average.) Partially as a result, some version of Windows is used for roughly two-thirds (65%) of all work, with Android at 11%, iOS at 10%, and the rest split among cloud-based platforms, Macs, Linux and other alternative options. Arguably, that is a drop from the days when Windows owned 90%+, but it still shows how dominant Microsoft is in the workplace.

Open air environments have received a great deal of attention and focus in modern workplaces, but there’s a potential gremlin in that future work vision: noise. In fact, in about 25% of alternative or shared workspaces outside the office (such as WeWork), and in 20% of alternative or shared workspaces inside the office, noise was cited as having a serious impact on productivity. Given these numbers, it’s not terribly surprising to see reports suggesting that some of these experiments in workplace flexibility are not working out as well as hoped.

From a conference room perspective, basic audioconferencing, guest WiFi, and wireless access to projectors (or other displays) are the most widely available services, but when asked which of these capabilities offers the greatest quality and utility, the story was very different. Modern tools such as HD videoconferencing, large interactive screens (a la Microsoft’s Surface Hub), electronic whiteboards, and dedicated computing devices designed to ease meeting collaboration (such as HP’s new Elite Slice, based on Intel’s Unite platform) scored the highest satisfaction levels, despite their currently low levels of usage. In other words, companies that invest in modern collaboration tools are likely to find higher usage of, and appreciation for, those devices.

From a software perspective, it seems that old habits die hard. Emailing documents back and forth is still the most common method of collaboration with co-workers at 35%, while the usage of cloud-based storage services is only 8% with co-workers and 7% with colleagues from other organizations. Similarly, real-time document collaboration tools, such as Microsoft’s Office 365 and Google Docs, which have now been available for several years, are only used with co-workers for collaboration purposes by 19% of respondents.

Modern forms of security, such as biometrics, are another key part of the ideal future workplace vision. In current-day reality, though, biometric security methods are only used 15% of the time for corporate data, 14% for physical facilities, and 12% for access to either corporate-owned or personally owned devices. Surprisingly, 41% of respondents said their company does not have any security policy for personally owned devices—yet those personal devices are used to complete 25% of the device-based work that they do. No wonder security issues at many organizations are a serious concern.

The tools and technologies are already available to deliver on a highly optimized, highly productive workplace of the future, but, as the survey results show, there’s still a long way to go before that vision becomes reality.

(If you’d like to dig a bit deeper, a free copy of a few survey highlights is available to download in PDF format here.)

Snapchat Spectacles and Making Memories

I recently acquired a pair of the elusive Spectacles by Snap Inc., the parent company of Snapchat. While not the most stylish design, the best way to describe them is whimsical, playful, fun, and entertaining. Those phrases, I believe, are also the best way to understand Snapchat and Snap Inc. as a whole. Strategically, Snap is positioning itself as a camera company. However, it really is not as simple as that when you dig into what Snap Inc. is up to.

An important element to understanding the typical Snapchat user is that the vast majority of the user base is both a content consumer and a content producer. This dynamic is somewhat similar to Instagram but quite different from Facebook, where more users are consumers than producers. A highly engaged base of content producers, in both videos and photos, is key to the service’s future. In this light, Spectacles make a great deal of sense. Your smartphone is no doubt an amazing capture device. However, your smartphone is not always handy. The value of having a camera on your face, in this case in sunglasses, is the ease and convenience of instant capture. While I am not the biggest Snapchat user (certainly not like my teenage daughter), my contribution of video to stories on Snapchat has dramatically increased thanks to Spectacles. Increasing the amount of video created by users lies at the center of where Snap Inc. is going as a camera company.

Part of the clear upside with Snapchat is the number of hours its users spend consuming video content. This remains a clear benchmark Snap’s management uses and is, amazingly, on par with Facebook’s number of video views but with roughly a tenth of the daily active users. Video is central to Snapchat’s upside. The playful, whimsical, and entertaining nature of the videos created by its user base is also key to its differentiation. While I don’t expect Snap Inc. or Snapchat the service to break from this fun and entertaining focus, there is a broader point about Spectacles and my experience with them that I think is worth making. The camera remains one of the most important smartphone features consumers gravitate to year over year. I’d argue this is not because of the picture-taking capabilities but, more subtly, because of the memory-making capabilities.

Making Memories

I was an early adopter of GoPro cameras. I live a relatively active lifestyle and being able to create video underwater, snowboarding, biking, etc., was extremely appealing to me. Most of those use cases don’t make it convenient to hold your smartphone while you take video of yourself plunging down a mountain. So, the ability to strap a capture device to your body, turn it on, and go have fun made a lot of sense and still does today. The side benefit of the GoPro I did not realize until I owned one was the role it would take in making memories, not just of some crazy stuff I did but of my family.

I was that guy. The dude who strapped a GoPro to his head and walked around Disneyland with his family.

Yes, I got strange looks from people but I didn’t care. Getting great memories of my girls’ first time on a roller coaster or skiing or ice skating was, and still is, worth it. When you have a first-person capture device on you, you realize something profound when you use it in the memory-making context. In my experience, when using it for birthday parties, Disneyland, and other key moments I want to remember, a device like a GoPro or Spectacles (in concept) allows you to remain present while the moment is unfolding. Who wants to watch all of their kid’s firsts through the screen of a smartphone camera? With a GoPro and now, with Spectacles, you can watch the moment as it happens and be totally present in it, while still capturing it on video for all eternity. This is the broader opportunity for a less invasive camera that we have in our glasses, on our head, or wherever it may end up in the future.

What has gotten better over the years, as GoPro has evolved and even more with Spectacles + Snapchat, is the ease of going straight from memory capture to sharing/saving. I’d argue the experience with Spectacles + Snapchat is the most seamless I’ve used yet with a device that isn’t a smartphone. With a GoPro, it could take me several minutes to get a video I just took, add some slight editing, and share it. With Spectacles, it takes seconds since the video is quickly synced with your smartphone and available in the app to edit and share. The great thing about Spectacles is they truly function like an extension of your smartphone camera that seamlessly integrates back into the software. This is an area where I feel there is a broader opportunity for companies, Apple and Facebook in particular, and perhaps Google, to continue to explore.

While the smartphone will remain a primary capture device for some time, capture accessories that become extensions of our smartphone camera, like a GoPro or Spectacles, make a great deal of sense when done right. That is particularly true for things like virtual and augmented reality experiences in the future, where we can relive memories in virtual reality or simulate being present at a sports game or event in another town without having to be physically there. In most cases, these capture devices will not be your smartphone and will most likely come from the companies perfecting the optics, silicon/sensors, design, and software today. Which is a key reason Snap Inc.’s decision to make being a camera company its core mission is so interesting.

Podcast: Android Wear 2.0 Smartwatches, Android-Enabled Chromebooks, Oculus-Best Buy

In this week’s Tech.pinions podcast Ben Bajarin and Bob O’Donnell discuss the release of Android Wear 2.0-based smartwatches and the state of the overall wearable industry, analyze the potential impact of forthcoming Chromebooks from Samsung and others that directly support Android apps, and debate what the closing of hundreds of Oculus VR demo stations at Best Buy stores means for the VR market.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

This will be a Big Year for Wireless Network Innovation

2017 is shaping up to be a year where we will see more new things out of wireless networks than we have in some time. Some of this will be in the form of select market trials while, in other cases, these will be new services offered by cellular operators and even some wireless upstarts. There are four themes to this:

  • the first commercial 5G trials, centered around fixed wireless access
  • testing higher parts of the spectrum band to deliver wireless service
  • new techniques to deliver faster services and increased capacity, such as leveraging power lines and the unlicensed bands
  • new types of networks or approaches, such as LTE-Unlicensed, 3.5 GHz ‘shared spectrum’ services, the FirstNet public safety network, and IoT-centric networks using LTE

5G Trials

We are still a good two years away from the official 3GPP standard being released for 5G, although there will be some steps along the way. Even so, Verizon and AT&T have both announced they will conduct 5G trials this year in several cities. Mainly, they are testing 5G for fixed wireless access, as a potential broadband alternative in markets where they don’t currently offer broadband. The other important aspect is the operators are testing millimeter wave spectrum for 5G – that is, higher spectrum bands that can deliver ultra-fast speeds but have more challenging propagation characteristics, such as requiring line-of-sight.

These 5G trials are not restricted to the cellular operators. For example, Starry, founded by former Aereo CEO Chet Kanojia, plans on testing broadband via fixed wireless access using the 28 GHz band in Boston and several other cities this year.

Keep in mind the difference between ‘real 5G’ and ‘marketing 5G’, or pre-5G. These initial tests might deliver ‘only’ 400-500 Mbps, whereas true 5G is focused on 1 Gbps or better, and much lower latencies.

New Techniques to Deliver Faster Services and Greater Capacity

There’s a lot going on in this corner. After nearly two years of delays, Unlicensed LTE, in the form of LTE-U, is set to roll out this year, with T-Mobile and AT&T leading the charge. This technique augments licensed LTE spectrum with channels in the 5 GHz unlicensed band (used by Wi-Fi) to deliver faster speeds and greater capacity. It’s a win for the cellular operators because they don’t have to buy extra spectrum to use the unlicensed band. While there is contention within the Wi-Fi community because of concerns about interference, the two sides have finally agreed on techniques to minimize the risk.

This will be an interesting test of the ‘coexistence’ of licensed and unlicensed spectrum and the potential for new and creative business models around a differentiated service. Most mobile data services today deliver the same speed to all users, at least in theory. Operators might test the potential for a ‘premium service’, such as speed or capacity boosts, using LTE-U. There are several other approaches to combining licensed and unlicensed services in the works as well, such as LAA, LWA and MulteFire. So LTE-U will be an important litmus test.

LTE Roadmap

Even though there’s already plenty of pre-5G marketing, the LTE roadmap for the next couple of years looks pretty compelling as well. Operators are using additional spectrum they have acquired to deploy additional channels of carrier aggregation. The most advanced configuration, three-channel carrier aggregation (“3CA”), continues to be rolled out in select markets.

You will also hear more about “4.5G”, or “LTE Advanced Pro”, which employs 20 MHz wide radio channels, carrier aggregation, and advanced antenna techniques to deliver additional capacity and download speeds of 400 Mbps or more. Ironically, this is what is being discussed for some pre-5G services. In fact, per my earlier point, LTE Advanced Pro services might be marketed as early 5G, in the same way WiMAX and HSPA+ services were marketed as 4G even though they weren’t officially LTE.
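To make the carrier aggregation arithmetic concrete, here is a rough back-of-envelope sketch in Python of how channel width, modulation, and MIMO layers combine into those headline peak rates. The constants are simplified assumptions for illustration, not official 3GPP figures, and real-world speeds fall well below these theoretical peaks.

```python
# Back-of-envelope LTE downlink peak-rate sketch. Illustrative only; real
# peak rates come from 3GPP transport-block-size tables, and the constants
# below are simplified assumptions.

def carrier_peak_mbps(bandwidth_mhz, bits_per_symbol, mimo_layers, coding_rate=0.75):
    """Approximate peak throughput of a single LTE carrier, in Mbps."""
    # A 20 MHz LTE carrier: 100 resource blocks x 12 subcarriers x ~14 OFDM
    # symbols/ms ~= 16.8 Msymbols/s, i.e. roughly 0.84 Msymbols/s per MHz.
    symbol_rate_msps = bandwidth_mhz * 0.84
    return symbol_rate_msps * bits_per_symbol * mimo_layers * coding_rate

# One 20 MHz carrier with 64-QAM (6 bits/symbol) and 2x2 MIMO: ~150 Mbps.
single = carrier_peak_mbps(20, 6, 2)

# Three aggregated 20 MHz carriers ("3CA"): ~450 Mbps, in the
# "400 Mbps or more" range quoted for LTE Advanced Pro.
aggregated = 3 * single

print(f"single carrier: {single:.0f} Mbps, 3CA aggregate: {aggregated:.0f} Mbps")
```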

Finally, AT&T will also be testing AirGig, a technique the Bell Labs folks have been working on for ten years that uses plastic antennas mounted along power lines to regenerate millimeter waves and deliver gigabit speeds. The potential is for a more flexible and cost-effective last mile solution, which would have applications for both broadband and wireless (5G and backhaul) services. This also leverages existing infrastructure, as finding locations to deploy antennas or small cells has proven vexing for service providers.

New Types of Wireless Networks

During 2017, we will see initial tests and deployments of several new types of mobile networks. During 2016, the FCC issued an order for shared spectrum services using the 3.5 GHz band (see here for more). Sometime over the next few months, rules and procedures for the Spectrum Access System (SAS) will be decided and administrators chosen. During 2017, we could well see some market tests or trials of shared spectrum services. This is an area where the US could really lead in wireless. The proposal for 5G also relies on shared spectrum techniques for some of the millimeter wave spectrum bands.

Also on the LTE front, the provider of the FirstNet public safety network should be chosen within the next couple of months, pending some litigation currently underway. We should see some early FirstNet deployments this year, with a more comprehensive rollout in 2018. More than $7 billion has been earmarked for FirstNet, using proceeds of past spectrum auctions.

Finally, there will be a lot of action related to purpose-built IoT networks this year. Networks using the unlicensed bands, such as Sigfox, LoRa, and RPMA, are being rolled out. During 2017, the cellular operators are getting in on the action, launching IoT networks over LTE (LTE Cat-M and NB-IoT; see “The Emergence of Purpose Built IoT Networks“).

In short, this should be an exciting year for wireless network innovation.

Whither Wearables?

This week’s launch of Android Wear 2.0 and several new smartwatches designed to showcase the new operating system is a useful reminder of the state of the overall smartwatch market, which appears to have struggled outside of the Apple Watch. But it also serves as a useful prompt to consider the general state of the wearables market, which was supposed to be the hot thing just a few years ago but seems to be fizzling now. What is the true state of that market and where will it go from here?

Still Grading Smartwatches on a Curve

It’s illuminating to go back roughly two and a half years to mid-2014, when I wrote a piece about smartwatches being graded on a curve. Most of the smartwatches on offer then were poorly designed, had limited functionality and, consequently, sold in very small numbers. Yet they were generally given pretty decent ratings by the major gadget blogs anyway. At that point, smartwatches really weren’t selling and the reasons were fairly obvious.

Then along came the Apple Watch, instantly selling in much larger numbers than other smartwatches, taking half or more of the unit shipments and the vast majority of the revenues, as well as jumping into the top ranks of all watchmakers by revenue in its first year. I, along with a number of others, had thought Apple’s entry would prompt Google and its OEM partners to up their game and compete more effectively, much as smartphone vendors did following the launch of the iPhone and tablet vendors did following the iPad launch, but that hasn’t really happened. Android Wear continues to account for only a small minority of total smartwatch shipments and actually shrank last year. Samsung and Fitbit have taken the second and third slots in the market, mostly without help from Android Wear.

This week’s announcements tweak some of the software facets but they also continue the bombardment of new features that is often characteristic of the Android approach – throw as many radios and other features into the device as possible and don’t worry about the effect on battery life or form factor. What results is a pair of watches from LG which offer either massive bulk with all the features except decent battery life, or a slimmer profile with none of the new hardware features. Meanwhile, other Android Wear pioneers like Motorola are sitting this round out.

What we have today is not so much a smartwatch market but three distinct markets with three dominant players:

  • The Apple Watch, which really exists in a niche of its own as a premium fashion device and focuses on health and fitness tracking but also does notifications and glanceable information well
  • Dedicated fitness trackers, a market increasingly dominated by Fitbit in most western markets
  • Samsung’s various wearables, many of which are bundled and/or heavily discounted with a smartphone purchase and therefore thrive on a different business model from the rest

Everyone else basically falls into the cracks between those three markets, falling short of the products offered by the three companies that dominate them — that includes Android Wear.

The Broader Wearables Market has Fallen Short

If you were to go back just a few years, you’d have heard about wearables at every industry conference and seen lots of ambitious forecasts for the category – I found one from 2014 which predicted well over 100 million unit shipments in 2016, including five million smart glasses, 11 million wearable headsets, 15 million wearable cameras, 38 million fitness trackers, and 46 million smartwatches. But, of course, the market was nowhere near that in 2016. There were perhaps half that many smartwatches sold, almost that many fitness trackers, maybe five million VR headsets, and almost no smart glasses.

Why has the broader wearables category not taken off as predicted? Ultimately, it comes down to the fact that excitement for the category was driven almost entirely by vendors of gear and components and not at all by the market. The jobs to be done by these new wearables were far from clear and the offerings in the market did a poor job of convincing anyone they were worth having. Smartwatches have largely failed because they have borrowed the use case of the smartphone without explaining why performing those tasks was better on a smaller, less powerful device.

The exceptions – those wearable categories that have performed well – are those which have articulated a use case and executed on it effectively. Arguably, the single biggest use case has been fitness tracking. Even the Apple Watch, which does much more, has been most compelling as a fitness tracker and that was reflected in last year’s fall event, which focused on these features more than anything else. But Apple isn’t just making glorified Fitbits – it has also recognized that, when it comes to a watch, people want the device to perform one of the other jobs a watch does for them: look good on the wrist with a variety of outfits. That means a premium device with premium materials and a variety of options for the finish, colors, and so on. Apple has fleshed out the value proposition with additional features like some of the best notification management on a smartwatch and good use of the rectangular screen with complications.

Where Future Growth Might Come From

Having said that, the market served by fitness trackers and premium fitness-centric digital watches is still relatively small. This was never going to be a smartphone-sized market but it’s a heck of a long way from a tablet-sized market at this point and it doesn’t seem to be growing very much. This raises the question of whether it can ever be any bigger and whether there are other categories of wearables that could still find success.

I still believe the smartwatch has potential beyond its current incarnation but I don’t think the technology is ready yet. One of LG’s new watches, and a number of existing ones, feature 3G or LTE connectivity that allows them to operate independently of a phone but the tradeoffs are much like those in early LTE smartphones – they drain the battery and add bulk for relatively little additional value. That will change as the technology matures and becomes both smaller and more battery efficient. The ongoing miniaturization enabled by smartphones will continue to drive other technologies to perform better at smaller sizes and smartwatch CPUs will become more powerful and battery efficient as well. As that happens, we could see more compelling app models on smartwatches, which could finally lead to the kind of innovation that helped smartphones take off and which has been largely missing in the smartwatch space so far.

Until then, though, the most promising wearables categories are those which extend the functionality of the smartphone in some way. Some time ago, I wrote about the fact the smartphone would be increasingly extended in various ways as input and output methods and processing tasks were outsourced to other devices and services. Smartwatches are an example of this, putting input and output permanently out in the open on the wrist rather than buried in a pocket and providing new ways to see notifications, trigger voice assistants, follow navigation directions, and even carry on phone calls. But this extension needn’t stop with smartwatches – the next generation of audio technologies will further extend the smartphone into the ears, providing both audio feedback and input via built-in microphones. If you own an Apple Watch and AirPods today, you can keep your iPhone in your pocket much more, relying on visual feedback on the Watch and audio feedback and input via the AirPods exclusively for some tasks. We’ll see a lot more of this.

When it comes to visual interfaces, augmented and virtual reality have quite a bit of promise but, for today, are mostly occasional-use technologies rather than true wearables, used throughout the day in the way smartwatches are. Virtual reality allows for a much more immersive display than a smartphone ever could and, if headsets weren’t so bulky, they’d make for an interesting way to carry room-scale displays with you everywhere you go as another extension of the smartphone. Augmented reality, especially in a form factor resembling normal glasses, has great promise too, but we’re likely several years from that future as well.

Again, those wearable categories that succeed will do so by solving real pain points or providing real additional value, not by merely sticking a set of smartphone-based components into a new form factor and slapping a logo on it. That approach has characterized too much of what we’ve seen in the wearables category so far and it won’t work any better in the future than it has in the past. But it’s going to be a slow build to meaningful numbers in all these categories, not a massive explosion along the lines of what we’ve seen in previous categories.

Apple Watch + AirPods: Show Me the Magic!

Last week, Apple released its Q1 FY2017 earnings and announced its highest quarterly revenue yet as well as all-time revenue records for iPhone, Services, Macs and Apple Watch. During the quarter, Apple sold 78.3 million iPhones, prompting, once again, a discussion of Apple’s strong dependency on the iPhone. While that observation is fair, and one Apple is well aware of, the criticism that Apple has failed to bring to market another product with the same broad appeal as the iPhone, commanding the same premium, is less so. It is less fair, not because it is untrue, but because such commentary fails to account for the fact that no other single device is likely to have the impact smartphones have had on technology and, more broadly, on our lives.

Another common argument shared by some Apple critics is that the inability to deliver a killer product rests solely with Tim Cook. When we consider the two new lines of products Apple brought to market under Cook — Apple Watch and AirPods — I struggle to see how people could honestly believe Cook is failing.

Apple Watch and AirPods are very different products that have a lot to offer Apple as a brand, both as standalone products but, even more so, when they come together.

Apple Watch Gives Back What You Put In

I have been wearing an Apple Watch every day since it first came out. Yet, whenever people ask me if I love it, I hesitate to say I do because it is hard to explain why. Apple Watch gives back what you put in. You need to invest some time in setting up your notification preferences, picking your apps, buying into fitness, and adding your credit cards. Most importantly, you need to trust Apple Watch to pick up some of the responsibility you have given to your iPhone for so long. When you do so, Apple Watch becomes a trusted companion you will not easily go without.

The problem Apple Watch is facing is that it did not reinvent the smartwatch category — it improved it. And, as consumers remain unclear on what role smartwatches play, it is hard for many to understand the value Apple Watch could bring to their connected life. In a recent study we ran at Creative Strategies, we asked US consumers if there was a tech product they purchased or received as a gift they liked more than they thought they would. When we looked at which device Apple Watch owners mentioned, if any, we found 53% said Apple Watch, proving there is certainly a return on the investment put into the product. Across all early tech consumers, however, only 9% mentioned Apple Watch as the device they liked more than they thought they would.

Over the past few months, with the arrival of Apple Watch Series 2 and watchOS 3, we have heard Apple compare Apple Watch’s performance to the watch industry and not just because it makes the numbers look better. Apple understands the real magic is what mainstream consumers find in Apple Watch as an upgrade from a traditional watch rather than what early tech adopters might find in comparing Apple Watch to previously owned wearables as the above data suggest. Data aside, if we consider how Apple is dominating the smartwatch market and how competitors are moving more and more to make their smartwatches look like a traditional watch, it seems natural to use that market as a measure of comparison. As John Gruber said: it is time to consider Apple Watch as a watch.

Apple AirPods, Practically Magic!

This is the slogan of Apple’s AirPods commercial and, if you ask anyone who has tried them, they will agree. The feeling of magic does not come from the user being aware of Apple’s unique approach of playing two separate, simultaneous streams of music, one into each AirPod. The magic is delivered as soon as you pair your AirPods, by simply taking away the pain previously inflicted by Bluetooth-enabled headphones that required you to pay attention to flashing colored lights while pressing odd buttons. The initial ease of use carries over into everything you do as you let Siri work its way into your ears.

In the study mentioned earlier, among the early tech adopters who said there was a product they liked more than they thought they would, 38% mentioned Apple AirPods. This number is even more telling when you consider it refers to early tech buyers, among whom most AirPods purchases (91%) are occurring today.

While overall performance is great, I do strongly believe most users are buying first and foremost into the magic and will strongly recommend AirPods based on their visceral experience. The AirPods magic is also what has prompted some commentators to say Apple got its groove back and AirPods are the kind of product Steve Jobs would have done. So does magic sell more?

Magic Might Be Short-lived, Usefulness Rarely Is

No, magic does not necessarily sell more products but it makes them easier to sell. Instant magic makes for a product that sells itself but such a product might have to be conceptually simpler in the experience it delivers in order for the magic to work. You know how to use headphones. There is very little you need to learn in order to appreciate AirPods and what is appreciated is common across the user base. This helps tremendously with user promotion, something consumers look for more and more when researching what products to buy. Other products, like Apple Watch, are more complex in the value they deliver because users will appreciate different features. What I might see as magic, someone else might not. This makes for a more complex sales process in the store as well as in the marketing message. Yet, the engagement the user will have with the product will not be in any way less meaningful. A way around this complexity could be to focus on a feature with broad appeal and turn that into magic. Apple is currently doing exactly that, quite successfully, with the “depth effect” on iPhone 7. Most iPhone users use their phone as their main camera and get a visceral gratification from the depth effect.

What is particularly fascinating about Apple Watch and AirPods is that using them together allows them to feed off of each other’s strength to deliver a whole new kind of magic. I strongly believe more and more of Apple’s future success will be built on the magic of devices working together at home, at work or in the car.

The Missing Map from Silicon Valley to Main Street

Regardless of where you sit on the political spectrum, the maelstrom created by the last US presidential election uncovered a painful reality for the tech industry: a striking gap between it and much of mainstream America.

It’s not that Americans of all socioeconomic levels aren’t using the output of the tech industry. From smartphones to social media and PCs to online shopping, US citizens are, of course, voracious consumers of all things tech.

The problem is a serious lack of empathy and understanding from people who work within the tech industry toward those outside their rarefied milieu. To its credit, the tech industry has created enormous amounts of wealth and many high-paying jobs. Very few of those jobs, however, are relevant or available to a large swath of the US population. While I haven’t seen any official breakdowns, I’m not aware of many middle-income jobs in the tech industry (for reference, according to US Census statistics, the average US family income in 2015 was $55,755). Heck, interns at big tech companies often get paid more than that.

Not surprisingly, that kind of income disparity is bound to create some resentment. Yes, on the one hand, the significantly higher salaries often found in tech jobs do make the goal of working in tech an attractive one for many who aspire to break into the field. But not everyone can (nor wants to) work in tech.

A functioning society, of course, requires people to work across a range of jobs and at a range of income levels. But it does seem rather disconcerting that an industry that is responsible for driving so much growth across the economy, and that houses the most well-known and well-respected brands in the world, does so little to employ people at mainstream income levels. For all of its focus on social justice and other progressive concerns, the tech industry displays a rather shocking lack of interest in economic inclusivity, which is arguably at the very heart of a just society.

Of course, fixing the problem isn’t easy. But it does seem like there are a few basic ideas that could help and a lot more “thinking different” that might be worth a try. For one thing, the fact that the tech industry notoriously outsources (or subcontracts) nearly every lower and middle-income job to another firm (all in the name of cost-cutting) needs to be re-examined. From bus drivers to janitorial and security staff to, yes, manufacturing jobs, it’s high time to start making people who do work for a company employees of that company, with all the rights and benefits that entails. Yes, it could negatively impact the bottom line (though, in the big scheme of things, not by very much), but it would be a tremendously positive step for many. All it takes is some fiscal stamina and a bit of guts.

In addition, the whole mindset of gig-based companies (such as Uber) needs to be reconsidered. Maybe the original intentions for generating a bit of extra income were good, but when millions of people start trying to build their lives around pay-for-hire work, it’s time to start making them the middle-income employees they’ve earned the right to be.

It’s also time to start thinking about packaging and selling technology-driven products in entirely new ways. There might be ways to build entire new sub-economies around, for example, helping farmers grow their crops more efficiently through the use of sensors and other IoT-based technologies. There is also room for products or services that allow the creation of small businesses, such as a tech franchise that helps local bakers, restaurants, florists, or shoe repair shops run their businesses a bit more efficiently while providing “door-to-door” service.

Part of the problem is that the tech industry has become so obsessed with offering only the latest, most feature-rich products and services through high-income jobs that it has lost sight of the fact that some people only need very simple “older” tech, which could be delivered in a more modest manner through comparatively lower-paying jobs.

Rather than planning for a societal collapse, it’s time to start mapping out a more positive, productive future that links Silicon Valley to Main Street in a useful, meaningful way.

Privacy, Security, and the Mind of the Consumer

A few weeks ago, we decided to launch a US-based consumer study focused on understanding how non-techie consumers think about both privacy and security. Our goal was to learn what consumers understand both these terms to mean, what core behavior changes (if any) they make with products and services based on their privacy or security concerns, and which companies they trust more than others in both cases. Prior to launching the study, I looked at many different studies done by consulting companies, banks, and financial institutions, as well as government studies, to see what kinds of questions had been asked before. I also spent some time interviewing consumers to hear how they talk about privacy and security when it comes to different products and services.

Even with all the prior work put in, this was still one of the hardest studies in which to get consumer participation. The difficulty of the subject matter, along with the high initial abandonment rate the study suffered, was a lesson in and of itself. We left comment boxes in certain places in the study and, quite frequently, consumers felt they weren’t adequately informed enough to participate, didn’t have strong opinions, didn’t want to think about it or just didn’t care. The open comments sections were some of the places we received the keenest insights into how consumers view these subjects. With a few wording changes and adaptations to the study, we finally got enough people to complete it for it to be statistically representative.

Privacy and Security, Same or Different?
We broke the study out into two sections: one on privacy and the other on security. We asked consumers what they felt each term meant and left the same answer options for both questions. Below is the merged chart for both the privacy definition question and the security definition question.

As you can see, consumers felt the strongest definition of privacy was “Not selling personal data, or letting third parties access personal data.” When it came to security, consumers felt the strongest definition was “Secure and encrypt my data so no one can hack or steal it”. But, as our gut sense suggested, there is a fair amount of overlap in how consumers think about privacy and security, with the same two answers ranking quite high for both questions. A core conclusion was that, while privacy and security are two separate things, consumers tend to blend their understanding of them into the same definition. In the mind of the consumer, what is private is secure and what is secure is private.

Who Do You Trust With Your Privacy and Security?
Thanks to some other studies, I had read quite a bit about how much consumers trust institutions like government and financial services. We wanted to look at some of the bigger names in tech and social media as a start.

Apple and Microsoft were nearly neck and neck for the top spot in consumer trust when it came to privacy. Apple squeezed out the top spot overall and, not surprisingly, the top spot among iPhone owners. Microsoft was the company Android owners most trusted with privacy, followed by Apple. Google was in a solid third place regardless of age and smartphone owned, followed closely by Amazon and then Samsung. We asked consumers to rank these companies with a “1” being the most trusted and “8” being the least trusted. Facebook came in at 5.7, followed by Twitter at 6.2 and, lastly, Snapchat at 6.7. Interestingly, the ranking did not change much even when we looked at younger consumers aged 18-25, who are within the Snapchat demographic. Snapchat moved up to 7th place with this demographic and Twitter was last. Snapchat falling into last place overall is not surprising since a good portion of our respondents did not have a Snapchat account or use the service.

Here are the top-line results of the company rankings on privacy. The results of the rankings on security were not much different.

Reading the comments about why consumers made some of the choices they did proved insightful. It is clear there is an understandable trade-off consumers make when they use things they know are more public, like Facebook. Consumers know what they post is open for anyone to see. Therefore, their feelings around privacy for these services are somewhat less strict. With companies where their actions and behavior are not public, like Apple, Microsoft, and Google, consumers seem to embrace a higher degree of trust: what they do on their phones and PCs, and even what they search for, is not publicly tied back to them as individuals the way what they do on social media can be tied directly back to who they are. This became clear when we examined the behavioral changes they make on social media. The top answer was to be more intentional and careful about what they share/post on social media.

Google was an interesting one for us to explore. We created a few questions just around Google and what consumers believe Google does and does not know about them. While most consumers use Google’s search, they acknowledge the creepiness factor of seeing ads on a different website for things they searched for on Google. Interestingly, while Google was the third most trusted company in both the privacy and security rankings, 52% of consumers said they really have no idea how much Google knows about them.

Privacy and Security Fanatics
We know there are some hard-core consumers with very strong feelings about their privacy and security but, until now, we didn’t know what percent of the market these consumers made up. We asked some specific questions to help us narrow the field to those who are the most privacy and security conscious. For example, 20.3% of our respondents said they cover their device’s camera with a piece of tape. 13% said they have installed privacy-enhancing plug-ins in their device’s browser. 15% installed some kind of security software on their smartphone. 11% specifically switched their text/messaging service to one they consider more private and secure. We asked many more questions to narrow this down but, in each instance, we did not see responses go above the 20% mark. This leads me to believe the most privacy and security conscious consumers make up around 15-20% of the US market. This demographic tends to skew older — 50+ and heavily female.

While not a large group, it is helpful to get an idea of the size of the market for more privacy and security conscious consumers, especially as more companies are looking to sell products and services with a heavy emphasis on these issues.

As I mentioned at the start, the most interesting takeaway was the difficulty of the subject matter itself, a topic where there is more uncertainty than certainty. I am convinced that any company message that over-indexes on the privacy or security angle will only resonate with a portion of the market. Still, I encourage companies to keep pushing both privacy and security forward on behalf of the consumer simply because it is the right thing to do. Consumers will appreciate it, even if they don’t fully understand, or care about, all that is involved.

Collaboration Among the Tech Giants

One of the interesting dynamics to observe in the tech industry is the reputations and relations companies have, not only with the public but also among other companies, including their peers. For the former, Apple, Google, Uber, Microsoft, Amazon, Samsung, and others enjoy very positive relations with their customers. We interact with many of these companies and their products multiple times a day, whether it’s to make a purchase, use one of their devices, or access their services. It’s become so routine, we wonder how we ever got along without them.

But the dynamics vary more widely when we look at how these companies relate with their peers, partners, and competitors, at least from what we are able to see publicly.

Google, Uber, and Amazon, as examples, seem to enjoy strong relations with other companies that have come to depend on them for growing their commerce and selling their products. Amazon has built a huge business with their AWS (Amazon Web Services) that depends upon strong business relationships. Similarly, it’s now working with scores of companies to integrate their Alexa voice technology. Uber and Google have strong relationships with automobile companies collaborating in the development of autonomous cars. Each company benefits from these collaborations, even if they compete in other areas.

Apple, on the other hand, seems to be different. Whether it’s intentional or a result of their history of preferring to do everything themselves, they seem to have more difficult relationships with other companies and do far fewer collaborations. They seem, at times, to be less trusted, not unlike the Microsoft of many years ago. Rarely do you see Apple working in collaboration with Google, Amazon or the car companies on really big initiatives. Apple Mail struggled to work well with Gmail and it took more than a year of haggling back and forth to fix it.

Even with suppliers, the relationships can get very rocky and public, namely Apple’s fights with Google, Samsung and now Qualcomm, the latter two being suppliers of critical elements of Apple’s products.

Apple’s work with the automobile companies to integrate the iPhone has been far less successful than what it originally set out to do — building the entire user interface for the automobile companies. While Apple CarPlay integration has found good acceptance, along with Google’s Android Auto, the auto companies see them as interim solutions until they can develop their own.

Apple’s once-stated goal of building Apple TV into a stand-alone service to compete with the cable companies has also failed to materialize, likely a result of the entertainment companies not trusting Apple after what it did to the music business.

Perhaps this is intentional or a reflection of their history of being so innovative and averse to following what other companies were doing. It’s the walled garden approach that’s worked so well for their products. I’m not suggesting what they do is wrong. It’s just different in an industry that’s full of competitors who are also collaborators.

Facebook’s Next Big Opportunity

Facebook once again reported very good earnings on Wednesday afternoon, with massive growth in both user numbers and ARPU driving both significant revenue and profit growth. But Facebook also warned of increasing saturation in ad load, which it says will lead to much slower overall ad revenue growth later this year. In that context, it’s worth thinking a little about what other opportunities for revenue growth still lie ahead. Mark Zuckerberg provided something of a hint on the earnings call.

As it currently stands, Facebook is still primarily a social network, albeit one that’s at least as much about content as it is about communication with friends and family. The experience is dominated by a variety of content, from “organic” text posts, photos, and videos shared by its users to a plethora of other content which originates elsewhere and is merely forwarded on by users of the service. That provides more than enough content for users to consume, to the extent that Facebook has long since filtered the total universe of content that could be shown to users through algorithms which prioritize the things likely to drive interest and engagement, and therefore keep users on the service to be shown more ads.

However, the fundamental rule on Facebook is still that you only really see things your friends have engaged with in some way, whether by actively sharing them or merely liking or commenting on them. Facebook still uses engagement by your friends as an important signal about whether you’re likely to be interested in something too. Your friends are the filters here, along with your own established preferences, both explicit and implicit, about what you’re interested in.

But what if your friends were no longer a filter or limiting factor on what you could be shown? What if other factors could teach Facebook both what you’re interested in and which other pieces of content shared elsewhere on Facebook might be interesting to you? Mark Zuckerberg talked on the earnings call about some of Facebook’s AI efforts aimed at recognizing and understanding the content of not just text posts but photos and videos too. Once Facebook understands the content, it can make a judgment about whether it might be of interest to a given user who has previously engaged with similar content and show it to them, regardless of whether it’s been shared by a friend.

The big advantage for Facebook is it would no longer be limited to the things friends have shared or engaged with – it can show you anything from among a much wider universe of possible content, much of which might actually drive higher engagement because of the subject matter than the things shared by friends. Perhaps you have hobbies distinct from those of your real life friends which are nonetheless shared by many others on Facebook. Perhaps your political views are different from many of those you’re connected to on Facebook. Your personal interests could connect you to a lot more content on Facebook if your friends are no longer a primary filter.

Where this gets particularly interesting is video, which is a key focus for Facebook and one with potential to drive significant additional ad revenue. YouTube has never been limited to showing you content which has only been shared or engaged with by your Gmail contacts. Opening up in this way would allow Facebook to act a lot more like YouTube in showing you recommended videos from outside your personal social graph. Better targeted content, especially content with lucrative ads attached, could drive even higher revenue without necessarily increasing ad load significantly.

But new video-centric experiences could also provide entirely new places for Facebook to place ads and, therefore, raise the ceiling on ad load. Facebook is apparently working on a video-centric app for Apple TV and similar boxes – a video feed, free of friend-based filtering, could be a great fit for such an app, surfacing the best videos Facebook has to offer based on your interests and combining user-generated and professional content.

Though Facebook has repeatedly warned about saturating ad load, it's clear it hasn't given up on finding new places to put ads – mid-roll video ads, ads in Instagram Stories, ad experiments in Messenger and more over recent months demonstrate its commitment to finding new slots to load with ads. This video push seems another obvious way to raise the ceiling. As such, even with ad load constraints slowing revenue growth in 2017, I think there's still plenty of room for longer-term growth, especially around video.

Is not being Google a Competitive Advantage?

Every major tech battle has had a few names that dominated the field. Some have survived over the years and moved from battle to battle as new names joined in. Artificial Intelligence is the latest battleground — from digital assistants all the way to autonomous driving. While we are still very much in the "recruiting soldiers and training for battle" phase, there are certain companies we look to when assessing the market and the progress made: Amazon, Apple, Google, and Microsoft.

Most of the work happening today is on building the foundation for the future. Companies are busy gathering data, training networks, building ecosystems and it seems some brands are also trying to establish themselves as an alternative to names some already see as winners. They are doing that by looking at the bigger picture and trying to build a platform that will become the cornerstone of areas such as voice-first and location.

The thirst for new platform providers comes from the desire not to put all the eggs in one basket and to hedge bets on what, or who, will win in the end. Vendors who embraced Android find themselves struggling to hold hardware margins and differentiate on services, all while competing with their platform provider for the most engaged and lucrative users.

Amazon’s AI Platform

We discussed in our post-CES podcast how, this year at the show, AI was everywhere and, when it came to the home in particular, Amazon's Alexa was everywhere. Amazon was early out of the gate with its Amazon Echo products and established Alexa as our primary digital assistant. It did not take long, however, to see Amazon's interest was much broader than that. Amazon wanted to establish Alexa as the preferred voice-control platform. Amazon did not have a phone to build Alexa on, and the home seemed like the most reasonable place to start given that, in the home, we tend to use several devices and not rely entirely on our smartphones. The location was right and so was the timing, as Amazon rode the bet many early tech consumers were making on connected homes.

Alexa’s skills have been growing quickly since the release of the API set allowing manufacturers of connected home devices to speak to Echo devices. But Amazon did not stop there. More recently, its Alexa Voice Service allowed vendors to actually put Alexa directly into their devices, taking them from “works with Alexa” to “Alexa inside.” Clearly, Amazon is not doing this out of the goodness of Bezos’ heart. Amazon’s ultimate goal is not selling the most connected speakers but rather becoming the de facto platform for AI in the home.
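For a sense of how low Amazon has set the barrier to entry, here is a minimal sketch of a custom skill handler, written as an AWS Lambda function answering the Alexa Skills Kit's JSON requests. The intent name and the replies are hypothetical; a real skill would also be defined and registered in the Alexa developer console.

```python
def build_response(text, end_session=True):
    """Wrap plain-text speech in the Alexa Skills Kit response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Entry point Alexa invokes for each user utterance."""
    request = event["request"]

    if request["type"] == "LaunchRequest":
        # User invoked the skill without a specific request.
        return build_response("Which light should I control?", end_session=False)

    if request["type"] == "IntentRequest":
        if request["intent"]["name"] == "TurnOnLightIntent":  # hypothetical intent
            # A real skill would call the device maker's cloud API here.
            return build_response("Okay, the light is on.")

    return build_response("Sorry, I didn't catch that.")
```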

Let's be clear. Amazon's early success is not just because of how open the ecosystem is. Companies are looking for alternative partners, ones that offer openness but also predictability about whether the platform owner will bring to market products or services in direct competition with theirs. It is too early to say whether the trust vendors are putting in Amazon is well placed but, for now, the e-commerce giant presents less of a threat to the many brands trying to play a significant role in the connected home and voice-first era.

HERE’s Location Platform

Outside the home, when we talk about AI we often talk about autonomous cars. In this area, we are even further away than in the connected home space from seeing the final impact of AI on transportation overall, not just on self-driving cars. Yet, the groundwork done now is what will make everything else possible. When we talk about data gathering and network training in this context, map building is key.

If you played the name-association game with your friends and said "map", I bet all of them would reply "Google". The search giant has been in the map business for many years and has won consumers' preference. HERE is not new to the game either. The mapping and location service company was once owned by Nokia and was built on the acquisition of Navteq. At the end of 2015, HERE was sold to a car-maker consortium of Audi, BMW and Daimler. In late 2016, Tencent, Navinfo and GIC announced their plans to jointly acquire a 10% stake and, in early 2017, Intel acquired a 15% stake. HERE has also announced strategic partnerships with Navinfo, Mobileye, Intel and Nvidia, all centered on maps for automated vehicles.

HERE’s ownership by a car consortium was the first signal that some car makers were starting to consider collaboration over isolationism driven by proprietary technologies. The need to build a broader data set and learn from experts in other areas, all in the attempt to beat tech companies such as Google, Apple, and Tesla to the finish line, has been growing and HERE has certainly benefited from this urgency.

While HERE maps have had somewhat limited traction with consumers, especially in the US, its location platform business has been popular in the enterprise space with big organizations such as Amazon and Microsoft listed as clients.

As carmakers, as well as municipalities, ready themselves for a self-driving world, I wonder just how much HERE's core competence, along with a business model that does not rely on monetizing search and advertising, will play a role in deciding who the right partner is.

When it comes to voice-first and location, Amazon and HERE have transitioned from providing a service to providing a platform. They have done so at a crucial time when players who might be wondering if they can “beat Google” don’t necessarily just want to “join Google”.

The Network vs. The Computer

The history of the technology industry has seen several swings back and forth between dependence on a network that delivers the output of centralized computing resources and client devices that do most of the computing work on their own.

As we start to head towards the Gigabit LTE and then 5G era, when increasingly fast wide-area wireless networks make access to massive cloud-based computing resources significantly easier, there’s a fundamental question that must be asked. Do we still need powerful client devices?

Having just witnessed a demo of Gigabit LTE put on by Australian carrier Telstra, along with network equipment provider Ericsson, mobile router maker Netgear, and modem maker Qualcomm, I can say the question is becoming increasingly relevant. Thanks to advancements in network and modem (specifically Category 16 LTE) technologies, the group demonstrated broadband download speeds of over 900 Mb/s (conveniently rounded up to 1 Gb/s) that Telstra will officially unveil in two weeks. Best of all, Gigabit LTE is expected to come to over 15 carriers around the world (including several in the US) before the end of 2017.

Looking forward, the promise of 5G is not only these faster download speeds but also nearly instantaneous (1 millisecond) response times. This latter point, referred to as ultra-low latency, is critical for understanding the real potential impact of future network technology developments like 5G. Even today, the lack of completely consistent, reliable network speeds is a key reason why we continue to need (and use) an array of devices with a great deal of local computing power.

Sure, today's 4G and WiFi networks can be very fast and work well for many applications, but there isn't the kind of time-sensitive prioritization of the data on the networks to allow them to be completely relied on for mission critical applications. Plus, overloaded networks and other fairly common challenges to connectivity lead to the kinds of buffering, stuttering and other problems with which we are all quite familiar. If 5G can live up to its promise, however, very fast and very consistent network performance with little to no latency will allow it to be reliably used for applications like autonomous driving, where milliseconds could mean lives.

In fact, the speed and consistency of 5G could essentially turn cloud-based datacenters into the equivalent of directly-attached computing peripherals for our devices. Some of the throughput numbers from Gigabit LTE are now starting to match those of accessing local storage over an internal device connection, believe it or not. In other words, with these kinds of connection speeds, it's essentially possible to make the cloud local.
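A quick back-of-envelope sketch, using my own illustrative numbers rather than figures from the Telstra demo, shows how latency and bandwidth combine into total fetch time, and why ultra-low latency matters as much as raw speed in making the cloud feel local:

```python
def fetch_time_ms(payload_mb, bandwidth_gbps, rtt_ms):
    """Total time to fetch a payload: network round trip plus transfer."""
    transfer_ms = payload_mb * 8 / bandwidth_gbps  # MB -> megabits over a Gb/s link
    return rtt_ms + transfer_ms

# A small request (100 KB) vs. a bulk transfer (100 MB):
for size_mb in (0.1, 100):
    lte = fetch_time_ms(size_mb, bandwidth_gbps=1.0, rtt_ms=50)   # Gigabit LTE
    nr  = fetch_time_ms(size_mb, bandwidth_gbps=10.0, rtt_ms=1)   # 5G targets
    print(f"{size_mb:>6} MB  LTE: {lte:7.1f} ms   5G: {nr:6.1f} ms")

# 0.1 MB -> LTE ~50.8 ms vs 5G ~1.1 ms: latency dominates small requests.
# 100 MB -> LTE ~850 ms vs 5G ~81 ms: bandwidth dominates bulk transfers.
```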

Given that the amount of computing power in these cloud-based datacenters will always dwarf what’s available in any given device, the question again arises, what happens to client devices? Can they be dramatically simplified into what’s called a “thin client” that does little more than display the results of what the cloud-based datacenters generate?

As logical as that may at first sound, history has shown it's never quite that simple. Certainly, in some environments and for some applications, that model has a great deal of promise. Just as we continue to see some companies use thin clients in place of PCs for things like call centers, remote workers and other similar environments, so too will we see certain applications where the need for local computing horsepower is very low.

In fact, smart speakers like the Amazon Echo and Google Home are modern-day thin clients that do very little computing locally and depend almost completely on a speedy network connection to a cloud-based datacenter to do their work.

When you start to dig a bit deeper into how these devices work, however, you start to realize why the notion of powerful computing clients will not only continue to exist, but likely even expand in the era of Gigabit LTE, 5G and even faster WiFi networks. In the case of something like an Echo, there are several tasks that must be done locally before any requests are sent to the cloud. First, you have to signify that you want it to listen, and then the audio needs to go through a pre-processing “cleanup” that helps ensure a more accurate response to what you’ve said.
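As a rough illustration of that local pipeline, here is the shape of the on-device work that happens before anything touches the network. This is not Amazon's actual code; the frame size, the energy threshold standing in for a real wake-word model, and the cleanup step are all invented for the sketch.

```python
import numpy as np

FRAME = 1600            # 100 ms of 16 kHz audio per frame (assumed)
WAKE_THRESHOLD = 0.02   # stand-in for a real on-device wake-word model

def heard_wake_word(frame):
    """Local detection step. Real devices run a small neural model;
    a plain energy threshold keeps the sketch self-contained."""
    return float(np.mean(frame ** 2)) > WAKE_THRESHOLD

def clean_up(frame):
    """Local pre-processing stand-in for echo cancellation,
    beam-forming and noise suppression before upload."""
    return frame - frame.mean()  # e.g. remove DC offset

def send_to_cloud(audio):
    print(f"uploading {audio.size} samples for speech recognition")

# Simulated microphone input: background noise, then a louder utterance.
quiet  = np.random.randn(FRAME) * 0.01
speech = np.random.randn(FRAME) * 0.3

for frame in (quiet, speech):
    if heard_wake_word(frame):           # step 1: listen locally
        send_to_cloud(clean_up(frame))   # step 2: clean up locally, then upload
```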

Over time, those local steps are likely to increase, placing more demands on the local device. For example, having the ability to recognize who is speaking (speaker dependence) is a critical capability that will likely occur on the device. In addition, the ability to perform certain tasks without needing to access a network (such as locally controlling devices within your home), will drive demand for more local computing capability, particularly for AI-type applications like the natural language processing used by these devices.

AI-based computing requirements across several different applications, in fact, are likely going to drive computing demands on client devices for some time to come. From autonomous or assisted driving features on cars, to digital personal assistants on smartphones and PCs, the future will be filled with AI-based features across all our devices. Right now, most of the attention around AI has been in the datacenter because of the enormous computing requirements that it entails. Eventually, though, the ability to run more AI-based algorithms locally, a process often called inferencing, will be essential. Even more demanding tasks to build those algorithms, often called deep learning or machine learning, will continue to run in the data center. The results of those efforts will lead to the creation of more advanced inferencing algorithms, which can then be sent down to the local device in a virtuous cycle of AI development.
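A toy sketch of that division of labor, with an invented two-feature model rather than any vendor's pipeline: the compute-heavy training loop runs in the datacenter, and only the small set of learned weights ships down to the device, where inferencing is a single cheap calculation.

```python
import numpy as np

def train_in_datacenter(X, y, epochs=500, lr=0.1):
    """Heavy lifting: learn logistic-regression weights from labeled data."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        pred = 1 / (1 + np.exp(-X @ w))         # sigmoid
        w -= lr * X.T @ (pred - y) / len(y)     # gradient step
    return w                                     # the "model" sent to devices

def infer_on_device(w, x):
    """Cheap local inferencing: one dot product and a threshold."""
    return 1 / (1 + np.exp(-x @ w)) > 0.5

# Toy training set: the label is 1 when feature 2 outweighs feature 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 1] > X[:, 0]).astype(float)

weights = train_in_datacenter(X, y)                      # runs in the cloud
print(infer_on_device(weights, np.array([-1.0, 2.0])))   # True, computed locally
```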

Admittedly, it can get a bit complicated to think through all of this, but the bottom line is that a future driven by a combination of fast networks and powerful computing devices working together offers the potential for some amazing applications. Early tech pioneer John Gage of Sun Microsystems famously argued that the network is the computer, but it increasingly looks like the computer is really the network and the sum of its connected powerful parts.

Why Apple can’t Lose the Future Services Battle

I recently found myself in a conversation with some friends I made on Twitter (thanks again to Dan M. (@OhMDee), @zcichy, and Eric (@mobile_reach); yes, you can make friends on Twitter). Our conversation was a frequently productive and sometimes frustrating back and forth on Apple's privacy position and the risks it may pose to the company's future competitiveness in services, namely AI/Siri.

While Apple is not going to be a pure-play services company, there is no doubt services will play a much larger role in the consumer experience in the coming years. It is reasonable to believe one's ability to compete on features around machine learning and, eventually, AI will depend on the depth and quality of the data acquired to train networks and AI assistants. So let's start by looking at Apple's relationship with customer behavior data.

Is Apple Getting Enough Useful Data?
Apple's relationship with customer data has always been clear. If you agree to share analytics/diagnostic information with Apple, you are opting in to share some data with Apple. The company is upfront about what this data is used for, stating very clearly it collects data on user behavior to help make products and services better in the future. Pointedly, a key difference here, as opposed to many other services, is that even if you opt out of sharing data, you still get to use the full features of the service. With its advancement in privacy tactics, including differential privacy, Apple purposefully anonymizes that data so anything collected — things you say to Siri, what apps you install, what news you read in Apple News, etc. — can never be tied back to the individual. The technical term here is Personally Identifiable Information (PII); Apple's goal is to make it so nothing it collects can ever be tied to PII. While no one will dispute that Apple's attempts to go above and beyond to protect our privacy are admirable, there are a few concerning points I'd like to call out.
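Apple has not published the exact mechanism behind its differential privacy work, but the flavor of the idea can be shown with the classic randomized-response technique: each device adds coin-flip noise before reporting, so any individual report is deniable while the aggregate statistic stays recoverable. The 30% feature-usage rate below is invented for the example.

```python
import random

def randomized_response(truth: bool) -> bool:
    """Report the truth only half the time; otherwise answer at random.
    Any single report could be a coin flip, so it reveals almost
    nothing about the individual who sent it."""
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

def estimate_true_rate(reports):
    """Invert the noise: P(yes) = 0.5 * p + 0.25, so p = 2 * P(yes) - 0.5."""
    return 2 * (sum(reports) / len(reports)) - 0.5

# 100,000 devices, 30% of which actually use some hypothetical feature.
random.seed(1)
reports = [randomized_response(random.random() < 0.3) for _ in range(100_000)]
print(f"estimated usage rate: {estimate_true_rate(reports):.3f}")  # ~0.300
```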

First, we have acknowledged Apple is using information about us to make its products and services better. But we simply have no idea how much information is being collected or analyzed. The rub is, Apple's services are progressing (or, at least, it often feels that way) at a rate much slower than companies that do collect and analyze more customer behavior data, like Google, Facebook, and Amazon. There is no doubt Siri still has advantages in global language support and integration across all of Apple's devices, areas where the competition still has limitations. And while Siri is certainly competitive with Google Assistant and Amazon's Alexa (none of them is perfect yet), you have to admit both are pretty advanced and comparable to Siri in many ways. Google's Assistant has been on the market for less than a year and Amazon's Alexa for roughly two, while Siri has been on the market for more than five. Despite the technical advancements in machine learning and natural language processing during that gap, which benefited Amazon and Google, there is no doubt in my mind their massive data sets on behavior were useful in feeding their backend engines to reach near parity with Siri from a machine intelligence standpoint.

Look at my brief time on Android using Google Now compared with Apple's Proactive and now the Siri apps. Both are supposed to learn about me and make intelligent, contextual recommendations, and they sometimes work but, more often than not, don't. I've been on iOS since 2007, yet a few months on Android yielded better contextual and relevant recommendations, on a more consistent basis, than both Proactive and Siri. This observation leads me to believe competing services are learning and getting better, faster, by analyzing more of my behavior data than Apple does. The only explanation I can think of is Apple's desire to take a hands-off approach to my data.

All of the above points lead me to my final observation. I believe it is essential that Apple is competitive with services like Siri, and many others, against those whose business models depend more on data collection than Apple's does. While I don't believe Google and Facebook are the bad actors Apple portrays them as (and neither do consumers, per evidence from our surveys), the bottom line is their business model, the financial lifeblood of their companies, depends on their ability to sell advertising with the data they collect on customers using their services. While Apple's business model does not depend on collecting customer data to sell advertising, it does depend on making products and services that delight customers. Within this viewpoint, Apple is already a trusted entity with our privacy since its business model does not necessitate mining that personal information. Based on some recent research we did, Apple customers overwhelmingly listed Apple as the top company they trusted with their privacy, over companies like Microsoft, Google, Amazon, Samsung, Facebook, etc.

However, getting useful and good behavioral data is essential for Apple to make better products and services and, more importantly, compete with those services down the road. I'd almost prefer that, instead of Apple's stance being not only to collect as little data as necessary but also to universally anonymize that data, the company would simply say, "Trust us with your data. We will keep it safe and secure and we will deliver you superior products and services because of it." I could also be satisfied with a hybrid approach where, for the most security-conscious customers, Apple keeps the existing privacy protocol as well as its differential privacy techniques, but allows others to opt in to giving it more data so things like Siri, News, Apple Music, etc., benefit from that data and, thus, deliver those customers a much more personalized and useful service. With some of the recent changes in iOS 10.3, I feel they are getting closer to exactly this scenario.

My genuine concern with Apple relying solely on an "above and beyond" approach to consumer privacy is we don't know yet if this approach will work, and the existing evidence invites a great deal of speculation. My concern is they are mortgaging their future competitiveness in things like AI, and better services holistically, with this stance. Thus, I view it as somewhat risky even if it seems like the right thing to do. If their approach does not work and their services truly can't compete, some of their customers may turn to solutions from competitors whose business models open the door for them to be irresponsible with our data. If that happens, the customers lose because Apple — and I include Microsoft in this statement — has the least motivation to be irresponsible with our privacy. Their business models do not depend on directly monetizing that data. Say Google becomes the AI agent of the future and, all of a sudden, falls on hard times and the only way to right the ship is to compromise or alter its privacy stance to keep making money. While it is only a hypothetical, it is still a valid concern if a free service monetized by ads becomes the majority services monopoly in the future.

I truly hope Apple is continuing to take a hard look at how its services compete in the market against comparable ones. Should there need to be some pivots on how data is collected and used to compete, I think the market would be OK with that. They are no doubt doing the right thing for their customers but, if going above and beyond with differential privacy yields non-competitive and, thus, less relevant services, it will all be for nothing; services that aren't used can't protect their customers.

Podcast: Tech Earnings for Alphabet, Microsoft and Intel

In this week's Tech.pinions podcast Ben Bajarin and Bob O'Donnell discuss earnings reports from Alphabet (Google), Microsoft, and Intel and what they say about the future of the tech industry.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Adventures in Machine Intelligence

(Tech.Pinions: Today’s Daily piece, “Adventures in Machine Intelligence” was an Insider post we originally published on December 12th, 2016. We post it today as an example of the daily content for our Insider subscribers. You can subscribe, yearly or monthly, at the page found here)

While I tend to stay away from high-performance computing and data center analysis, I've taken up the effort to better understand the soup-to-nuts solutions being developed for machine learning, everything from technical details around chipset architectures to software and network modeling. Luckily, a large number of our clients have assets in these areas, so engaging in discussions to help me better understand the dynamics has been straightforward. I won't profess to be an expert but my technical background, as well as staying current in semiconductor industry analysis, is proving quite helpful. I'd like to share a few basics I find quite interesting.

A great deal of the work up to this point has been around data collection. Large amounts of data on specific subject matter, or around specific categories of data, are the key to machine learning. Simply put, data is the building block of machine learning and intelligence. Interestingly, and somewhat contrary to some opinions, it is not the person or company with the most data who is best positioned but those with the right data. A key part of this analysis about where we go in machine intelligence, and how that translates to smart computers (AI), needs to be grounded in collecting the right data. So, fundamentally, the starting point is data, and specifically the right data.

Lots of companies have been gathering data. Google has been gathering data from searches, world mapping and more. Microsoft has been gathering enterprise data, and Facebook gathers social data. There are a lot of companies gathering data but many are still in the early stages of making their backend data collection efforts into smart machines. In fact, very little of the technology we use is smart. By smart I mean something that is truly predictive and can anticipate human needs. We have a tremendously long way to go in making our machines truly smart. In a recent conversation with some semiconductor architects of machine learning silicon, I asked them whether we could pick a point in the history of the personal computer and liken it to where we are today in machine learning. Their answer? No later than the early IBM PCs. This was from folks who have been in the silicon industry for a very long time. The context for this discussion was around how much silicon still needs to advance for machine intelligence and AI to truly start to mature. So it is worth noting their comments on the early IBM days come with the knowledge that those first IBM PCs ran Intel's 8088, a chip with roughly 29,000 transistors. Today, we have architectures with more than 10 billion transistors.

After being convinced we still have a tremendous amount of innovation in semiconductors to get where we need to be in machine learning and AI, I started looking into what is happening today. The next step is to understand how to train a network or how to teach a computer to be smart(er). I stated above it all starts with data, good data, and the right data. Some of the most common examples of network training today are around computer vision. We are teaching computers to identify all kinds of things by throwing terabytes of data at them and teaching them a dog is a dog, a cat is a cat, a car is a car, etc. Training a network is not entirely arbitrary. It is calculated and intentional. The reason is that network models have to be built/programmed before they can be trained. Leaning on decades of machine learning work, many programs now exist to train networks in the more common fields that deal with large data sets. Medicine, autonomous vehicles, agriculture, astrophysics, oil and gas, and several others are areas where people have focused on creating these network models. Many hours of hard work and hard science go into building these network models so data can be collected and fed to the machine so it can learn. Companies playing in this field today are picking their battles in areas where big money is to be made with these training models.

What is fascinating is how long it takes to train a network. With a modern-day CPU/GPU and machine learning software, a network can be trained in as little as a few hours, depending on the data set. Teaching a network what a dog is, with roughly two terabytes of data, could take 3-4 hours. However, there are many cases where the data sets are so large it could take several weeks to a month to train a computer on one single thing. This again underscores the point of how far we still have to go in silicon. I'm reminded of early demonstrations of Microsoft Office running on Pentium chipsets where the demo shined because Excel could process a massive spreadsheet in 30 minutes or less. Today, it is nearly instantaneous. Someday, training a network will be nearly instant, as will its ability to query that data and yield insight or a response. Instant and in real time is the holy grail but we are many years away.
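A back-of-envelope sketch, with my own assumed throughput rather than a measured one, shows why training time scales with data volume and why those multi-week runs happen:

```python
def training_hours(dataset_tb, throughput_gb_per_s, epochs=3):
    """Hours to stream a dataset through an accelerator `epochs` times.
    Assumes training is bound by how fast data can be pushed through
    the chip, a simplification that still sets a useful floor."""
    seconds = dataset_tb * 1000 / throughput_gb_per_s * epochs
    return seconds / 3600

print(f"{training_hours(2, 0.5):.1f} hours")    # 2 TB 'dog' data set: ~3.3 hours
print(f"{training_hours(500, 0.5):.0f} hours")  # 500 TB corpus: ~833 hours, ~5 weeks
```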

Knowing how early in this stage we are makes it hard to count any company out at this point. But it does emphasize how key collecting the right data is. Companies are right now setting the stage by getting the right data they need to carve out value in the future with AI. What is fascinating is how deep learning algorithms are helping networks learn faster with less data. Expert consensus affirms that having the largest data sets is not necessarily a guarantee of who wins in the future. Because specific network models have to be built, the emphasis falls on collecting the "right data".

What this means is companies and services with just months, rather than years, of the right kind of specific data can still train a network model. Even companies starting to gather data today have a chance in this machine intelligence future, for themselves and their customers — if the data is good.

With some context of where we need to go, silicon architectures — CPU, GPU, FPGA, custom ASICs, as well as memory — are all key to advancing technology in the data center for more efficient and capable backend systems for machine intelligence. But all are still governed by science and we have a relatively good idea of what is possible and when. That is why we know it will still be many, many years and, hopefully, a few new breakthroughs before we get even close to where we need to be for our intelligent computer future.

How Net Neutrality will Fare under Trump

With news this week of the nomination of Ajit Pai as the next chair of the FCC, much of the attention has focused on his stance on net neutrality and the likelihood the existing rules on net neutrality will be dismantled. However, net neutrality is a complex topic; even the definition of net neutrality is subject to widely differing interpretations. It's worth breaking down exactly what's likely to change, and what isn't, under a Pai FCC.

Defining Net Neutrality

The first challenge here is defining net neutrality. A very general definition would be that it refers to treating all internet traffic equally. It sometimes seems as if some people really do believe net neutrality can only merit the name if it's really that broad. But that would preclude any sort of network prioritization which puts time-sensitive packets above non-time-sensitive packets, and would also preclude any sort of prioritization by user or content at times of congestion on the network. Most reasonable people seem to at least leave some leeway for sensible network management in order to improve the performance of services subject to delays, such as voice and video calling or live video streaming.

Beyond that, the consensus breaks down very quickly. There are some who insist net neutrality has to bar any and all prioritization or differential charging by content or by user on any basis, whether or not it’s transparent, available to all, or paid for. The best example is the programs introduced over the past couple of years by major wireless carriers, under which some or all content in a particular category is carried without counting against the user’s data plan. T-Mobile and AT&T are the most high profile examples. T-Mobile has two programs – BingeOn and Music Freedom – which “zero rate” video and music content respectively. These programs are essentially open to all comers from a content perspective and there’s no charge to participate. AT&T has recently exempted video from its subsidiary DirecTV from its data caps and says this reflects an internal transfer from the DirecTV division to the AT&T Mobility division in an arrangement also open to any other video provider under the company’s Sponsored Data program.

How you define net neutrality will determine how you see each of these programs. Strict advocates reject both T-Mobile’s programs and AT&T’s, while some others find T-Mobile’s program acceptable but not AT&T’s or Verizon’s. The FCC has never taken a final position on either program but did begin looking into AT&T’s towards the end of last year. The net neutrality rules as presently constituted, however, don’t explicitly bar zero rating programs. Whether you consider either or both of these programs in breach of the principles of net neutrality will determine to what extent you think the new FCC regime will dismantle net neutrality, as I’ll show in a moment.

Rules vs. Motivated Behavior

It's also worth noting that net neutrality rules were contemplated for many years but only implemented successfully recently. In the time before the rules were finally introduced, there were very few violations of their principles regardless – literally fewer than a handful of prominent cases during that time. The reason is, broadband providers are highly motivated to stay away from controversial breaches of net neutrality principles because they know such actions would be extremely unpopular with consumers and would invite additional regulatory scrutiny. If we're talking about actively blocking or degrading any form of content simply because it competes with the carrier's own content (rather than because it is illegal or against the carrier's terms of service), that remains very unlikely because there would be an outcry and a backlash and, ultimately, severe financial consequences in terms of lost business if the situation continued.

Net neutrality rules as presently constituted largely lock this behavior in place but the carriers have always made clear they object to the rules largely because they represent additional regulatory encroachments on their freedom to operate rather than because they contemplate any particular action that would contravene them. However, it is clear carriers have other, softer, forms of prioritization and differential treatment in mind, as we’ve seen from the zero rating plans I mentioned earlier. Regulation might or might not bar those zero rating programs but it’s relatively unlikely carriers would ever stoop to systematically blocking or degrading traffic from competing services even in the absence of regulation. Carriers have mostly been willing only to engage in behavior seen as either neutral or beneficial by users. They have much less concern about how they’re perceived by content providers. As such, they’ll zero rate some or all video content because users respond positively to that, regardless of whether providers of other content services like the idea or not. Net neutrality regulation, therefore, mostly helps protect content providers rather than end users, at least in the short term.

Dismantling Net Neutrality

With all that as context, let’s look at what might actually happen in the real world if net neutrality regulations as currently constituted were eliminated. Here are my predictions for what we’d actually see as a result:

  • Carriers wouldn’t suddenly (or even eventually) start engaging in discriminatory prioritization or blocking of traffic based on the source – as I’ve said, users would respond badly and even an FCC largely opposed to additional regulation would have to step in and act if this became widespread
  • Carriers likely would continue to pursue zero rating programs as a way to both differentiate from competitors and to make their own content services more attractive in some cases (as AT&T and Verizon have already done). With the growth of unlimited data services among the major wireless carriers, this actually wouldn’t have a massive effect on the market
  • Some broadband providers are actually bound by terms of merger approvals to abide by fairly strict net neutrality principles regardless of general regulations, with Comcast the prime example through 2018. As such, these companies would have to continue to abide by the rules whether or not they’re overturned, at least for the duration of the commitments they’ve made. AT&T might well be subject to some similar rules as a condition of the approval of its acquisition of Time Warner, putting the two largest broadband providers and the second largest wireless provider under net neutrality restraints

In short, we’re unlikely to see an apocalyptic end to the internet as we know it, even if the FCC begins taking apart the present net neutrality regulations. We will likely see more zero rating and similar programs which don’t prioritize or degrade traffic but merely apply different data pricing to it. If you object to that kind of thing as a breach of net neutrality, you’re likely to be upset but most consumers will be either blissfully unaware or happy about it. If you’re a content provider, you may feel hard done by here too but, again, under the increasingly prevalent unlimited wireless data plans, this will become less of a disadvantage over time. I, for one, am less pessimistic about the outcomes here than many of those decrying the changes on the horizon.

Will Wireless ever Replace Broadband?

It’s 2020. 5G wireless is being rolled out, with speeds exceeding 10 Gbps, latency below 1 millisecond, and the ability to accommodate vastly more traffic and connections. Will the average household, which doles out some $400 monthly for mobile, pay TV, and broadband today, be able to go wireless only?

The thought is enticing. This could be Cut the Cord 3.0. Version 1.0 cut the landline phone in favor of wireless only, which nearly 50% of households have already done. Version 2.0 has been about cutting the cable (pay TV) cord and going internet only for TV content. It’s happening slowly but steadily. With the 4G LTE and ultimately 5G roadmap promising speeds and latency as good as or exceeding today’s fixed broadband, might customers ultimately look to cut the wire that delivers fixed broadband?

This possibility might get a serious test drive in 2017. Verizon Wireless, which has developed some of its own "pre-5G" specifications, plans to test fixed wireless access in as many as ten cities this year. This would involve getting fiber or other 'broadband' infrastructure to within a couple of hundred meters of a household and then using wireless for the proverbial "last mile". This approach has a lot of appeal to a company like Verizon. First, it allows them to bring broadband service to households without building fiber to every dwelling, the cost of which has slowed the rollout of FTTH services from FiOS to Google Fiber. Second, it allows Verizon to potentially offer a competitive broadband service outside its 'landline' footprint. This is compelling because, while the mobile market is very competitive with four national providers, broadband is a near monopoly in many U.S. markets and average speeds are decidedly middle of the pack compared to other developed economies.

In addition to Verizon, a company called Starry, founded by the folks who did Aereo, has raised $60 million to build out a competitive broadband service using wireless. They’re currently testing in Boston using 28 GHz spectrum, which is one of four bands the FCC has allocated for 5G services. As another example, Redzone Wireless, a Wireless Internet Service Provider (WISP), is offering a “5GX” service to some areas in Maine, using a combination of LTE Advanced capabilities and unlicensed (Wi-Fi) spectrum, promising households average download speeds of 50 Mbps, for $80 per month.

This certainly bears watching. What are the barriers to Cutting the Cord 3.0? The most significant relates to the intertwined challenges of wireless capacity and economics. A typical wireless user consumes about 4 GB per month at an average retail price of roughly $10 per GB, all things considered. Let’s call that 10 GB for a 2.5 person ‘household’. Now, a typical Netflix-watching broadband household consumes some 250 GB per month. That’s a big delta between fixed and mobile consumption.

Mobile operators are acquiring more spectrum and investing in 5G in order to accommodate ~8-10x more traffic, circa 2020. But we should also consider there will be growth in fixed broadband usage too, with 4K, virtual reality, and so on coming down the pike. It’s not inconceivable that a broadband household could approach 1 terabyte (TB) of average monthly use within the next 3-5 years. That would be a tough number for wireless to digest, even with the technology being developed for 5G. Plus, that’s an awful lot of capacity the mobile operators would have to somehow build (or lease) within a few hundred meters of a home or building.

Wireless economics is a related challenge. It costs a wireless operator like Verizon $1-2 to deliver a GB of traffic to a consumer. That’s why everybody’s ‘unlimited’ plan comes with an asterisk, usually kicking in when usage exceeds 25 GB or so. One would have to take a pretty big whack at the costs of delivering those wireless GBs in order to get into the broadband neighborhood. Even if we chop the low-end of today’s $ per GB delivered by half, the math remains challenging.
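Running the article's own figures, the $1-2 per GB delivery cost against the 250 GB household, makes the gap plain; the broadband price below is my assumption, not operator data.

```python
# Rough economics of serving a fixed-broadband household over wireless.
household_gb    = 250    # monthly usage of a Netflix-watching household
cost_per_gb     = 1.50   # midpoint of the quoted $1-2 wireless delivery cost
broadband_price = 70     # assumed typical monthly fixed-broadband bill

print(f"wireless delivery cost: ${household_gb * cost_per_gb:.0f}/month")  # $375
print(f"typical broadband revenue: ${broadband_price}/month")

# Even halving the low end of the quoted range ($0.50/GB) still gives
# $125/month in delivery cost, well above today's broadband bill.
print(f"optimistic case: ${household_gb * 0.50:.0f}/month")
```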

A final issue many bring up is how do you get the equipment, which might look like a small dish or box, into a building or household? This is something folks have heretofore been hesitant to muck around with. I see this as the least of the barriers. After all, over the past few years, people have become more comfortable installing Nest thermostats, fancier Wi-Fi routers, mesh networks, and femtocells in their home. The equipment piece will sort of look like that by the time these services are ready for prime time.

I think there’s potential here but it might initially be focused on particular market segments. The lowest hanging fruit would be in rural areas, where mobile/internet coverage is currently lacking or sub-par. If we can get enough capacity close to a building, fixed wireless access might be a great solution. Another segment might be the burgeoning multiple dwelling unit (MDU) market, which has been a challenge for broadband anyway. This seems to be one of the target markets for companies like Starry. A third and potentially vital market segment might be households that are budget-challenged. $400 per month for today’s cocktail of mobile, pay TV, and broadband is a tough financial nugget for a lot of households. There’s definitely room for the “Metro PCS” or “Cricket Wireless” of broadband, offering a compelling plan, combining fixed and mobile, with some limits on broadband speeds and consumption, particularly during peak times (to address the ‘Netflix’ problem). Lord knows we could use some competition and pricing options in the broadband world.

One final consideration is how the industry structure might evolve to deliver on this vision. Comcast is aiming to get into wireless, both through an MVNO and, apparently, through several billion dollars spent in the recent 600 MHz spectrum auction. The wireless operators will be reliant on cable and other providers for small cell sites, capacity, and backhaul in order to build 5G. Then there's DISH, which owns a treasure trove of spectrum that will have to be put to work at some point during our lifetime. And all the big internet players, from Netflix to Amazon to Facebook and Google, are playing in the 5G sandbox in some capacity, thinking that, at some point and to some extent, they will need to be at least partial masters of their own network domain. And there's been a very active M&A market in the fiber biz of late.

So while we are not yet forecasting a significant shift toward Cutting the Cord 3.0, this is the year discussions on the issue will get a serious start.

Voice Drives New Software Paradigm

A great deal has been written recently on the growing importance of voice-driven computing devices, such as Amazon's Echo, Google Home and others like it. At the same time, there's been a long-held belief by many in the industry that software innovations will be the key drivers in moving the tech industry forward ("software is eating the world," as VC Marc Andreessen famously put it over 5 years ago).

The combination of these two—software for voice-based computing—would, therefore, seem to be at the very apex of tech industry developments. Indeed, there are many companies now doing cutting-edge work to create new types of software for these very different kinds of computing devices.

The problem is, expectations for this kind of software seem to be quickly surpassing reality. Just this week, in fact, there were several intriguing stories related to a new study which found that usage and retention rates were very low for add-on Alexa "skills" and for similar voice-based apps for the Google Assistant platform running inside Google Home.

Essentially, the takeaway from the study was that outside of the core functionality of what was included in the device, very few new add-on apps showed much potential. The implication, of course, is that maybe voice-based computing isn’t such a great opportunity after all.

While it’s easy to see how people could come to that conclusion, I believe it’s based on an incorrect way of looking at the results and thinking about the potential for these devices. The real problem is that people are trying to apply the business model and perspective of writing apps for mobile phones to these new kinds of devices. In this new world of voice-driven computing, that model will not work.

Of course, it’s common for people to apply old rules to new situations; that’s the easy way to do it. Remember, there was a time in the early days of smartphones when people didn’t really grasp the idea of mobile apps, because they were used to the large, monolithic applications that were found on PCs. Running big applications on tiny screens with what, at the time, were very underpowered mobile CPUs, didn’t make much sense.

In a conceptually similar way, we need to realize that smart speakers and other voice-driven computing devices are not just smartphones without a screen—they are very different animals with very different types of software requirements. Not all of these requirements are entirely clear yet—that’s the fun of trying to figure out what a new type of computing paradigm brings with it—but it shouldn’t be surprising to anyone that people aren’t going to proactively seek out software add-ons that don’t offer incredibly obvious value.

Plus, without the benefit of a screen, people can’t remember too wide a range of keywords to “trigger” these applications. Common sense suggests that the total possible number of “skills” that can be added to a device is going to be extremely limited. Finally, and probably most importantly, the whole idea of adding applications to a voice-based personal assistant is a difficult thing for many people to grasp. After all, the whole concept of an intelligent assistant is that you should be able to converse with it and it should understand what you request. The concept of “filling in holes” in its understanding (or even personality!) is going to be a tough one to overcome. People want a voice-based interaction to be natural and to work. Period. The company that can best succeed on that front will have a clear advantage.

Despite these concerns, that doesn't mean the opportunity for voice-based computing devices will be small, but it probably does mean there won't be a very large "skills" economy. Most of the capability is going to have to be delivered by the core device provider and most of the opportunity for revenue-generating services will likely come from the same company. In other words, owning the platform is going to be much more important for these devices than it was for smartphones, and companies need to think (and plan) accordingly.

That doesn't mean there isn't any opportunity for add-ons, however. Key services like music streaming, on-demand economy requests, and voice-based usage or configuration of key smart home hardware add-ons, for example, all seem like clearly valuable and viable capabilities customers will be willing to add to their devices. In each of those cases, it's also important to realize the software isn't likely to represent a revenue opportunity of its own; it's simply a means of accessing an existing service or piece of hardware.

New types of computing models take years to really enter the mainstream, and we’re arguably still in the early innings when it comes to voice-driven interfaces. But, it’s important to realize that existing business models and existing means for understanding the role that technologies play don’t always transfer to new environments, and new rules for voice-based computing still need to be developed.

Samsung’s Galaxy Note7 Investigation becomes the Cornerstone for Improved QA

Samsung just hosted a press conference in Korea to share the findings of an investigation into what caused several Galaxy Note7 smartphones to catch fire. You can find all the details of the findings here but, in summary, there were two distinct battery issues, from two different manufacturers, that led the positive and negative electrodes to touch.

Getting to the root cause of the issue was paramount but what we learned from this process has ramifications, not only for Samsung, but for the industry, because Lithium-ion batteries are not going away anytime soon. The investigation process Samsung went through over these past few months would have been quite difficult for a manufacturer without Samsung's scale, capital, R&D facilities and workforce. Putting 700 researchers to work evaluating 200,000 smartphones and 30,000 batteries in a newly built testing facility is dedication.

Of course, a lot was on the line here for the world’s leading smartphone maker. Trust of both users and employees was at risk and winning that trust back was paramount.

Winning Back Trust that Samsung will Continue to Innovate

In early October, we at Creative Strategies conducted a study to assess the US smartphone market. Among the areas we wanted to evaluate was the impact, if any, the Galaxy Note7 incident had on Samsung's smartphone brand. We were bullish then, and we are bullish now, that Samsung will recover from the Note7 recall. Only 28% of US Android owners said the Note7 caused them to have a more negative opinion of the Samsung brand. Numbers were even lower among Samsung owners.

Consumers are generally quite forgiving and have a relatively short memory. The car industry has seen several recalls over the years, yet consumers continue to buy. The mobile industry has also seen recalls but nothing to the extent of the Note7. Of course, what made the Note7 such a test case is how passionate its users are and how unwilling they were to give up their units, pushing Samsung and carriers the extra mile to get the phones back.

Samsung was quick to take responsibility and step into action. Communication is where the smartphone leader could have done with more clarity. Whether due to cultural differences in communication styles or to the complexity of coordinating the Consumer Product Safety Commission, carriers and retailers, Samsung's messaging was not as direct as it could have been. Digital messages, however, were pretty clear, from warnings displayed every time the phone was charged, to limiting the charging capacity of the phone, to ultimately bricking the phone.

Samsung, like any vendor in any sector that has ever had a recall, cannot promise its products will never again suffer from a malfunction. What can be done, however, is to show the necessary steps have been taken to limit the chance of that happening again.

What is even more important when we are talking about a market leader, especially one that has gained that position by adopting new technologies early, is to show their innovation streak will not be limited by fear. Samsung must show consumers they have set in place checks and balances that will allow them to continue to bring new technology, new designs, and new features into the market in a safe and effective way. The new 8-point battery safety check Samsung will implement going forward is an important step in recognizing that innovation should also come to QA, testing, safety and manufacturing processes.

A Market Leader Acting like a Leader

What also made the Note7 recall unusual is that the cause of the issue involved several parties: Samsung and two battery suppliers. While we do not know the names of the suppliers, it is safe to assume they do not supply Samsung exclusively. Nor is the use of Lithium-ion batteries limited to Samsung or these two suppliers.

Samsung's President of Mobile Communications Business, Mr. DJ Koh, stated during the press conference that, during the investigation, the researchers filed several patents in battery technology, patents that will be shared with the industry. We would need more details to understand the significance of these patents but this is the kind of action we would expect from a market leader, especially one with a pretty substantial battery business.

Despite the many stories that broke on Friday about Samsung putting the blame on its suppliers, I did not hear that in the press conference. Although I am confident Samsung will require changes in the QA processes implemented by its suppliers, the focus of the messaging was centered on the changes Samsung will implement going forward, including the appointment of a battery advisory group. As much as there is skepticism around how two different suppliers could have two independent battery issues, I do not believe Samsung cut corners in bringing the Note7 to market. As the industry pushes more designs and features, and as users push the capabilities of these devices, making sure all of that happens in a safe manner is paramount.

Innovation needs to involve all aspects of the production process and Samsung is making this point very clear. While adding steps to the process adds costs and time, I expect Samsung to be able to integrate the new steps without adding considerable development time or costs on to new products.

What is Next?

I had initially thought Samsung should move on from the Note franchise and deliver a different product with similar capabilities. After months of hearing countless airport announcements referring to the banned phone as the "Galaxy Note7", "a Samsung phone", "the Galaxy phone" and anything in between, I no longer think the Note8 would suffer as much as I initially thought. Better put, anything that comes after the Note7 will suffer equally whether it is related to it or not.

Samsung apologized, provided answers and solutions. What remains to be done is to make sure users who returned their Note7 receive the phone they want and a little extra love from Samsung. If there is indeed a Note8 on the market in 2017, there is a lot Samsung can do to butter up those users, from upgrade incentives, to limited editions, to early access.

While I can already read the headlines referring to the next Galaxy phone as “the one that hopefully will not blow up” or “not as hot as the Note7”, I am hoping we will move on — like most consumers will.

Moving Toward Our Augmented Future

This week, I attended the first-ever AR in Action conference at the MIT Media Lab in Cambridge, Massachusetts, where an extensive list of current (and likely future) tech luminaries talked about the past, present, and future of Augmented Reality. There are plenty of skeptics who doubt the viability of AR, and Hollywood-produced visions of the technology set an awfully high bar. I've long felt AR will become a crucial technology; after spending time with this group, I'm even more convinced of this. It is not a matter of if, but when.

John Werner, known for the TEDxBeaconStreet events in Boston, orchestrated AR in Action so that the talks, panels, and demonstrations were all short and highly targeted. As a result, in the span of two days, I saw more than a dozen current and emerging use cases for AR, from both the academic and corporate worlds. There was much discussion about the potential ramifications of AR across numerous industries and there were many technology demonstrations. Finally, I had the opportunity to test out the segment’s hottest new hardware: the Meta 2 (it did not disappoint).

Frankly, the volume of information I absorbed will take weeks to process, but a few key takeaways follow.

Plenty of Companies are Already Testing AR
Last year I wrote about an IDC survey that showed US IT decision makers were already looking at AR for their business. One of the key platforms for commercial AR is Vuforia, which PTC acquired from Qualcomm in late 2015. During the conference, PTC's CEO James Heppelmann talked about the intersection of the Internet of Things and AR and noted PTC now has thousands of customers in active pilots of AR technology, primarily on smartphones and tablets. PTC also says more than 250,000 developers are using Vuforia. During day two, PTC's Mike Campbell showed how to create a working piece of AR software—tied to a real-world object (in this case, a coffee maker)—in the span of about 15 minutes.

A Few Organizations are Moving Beyond Pilots
Patrick Ryan, the engineering manager at Newport News Shipbuilding, discussed the rollout of AR at his 130-year-old company. At present, the firm has completed over 60 industrial AR projects and is currently rolling out permanent shipbuilding process changes that use AR. On stage, Ryan showed a video and talked about using AR to facilitate such seemingly mundane tasks as painting a ship. Mind you, we’re talking about painting aircraft carriers. NNS workers are using AR on connected tablets to visualize parts of the ship before they’re completely built, eliminating errors and decreasing waste during the process.

During the discussion on AR in museums, the panel—Giovanni Landi, Rus Gant, and Toshi Hoo—noted museums have been using augmented reality, in the form of audio guided tours, for decades. Many museums have begun experimenting with head-mounted AR to bring staid museum exhibits to life for visitors. Panel members noted that, as the prevalence of mobile AR increases, with more visitors walking in the door with capable devices, the opportunities to utilize the technology will also increase. One of the key ways they expect to use AR in the future is through the digitization of rare objects, which will allow museums to “show” a far larger number of items than can be physically displayed to the public.

AR is More Than Just Visual
Numerous speakers talked about the fact there are more ways to augment reality than through visual systems, including audio and touch. Intel’s Christopher Croteau, GM of Head Worn Products, talked about his company’s product collaboration with Oakley. The Radar Pace is a set of glasses but there is no screen—all interactions occur through voice commands and audio feedback. The glasses, introduced in October, provide real-time feedback for runners and cyclists without visually distracting them. In addition to a spirited talk on the potential of AR technologies, Croteau also presented Intel’s forecast of the market stretching all the way to 2031. Like most forecasters, Intel sees the near-term opportunity for AR in the enterprise. But, by 2027, it predicts consumer shipments will move ahead of commercial and, four years later, the former will outship the latter by a 4-to-1 margin.

The Right Interface is Crucial
There was a great deal of discussion about the challenges (and folly) of bringing legacy interaction models to AR but not a lot of consensus on what the right approach should be. One thing is clear: hand tracking and voice technology are both likely to play crucial roles but both have a long way to go before they’re ready for mainstream users. The panel on haptics was also enlightening, with executives from firms such as Ultrahaptics and Tactai discussing the critical role they expect touch to play as AR evolves.

More to Come, Exciting Times
The downside to an event like AR in Action is a person can only attend one track at a time (there were three running concurrently on both days). The upside is event organizers recorded everything, which means, hopefully in the near future, I will get a chance to watch all the tracks I couldn’t attend in person. Just as important, Werner made it clear this was just the first of what he expects to be many meetups of this kind, which I think is a good sign for this nascent but incredibly important market.

Two Possible Futures for Amazon’s Alexa

Amazon’s Alexa voice assistant was clearly the star of CES this year. No single consumer electronics device dominated coverage but lots of individual devices incorporated Alexa as their voice assistant of choice. The announcements ranged from Echo clones to home robots to cars and smartphones. It was clear Amazon had, at least for now, captured the market for voice platforms. Only one or two integrations of the Google Assistant were announced and those were future rather than present integrations.

It would be easy, off the back of all this, to say Amazon has won the voice assistant battle once and for all, but I actually see two possible futures for Alexa, with very different outcomes for Amazon and its many partners.

Future 1: Amazon continues to dominate

The first possible future for Alexa is one where the current trends mostly continue and even accelerate. Amazon’s own Alexa-based products continue to sell well, with Dot probably taking a greater share of sales going forward relative to Echo (or Tap), selling into the tens of millions of installed units in the next couple of years. On top of that, the adoption by third parties that was so evident at CES continues, with even more devices offering integration. Importantly, Alexa starts to make an appearance in Android smartphones, making it as pervasive and ubiquitous as existing smartphone-based assistants, possibly even making an appearance in another round of Amazon smartphones.

What we end up with in this scenario is a massive ecosystem of devices which all offer users access to Alexa and its functionality. These devices perform their functions well, recognizing voice commands effectively, responding appropriately, and adding value to users’ lives. Because they’re all part of the same ecosystem, they work the same way — commands issued through one are reflected on the others. Amazon benefits from owning a massive new user interface and platform which can be used not just to push its e-commerce sales but to take an increasingly large share of media and content consumption across video, music, audiobooks, and more.

This scenario also assumes major competitors either don’t launch competing products or those products fail to take off. Google has, of course, already launched its Home device but, thus far, sales are far lower than Amazon’s, handicapped by a lack of awareness and the lack of a major e-commerce channel. The Google Assistant, meanwhile, should be the default option for Android OEMs in all their devices, but Google has held it back to promote its own hardware, perhaps fatally undermining it as a third-party voice platform. If that doesn’t change, if Microsoft’s Windows-based Cortana strategy falls short, and if Apple’s reticence to participate in this market continues, Amazon dominates with its devices and the third-party products using its ecosystem.

Future 2: Cracks start to appear in the Alexa ecosystem

The secrets of Echo’s success

I want, though, to paint an alternative future for Alexa, one which is less rosy and more complex. Amazon’s genius in launching the Echo and Alexa was to pick a blank slate rather than an existing category for its experiments with voice. Instead of competing with another smartphone-based voice assistant, Amazon chose to compete in the home, with its relative quiet and better internet connectivity, and a device that was optimized for specific use cases: fantastic voice recognition and great audio output, even from across the room. That had two major advantages: first, it wasn’t going head to head with powerful entrenched competitors and second, it could deliver far better performance around voice recognition than smartphone-based systems.

The Echo performs fantastically well at what it does. Its voice recognition is indeed very good, inviting highly favorable comparisons to Siri and the like. It’s this success in providing great voice experiences that has propelled sales of Amazon’s own devices and prompted other companies to build their own as well. The assumption on all sides is that it’s Alexa that powers these phenomenal experiences and that the Alexa Voice Service will power similar experiences on other makers’ devices.

Amazon’s limited control over Alexa devices

However, one look at Amazon’s guidelines for those wishing to incorporate the Alexa Voice Service into their devices should prompt at least some skepticism. The Echo and Echo Dot famously have a 7-mic array built into the top of the device, with beamforming, enhanced noise cancellation, and more helping to ensure the device does a phenomenal job of picking up your voice from up to 20 feet away. But look at the minimum specs for Alexa-powered devices and you quickly realize many of these devices won’t match up on hardware – the minimum standard is just one microphone, and additional technologies like noise reduction, acoustic echo cancellation, and beamforming are entirely optional.

Also optional is “wake word” support – in other words, the always-listening function that waits to hear “Alexa” (or another word of the user’s choice) and then springs into life. The Amazon Tap doesn’t offer this feature and was hammered in reviews for it, because the “across the room” use case is a key part of the Echo’s appeal. Even when a wake word is supported, Amazon only requires a minimum of one microphone for near-field recognition and just two for far-field (20-foot) recognition.
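To make the wake-word mechanics concrete, here is a minimal sketch of the always-listening loop such a device runs. Every function in it is a hypothetical stand-in, not Amazon’s actual API: a vendor would wire in its own keyword-spotting engine and an Alexa Voice Service client.

```python
# Sketch of a "wake word" gating loop. All three functions are
# hypothetical stand-ins, not Amazon's APIs; the point is the pattern:
# a cheap, local, always-on check gates the expensive cloud request.
import time

def read_microphone(seconds: float) -> bytes:
    """Hypothetical capture from the device's mic (or mic array)."""
    return b""  # a real device returns raw audio samples here

def detect_wake_word(audio_chunk: bytes) -> bool:
    """Hypothetical on-device keyword spotter (e.g., listening for 'Alexa').
    Must be cheap enough to run constantly, entirely locally."""
    return False  # a real engine scores the audio for the keyword

def send_to_voice_service(utterance: bytes) -> bytes:
    """Hypothetical cloud round trip; returns audio to play back."""
    return b""

while True:
    chunk = read_microphone(0.25)       # small rolling buffer, always on
    if detect_wake_word(chunk):         # local check; nothing leaves the device
        request = read_microphone(5.0)  # only now capture the actual request
        reply = send_to_voice_service(request)
        # a real device would play `reply` through its speaker
    time.sleep(0.01)                    # placeholder pacing for the sketch
```

The reason mic counts and beamforming matter so much is that this local detector is only as good as the audio it hears: a one-mic device running the same loop will miss wake words an Echo catches from across the room.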

Where this second future scenario diverges from the first is that the sheer range of Alexa-enabled hardware starts to put many devices on the market that don’t have nearly the appeal of Amazon’s own. Lenovo’s Echo clones appear to be using an 8-mic array and may very well perform at the same level or better but the Huawei Mate 9 smartphone, which is due to incorporate Alexa later this year, has just four microphones and obviously wasn’t built with optimal voice recognition in mind. In a rush to get products to market, we’ll see many vendors putting out devices with the bare minimum specs and prominent Alexa-related branding.

All it will take at that point is a handful of terrible reviews of Alexa-powered speakers and other devices for the Alexa brand to become tarnished. Amazon’s admirable openness with the Alexa tools may then come to be seen as a huge mistake because it has set so few limits on what can be done with the service and its brand. Even if third parties are committed to providing the best possible experience, voice recognition on a smartphone or other smaller device is likely never going to match the Echo’s quality, which means the true Alexa experience will likely remain elusive outside the home. Any perceived quality advantage will, therefore, fade as well, making Alexa a lot less appealing.

More compelling competitors

Meanwhile, competitors will move past their slow start in responding to the Echo and Alexa and will begin producing more compelling alternatives. I see no reason why competitors shouldn’t be able to build devices which perform at least as well, in terms of voice recognition, as Echo given the same parameters (home use, large devices, mic arrays designed for voice recognition). Indeed, Google’s Home has already demonstrated there’s no special magic there. In addition, players like Google and Apple have one huge advantage – they already own massive installed bases of hundreds of millions of devices running their operating systems and integrated voice assistants.

Google’s early misstep in limiting the Google Assistant to its own devices will be overcome in the next few months as it makes it available to Android OEMs more broadly and, at that point, its Home device will become a lot more compelling. Apple, too, has the potential to do really interesting things in the home speaker space should it choose to do so, given the increasing scope and availability of Siri and its AirPlay audio and video casting technology. Again, the appeal of using the same assistant everywhere, tightly integrated into devices, will be a big advantage over Amazon’s looser Alexa ecosystem.

Which future plays out?

On balance, I’m inclined to think the future will look rather more like the second scenario I’ve painted than the first. That is to say, I think Amazon’s advantages in the field of voice assistants are mostly temporary and, to some extent, illusory. Competitors will catch up fast in the home and exceed its capabilities outside it. That doesn’t mean Amazon can’t build a decent business with a more limited scope of opportunity around its first party devices and a handful of really compelling third party devices in an ecosystem but I suspect its future will be a lot less bright than its present in this space.

What “Hidden Figures” Can Teach Us about AI

This weekend, I finally watched Hidden Figures. I took my 9-year-old daughter with me to witness how instrumental women of color were to the success of several NASA missions — something that historically has been associated with white male achievement. If you have not seen it yet, I highly recommend it. The acting is superb and the story offers so much education, both on race relations and on women in the workplace. What I want to focus on, however, is something the director and the cast possibly never imagined would matter. I do so not because it is the most important aspect of the film but simply because it is very relevant to the tech transition we are experiencing right now.

All the talk surrounding artificial intelligence is as much about the technology itself as it is about the impact its adoption will have on different aspects of our lives: business models in the automotive industry, the insurance business, public transportation, and search and advertising, as well as more personal consequences for human-to-human interaction, sources of knowledge, and education. Change will not come overnight but we had better be prepared because it will come.

New Tech Requires New Skills

Change came in 1962 for the segregated West Area Computer Division of Langley Research Center in Virginia, where the three women who are the main protagonists of the story worked. Mathematician Katherine Goble and de facto supervisor Dorothy Vaughan are both directly affected by new tech rolling into the facility in the form of the IBM 7090. If you are not familiar with the IBM 7090 (I was not before this weekend), it was the third member of the IBM 700/7000 series of computers, designed for large-scale scientific and technological applications. In layman’s terms, the 7090 could perform in the blink of an eye all the calculations that took the computing division hours. Dorothy understood the threat and, armed with her wit and a book on programming languages, helped program the IBM 7090, taught her team to do the same, shifted their skills, and saved their jobs.

I realize part of this story might be for the benefit of the screenplay and the world is much more complicated. However, I do think that what is at the core is very relevant — the creation of new skill sets.

Although AI has the potential to affect not only manual jobs that can be automated but also, theoretically, jobs that require learning and decision making, the immediate threat is certainly to the former.

We focus a lot, and rightly so, on the job losses AI will cause but we have not yet started to focus on teaching the new skills that could limit those losses. As I said, AI will not magically appear overnight but we would be fools to think we have plenty of time to create the skills our “augmented” world will require: everything from new programming languages to new branches of law and insurance, QA testing, and more. Empowering people with new skills will be key not only to having a job but also to keeping incomes at pace with the higher costs this new world will entail. Providing a framework for education is a political responsibility as well as a corporate one.

Who Will We Trust?

The IBM 7090 replaces Katherine when it comes to checking calculations but, just as Friendship 7 is ready to launch, some discrepancies arise in the electronic calculations for the capsule’s recovery coordinates. Astronaut John Glenn asks the director of the Space Task Group to have Katherine recheck the numbers. When Katherine confirms the coordinates, Glenn thanks the director saying: “You know, you cannot trust something you cannot look in the eyes.”

I don’t know if Glenn actually said that or if it is a screenplay liberty but, when I heard it, I immediately thought of AI. Who will consumers trust? Many think AI is not going to be any different than any prior technology but I believe such thinking undermines where AI could actually take us. Autonomous cars are the scenario we most often refer to. We might trust the car to park itself or to alert us if a car is in our blind spot. We might even try a semi-autonomous setting on an empty motorway. But are we ready to trust the car and take our eyes off the road and our hands off the wheel? How will brands earn our trust? Will it be through the number of accidents they are involved in? The assurance that, in case of an accident, their computers are programmed to save whoever is in the car?

What if we changed scenarios and talked about a medical diagnosis? Today, we tend to pick our doctors and specialists based on our insurance’s recommendation, a friend’s recommendation, or even the comments on Yelp. Bedside manner, courteous receptionists, and short wait times all play a role. But, for anything more serious, what it all boils down to is a track record of correct diagnoses and saved lives. Will we trust a machine alone? Or will we still want a doctor, who we can look in the eyes, coupled with the machine? A recent White House report mentioned by Fortune talks about the idea of linking human and machine. While it does so as part of a discussion of job losses, I think the formula also applies to our human need to build trust with another human being.

The same issue of trust will also apply to scenarios where not our lives but our privacy and security could potentially be in danger. Who do we trust with our digital assistant, with our home automation? When life is not at risk, at least not directly, I feel consumers will show more flexibility, especially when the full implications are not grasped and convenience, and possibly price, matter most.

In both cases, though, I strongly believe AI will drive consumers to consider more than technology alone and look for traits in brands that have been more traditionally associated with humans: honesty, empathy, loyalty, and service.