The Case for a Siri Speaker

Rumors have been circulating that Apple will join Amazon and Google and make its own smart speaker to compete with the Echo and Home. The commentary surrounding this rumor reveals many opinions on the matter, both for and against. I even sense a debate inside Apple on whether a smart speaker is a fad or has staying power. I lean in the direction of Apple entering this market and competing with Google and Amazon, and would like to make the case that this product should exist.

Whole Room Audio
The sales of Bluetooth speakers over the past few years did not get much attention even though they were a growing trend. These small and affordable units addressed a pain point for many consumers: they did not have speakers in many of the places where they wanted to listen to music. Contrary to popular opinion as these products were gaining popularity, most of them rarely left the house and were simply used in rooms where a sound system did not exist (which is most rooms in the average consumer home).

The home environment is very different from the public one. Those who express pessimism about smart speakers often misunderstand the dynamics of the average consumer home. In common rooms like the living room, kitchen, patio, and family room, access to music is either very limited or non-existent. Bluetooth speakers filled this void and validated consumers' desire to have access to music in more rooms of the house.

From the value proposition of whole room audio alone, this would be a smart play for Apple, and adding the smarts of Siri opens up a rich ecosystem as well. Apple Music is an example of something that would benefit significantly from this hardware. All the data we have shows that, for Apple, hardware drives services, and hardware built to uniquely take advantage of those services will drive them even further. It is not a stretch to say that, if Apple sold a smart speaker, subscriptions to Apple Music would increase significantly thanks to Apple's ability to tightly integrate hardware, software, and services.

Siri Is Always with You, but Can It Always Hear You?
One of the arguments against a Siri speaker is that you always have your iPhone with you, making the iPhone the proper place to access Siri. The flaw in this argument is that, while your iPhone may always be with you, or not far from you, it cannot always hear you. When my iPhone is in my pocket, accessing Siri doesn't work. When my iPhone is in the living room and I'm in the kitchen cooking, Siri can't hear me. The counter-argument posits that Apple Watch or AirPods fill this hole since Apple Watch is always on my wrist or AirPods are in my ears. The reality, however, is that not every iPhone owner will own one of those products in the foreseeable future. And even if this argument is correct, the question remains: where does my music play?

This is where the home dynamic challenges Apple's traditional and very individualized view of technology. The home is a shared, common environment, so to say everyone should just listen to their iPhone with headphones or AirPods on while walking around the house is a distorted view of what goes on in the home.

Here again is why the music experience and the value of whole room audio alone make a strong case for a Siri speaker to exist. But the challenge of putting Siri into something that can always hear you remains. A smart speaker can be purpose-built to be a better listening device than your smartphone, watch, or even earphones can be. This is one reason the Amazon Echo is perceived as having better natural language processing than Siri. In a quiet, close-range environment, Siri understands me as well as the Echo. However, the Echo hears me better in the normal dynamics of the home, thanks to how its microphones are built and tuned.

The Battle for the Smarthome
What smart speakers are showing us is the growing battle for the smart home platform. Voice control has hit its stride as the most convenient way to interact with your smart home. I'd also add that voice is on the cusp of becoming the mechanism that eliminates the remote from our TV experience.

The battle for the smart home will be fought over the number of endpoints in your home you can interact with in some way. Amazon wants to get an Echo in every room and so does Google. Using the assistant on your phone makes sense in many contexts. In the home, however, having ways to interact with your smart assistant beyond just your smartphone, smartwatch, or earphones only increases the potential chances to engage with a smart assistant service.

Companies battling for smart assistant domination should not limit the potential chances to engage; their goal should be to extend their assistants far and wide in order to make sure the consumer always has a convenient way to engage. If they don't, they risk losing key experiences to their competition.

Google’s Machine Learning Backbone

Last year at Google I/O, the term AI was thrown around frequently. This year, Google didn't use AI as much during the opening keynote but did use the term machine learning much more. It was a subtle but important shift, which speaks to how Google is orienting themselves around their mission to "organize the world's information".

Google calls their smart agent the Google Assistant but, looking at a number of their demos, this is simply the front-end name for a much smarter and more transparent assistant that will manifest itself in many ways throughout Google's services. One of the main apps they updated and introduced new features for was Google Photos. Here is a great example of Google's smart assistant showing up in many ways while blending into the background and still adding value. The assistant will recognize people in your photos and suggest them as people you can share those photos with. Google Photos can create mini movies or photo albums you can have printed, and it intelligently selects the best photos from a group to insert into a photo book. All of this is powered by Google's many years of work in machine learning, and the fruit of that labor is starting to show up in somewhat invisible yet useful ways.

Gmail is another place they quickly demonstrated this smart agent. In Gmail, users have the option to use smart replies — contextual recommendations of phrases or sentences to use in reply to an email. The machine learning algorithm understands the context of the text in the email and can offer common responses, or ones learned from your conversation style. As creepy as this sounds (and, in some ways, is), it is also very useful for the end user. While I do believe a great many consumers care about privacy, the historical evidence also suggests that convenience trumps privacy in situations like this one.

What was most apparent to me during the day one keynote from Google I/O was how much machine learning/AI has become the backbone for virtually every service and app Google offers. It is deep in Search, the Google Assistant app, YouTube, Google Photos, Gmail, Calendar, Contacts, Allo, Maps, etc., and is becoming a staple of the ways Google is trying to add more value for users of their services. Again, things that save us time, are convenient, and are truly useful can, in many cases, trump privacy.

Google Services Everywhere
As Carolina articulated well yesterday, the platform or consumer engagement battle has, in many cases, moved away from the core operating systems to the software and services layers of value. This is why Google will continue to be as aggressive as they can in putting their services everywhere. Bringing the Google Assistant to iOS is evidence of this strategy, and it is similar to Microsoft's, where they are battling for consumer engagement and, in some cases, looking to steal Apple customers in the core areas where Apple wants to compete as well. Google Assistant and Google Photos are the two that come to mind as areas where Apple cannot afford to lose iOS customers to Google but where Google wants to win customers from Apple.

This strategy has deeper implications in developing or emerging markets where Android dominates the mobile landscape. Places like India, Indonesia, Southeast Asia, and many other regions are largely Android strongholds. The better these AI/smart assistant services become on Android, and the more deeply they become ingrained into the core OS, the more likely it is those customers will stick with Google services/Android, making it harder for Apple to acquire new customers for the iPhone. If my thesis is correct that things like Siri, or the full smart assistant experience from Apple, become a stickier glue, then the same is true for Google with Android customers.

This is likely why Google has re-launched their low-end, emerging market Android play with Google Go, which has learned from the failings of Android One and will attempt to unify the low end of the Android device market.

The Cloud TPU
Lastly, Google is tying this machine learning backbone to continued advancements in specialty server hardware and custom chips whose sole purpose is to accelerate training networks on data and running inference (querying) against that data to turn it into actionable intelligence.

This new Cloud TPU board furthers their work on the original TPU, which was a custom-designed ASIC (not an FPGA, as many originally thought). That chip was designed only for inference, whereas this new board includes custom chips that also aid in training networks with data. From the sound of it, I'd say this board does include an FPGA, either off the shelf or a custom design, but Google hasn't confirmed this observation.
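For readers less familiar with the training versus inference distinction the board is built around, here is a minimal sketch in plain Python/NumPy, deliberately not tied to TPUs, TensorFlow, or Google's actual stack: training iteratively adjusts a model's parameters using gradients, while inference is a single, cheaper forward pass with fixed parameters. The toy model and data below are made up purely for illustration.

```python
import numpy as np

# Toy linear model y = w*x + b. "Training" fits w and b with gradient
# descent; "inference" is just a forward pass with the fitted parameters.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=200)   # synthetic data

w, b, lr = 0.0, 0.0, 0.1

# Training: iterative and compute-heavy, needs gradients
# (the part the new Cloud TPU board is said to also accelerate).
for _ in range(500):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)
    grad_b = 2 * np.mean(pred - y)
    w -= lr * grad_w
    b -= lr * grad_b

# Inference: one cheap forward pass with fixed parameters
# (the part the first-generation TPU was designed for).
def predict(x_new):
    return w * x_new + b

print(f"w={w:.2f}, b={b:.2f}, predict(0.25)={predict(0.25):.2f}")
```

The first-generation chip accelerated only the inference half of this picture; the Cloud TPU board is aimed at both halves.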

Google is, no doubt, doubling down on machine learning and all the backend network, software, training, inference, and anything else they need to be the leader in machine learning from beginning to end. All the while, their success depends on pulling their users, and getting new ones, deeper into their ecosystem so they can make sense of not just the world’s data but each individual’s as well.

The Rich Media Generation

Studying generations under 30 provides a unique challenge. One of the demographic patterns we observe around technology in particular is how habits change with specific life stages. From a purely anthropological standpoint, this seems obvious. Your habits during your youth/formative years are different than those in the middle of your life and both of those stages are different than the end of your life. What we have to dig into are the things that become ingrained at a young age and remain constant throughout all life stages.

When I first joined Creative Strategies, one of my key focus areas in consumer research was millennials. I was tasked with this mainly because I'm only a few years removed from being a millennial, but also because our thesis was that learning from them would give us signals as to where the market will go in the future. We looked deeply at the technologies they were growing up with that would influence their demands on technology products going forward. This generation grew up with PCs, which is why we see such heavy demand for PCs in higher education, the workforce, and even the broader consumer market. While millennials grew up with a notebook/desktop as a constant presence in their lives, they also had smartphones by high school or college, and a world of computing in their hands shaped their formative years.

The demographic behind them, Gen Z (18 or younger), grew up with a very different set of technologies. Not just notebooks/desktops were pervasive in their homes but also smartphones and tablets, with all three heavily used for rich media (games, movies, TV, YouTube, etc.). I'd argue this generation has grown up with more rich media available to them at all times than any generation prior. I believe this reality is the inertia behind more and more "consumer" tech companies driving a rich media agenda. Snapchat, Facebook, Google, Amazon, Netflix, Apple, etc., all seem to be stepping up their initiatives around rich media in some way or another.

As I was looking at our research, I noticed Gen Z over-indexes above all other demographics in their interest in music, movies, and TV. It is an example of the data showing why so many companies are moving into rich media and, in particular, why the social media networks consume the vast majority of this demographic's attention. A world of rich media is at the fingertips of this demographic at any moment. How they consume this media, and the vast amount of it available to them, no doubt impacts and influences their expectations.

What is fascinating to see is how companies are starting to innovate around the unique expectations and ways this generation consumes media. One of the most interesting to me is short story “books” built around an instant messaging medium.

The one I've been keeping an eye on is called Yarn by Mammoth Media. This simple app presents short stories, some of which add interactive participation to determine the story's outcome, all in a chat app-like interface.

Even for myself, I found this interface quite compelling because you aren't sure what is going to happen next. For teenagers, this format makes consuming long-form, text-based media entirely comfortable, given how central text messaging is to their daily lives. Yarn, ChatBook, Tap, and a range of other chat story apps continue to rise in the app store rankings and, according to metrics shared with me by some of their investors, are spreading like wildfire among high schoolers and junior highers. These new ways to consume media are the kinds of innovations necessary to gain and keep the attention of the rich media generation.

While Gen Z currently over-indexes on music, movies, and TV, they will undoubtedly begin to embrace other interests as their lives progress. Things like news, politics, family, career, and health will become increasingly attractive to them as they age, since that is the pattern with other demographics as well. As Gen Z adapts and acquires more interests, their demands and expectations for how they consume and engage in those interests will remain. Shopping, politics, healthcare/doctors, banking, and more will all likely be fundamentally disrupted by this generation, who will want to engage in these practices in dramatically different ways than we do today. I used to think millennials were going to be a disruptive generation, and they have been when compared to Gen X and Baby Boomers. But the more we look at Gen Z, the more I think this demographic will be the most disruptive by far as they move from life stage to life stage.

Wearables and Preventative Health

When health and fitness wearables started hitting the market, I was a critic of the health side, mostly because I understood the health angle to be focused on the value proposition of health monitoring for people who knew they had health issues. This is still the case for many people today.

My father, for example, has Type 2 Diabetes and uses his Apple Watch to monitor his blood sugar in real-time. He has written extensively about it and how he uses Apple Watch as a health monitoring tool. Even in our own research of the market, we noticed the trend of people speaking with their doctors and being recommended a Fitbit or Apple Watch to monitor heart rate for irregularities, to make sure they were getting enough exercise to assist in lowering blood pressure or help to strengthen heart health due to a condition. Overall health monitoring is a key part of a wearable’s value today, if you happen to have a condition in need of monitoring.

This was the root of my initial criticism. For myself, in my mid-thirties and without any health issues, I didn't see the value in health monitoring. However, where things start to get really interesting for those of us without health problems is when wearables begin to play a role in preventative health.

A report came out recently from an app called Cardiogram, which is said to predict an irregular heartbeat with up to 97% accuracy. That becomes interesting if you had not previously been diagnosed with an irregular heartbeat. Having such an app on your Apple Watch can aid in discovering health problems before the person has any symptoms and thus help lead to treatment that can prevent further problems. This is just the tip of the iceberg for how a wearable can aid in preventative health.

I recently went in for my regular checkup and my doctor noticed my blood pressure was starting to tip toward the hypertension range. This is the kind of thing I learned you don't want to let go on for too long, as it can create heart problems in the future. So I ordered a Bluetooth-connected blood pressure monitor to track my blood pressure throughout the day, something I'd never done before, because I was intent on getting my blood pressure lower and avoiding medication. Fascinatingly, through a process of elimination, I learned that gluten is a key culprit in not just raising my blood pressure but keeping it high. The simple step of eliminating gluten got my blood pressure back down below 120/80. This was something I would never have discovered had I not been monitoring my blood pressure and analyzing what was causing it to get and stay high. It was an eye-opening experience in using technology for preventative health.

The next logical step is to embed all these tools to check and monitor our vital signs (heart rate, blood pressure, blood sugar, and more), equip software to do deeper analysis of our data, and use machine learning/AI to proactively find things that may be damaging our bodies and organs, then notify us so we can make changes before any real damage is done. Oftentimes, diet is a key factor in disease, but many people have no idea what the food they eat is doing to their bodies. Being able to monitor and check our vital signs in real time can lead to those insights and ultimately help humans make lifestyle decisions that keep them healthier longer and spot diseases that could be prevented if detected early.

We are getting closer to having the technology that can do this. Rumors have been circulating that both Apple and Google have been stepping up their initiatives to bring glucose/blood sugar monitoring to their wearable platforms. Friends in health tech and the health sciences have been saying there are promising technological breakthroughs that can pave the way to bringing blood pressure monitoring to things like Apple Watch in the short-term future as well. As interesting, and valuable, as the Apple Watch and other wearable platforms are today, their ultimate upside will come as they transition from health monitoring to preventative health.

Computing Platforms and Value Creation

One of the more powerful indicators of an ecosystem with not only staying power but the ability to ward off disruption is when a platform creates value for more than just the company that owns it. Perhaps the best modern examples of this are Apple's iOS platform and Microsoft Windows. Each has succeeded at creating a deep and intricate value chain for partners and customers of the platform. Both have an exhaustive list of software and services that are worth developers' and service providers' time and resources because the platform returns value to those who invest in it.

I recently participated in an exercise with some folks at Harvard Business School and the Clayton Christensen Institute looking at the role value creation can play in solving the Innovator's Dilemma and keeping disruption at bay. The thesis they are developing looks specifically at Apple, as they argue the breadth and depth of the value created for parties other than Apple is larger than on other platforms. An example of this would be iPhone accessories. While it's true there is an accessory industry for Android phones, Android's more open platform approach leads to a massive amount of hardware diversity. For accessories like cases this is a problem, since one manufacturer can't make a case for every Android phone. Instead, they pick the models they feel are the best selling or the ones most likely to have more valuable customers, like Samsung's Galaxy S or Note phones, for example. The iPhone is a much easier product to design accessories like cases for because there are far fewer designs to worry about. Not surprisingly, this dynamic allows an accessory business to thrive focusing solely on Apple products.

During this exercise, we talked about many companies with platforms that create value, to look at how deep and wide that value extends. The main three we focused on — Microsoft, Palm, and Apple — are the ones whose platforms support businesses that exist solely focused on one platform. What was interesting in this analysis was when we looked at Android. We were hard pressed to find many companies, certainly no large ones (the ones we did find were in developing/emerging regions), with a business focused solely on Android. Everyone who is supporting Android with their hardware, software, or services is also supporting iOS or Windows as well. Android stands out as the platform with the most global users but also the one with the shallowest value creation web for anyone other than Google.

The more I thought about this after our working session, the more it made sense that Google is a bit of an anomaly compared to the other platforms we explored. Google has a different business model. The key point is that the biggest segment they create value for, besides themselves, is advertisers. This is a significant difference between Google's platform and Apple's and Microsoft's. To carry this angle further, Microsoft and Apple have an end user in mind as the customer of their platform, whereas Google's customers are advertisers.

This angle sets up something I'm calling the Value Creation Paradox. Interestingly, Google is not the only one with this problem; Facebook may also suffer from this paradox. Both Google and Facebook provide a free service to consumers and evolve their offerings accordingly, yet the needs of the end user are not their primary concern the way they are for Apple and Microsoft. Facebook and Google must also continue to architect their platforms to meet the needs of their true customers — advertisers.

This raises the question: can a platform serve two master customers at opposite ends of the spectrum? Is the reason Microsoft and Apple have such deep webs of value creation for their ecosystems that they are laser-focused on end customer needs? Will companies like Facebook and Google struggle to build large ecosystems of value creation beyond themselves and their true customers, the advertisers? Ultimately, will free platforms, which over-index on creating value for advertisers out of the necessity of their business model, be more susceptible to disruption under this new thesis? These are all questions we don't have answers to, since this new wrinkle with Google and Facebook is so new. But we do know from all available research that companies with a more end-user approach to their platforms are the ones who have succeeded at creating breadth and depth of value for not just themselves but thousands of partners as well.

I'm not saying Google and Facebook don't provide a valuable service to end users. They absolutely do. I'm only pointing out that Google and Facebook's customers are different from Apple's and Microsoft's and, thus, the way they architect their platforms and products will serve the agenda of their true advertising customers. The reality is this focus, as of today, has had an impact on their ability to create an ecosystem of value creation beyond themselves.

If some of these observations are true, they carry implications for who is best positioned to succeed in the next platform wars, which may have something to do with cloud computing, artificial intelligence, augmented reality, and perhaps more we aren't entirely certain of yet. Are the companies best positioned here the ones with platforms laser-focused on the consumer as their customer, or the ones with advertisers as their customers? No doubt, it is a key philosophical question worth continuing to flesh out if any of us are to place bets.

Developer Season: Legacy vs. Future Platforms

We are entering developer season, which starts in full force with Microsoft's Build developer conference this week. Google's I/O developer summit starts a few weeks after that, and we conclude with Apple's WWDC event in early June. There is one over-arching theme I will be very curious to see each company address.

We are in the midst of a transition in our industry. We have deeply entrenched and fundamental computing platforms today in PCs and smartphones. These are not going away for a long time, but we are at the end of the S-curve for both. Each company must still deliver new tools to encourage the evolution of these legacy platforms, which means we will still see PCs and the cloud be a key part of Microsoft's message this week. These platforms are where the money is today, and developers still need better tools and technologies to keep creating for them. Similarly, Google and Apple will address the PC (including tablets) and the smartphone at their developer events. Legacy platforms are still where the money is, and that will be reflected.

However, what I'm interested in is how these companies will address the future platforms, which will be built around artificial intelligence, augmented/mixed reality, or a mix of both. I would expect new tools around AI, voice-based smart assistants and, hopefully, forms of AR/mixed reality, along with new developer tools to start experimenting with and building upon these potential future platforms. In both the case of AI and AR, we are not really sure how they will evolve or where they will go but, as with all platforms, it will be up to developers to move them forward and take us into the future with their creativity and ingenuity. Microsoft, Google, and Apple are all in position to begin making their plays for the future platform, and developers sit at the center of this competition.

What to Watch for with Microsoft this Week
Looking beyond the Windows platform, I am looking for a few things from Microsoft this week which can strengthen their position in the cloud and start to attract developers for future platforms.

  • Windows Cloud: Microsoft remains the second biggest cloud player after Amazon, offering a breadth of options for the private and public cloud. In this area, I am watching for them to dive into how they are empowering developers with machine learning and AI technologies as part of the full suite of Microsoft Cloud developer tools. Machine learning is a critical part of the upcoming cycle we are heading toward, and companies are looking for every competitive advantage they can get when it comes to training their networks. Hopefully, Microsoft makes moves here that help keep their cloud platform competitive in the machine learning/AI era.
  • New Cortana Tools: Cortana remains the front end for Microsoft’s AI play and I’m hoping they open up more access to Cortana for developers to integrate their apps through a combination of backend cloud services and front-end integration to their software. I’d love to see Microsoft go as far and wide as possible with Cortana integration.
  • Mixed Reality: Microsoft's play for augmented reality is called Mixed Reality and is showcased in their HoloLens solution. Microsoft has built a holographic shell around Windows that lets developers make Windows Holographic apps that can work on anything from a PC with modest resources, like a mid-level processor, all the way up to a powerful gaming desktop. Here, the tools Microsoft releases to continue attracting developers to make Windows Holographic apps are critical for their future. But so is getting more hardware partners to make headsets that take advantage of Windows Holographic apps. Without a path to a larger hardware installed base, it will be harder for Microsoft to attract developers. So, with regard to Mixed Reality and Microsoft, all eyes are on the software tools and the hardware ecosystem.

These are a few of the more future-forward things I'm looking for from Microsoft this week. Of course, the big question will be how much they talk about the legacy platforms vs. the future, given the legacy ones are the most valuable at the moment. However, all of these companies, during developer season, need to position themselves for the platform shift when it happens so they don't risk missing out.

Wearables Gain Momentum

Perhaps the most surprising part of Apple's earnings for many was the clarity provided around Apple Watch, with Apple stating sales had doubled year-over-year. Perhaps more interestingly, Tim Cook explained Apple Watch sales doubled in six of the top ten markets where it is sold.

What makes this worth paying attention to is that the bump came in a non-holiday quarter. Many of us in the analyst community who study this market have been adjusting our models to reflect what vendors and retailers are telling us about wearables — they are increasingly becoming holiday season product cycles. Digging into what happened this quarter, an interesting insight emerges. After speaking with many retailers and looking at the earnings and company narratives from Fitbit and Garmin, it is clear fitness trackers had a pretty bad first quarter.

This chart looks at Fitbit and Apple Watch sales for the past few years:

Speaking with retailers specifically, Fitbit has been the bellwether for the state of the fitness tracker market. It appears, at least with all the data available at the moment, the fitness tracker market has been struggling. Even Garmin reported a 3% revenue decline in the fitness segment of their business, which Garmin attributed to the rapidly maturing dynamics of that segment. So both Fitbit and Garmin were down in their health/fitness segments but Apple Watch doubled? Something else must be going on in the market. I have two theories.

The first is that Fitbit has served as a feeder for Apple Watch. This is a thesis we have held ever since Apple Watch entered the market but have not had the data to quantify. The thesis is that a consumer buys a basic fitness tracker like a Fitbit, owns it for a year or so, validates that there is value in a wrist-based device, but wants more than a basic fitness tracker offers, so they opt for a smartwatch, in this case an Apple Watch. We saw shades of this thesis playing out at the end of last year, as this was the path of Fitbit Blaze owners who moved up to Fitbit's smartwatch-style device from a basic fitness tracker. Looking at this just from a fitness perspective, it is possible consumers who were previously Fitbit owners, and had established value in the idea, graduated up to an Apple Watch. This is why it may be no coincidence that Apple Watch sales eclipsed Fitbit's in a quarter for the first time.

The second theory is that the market is moving beyond fitness. It's possible consumers are starting to see the appeal of Apple Watch beyond fitness and more for its broader functionality. That's not to say the fitness part isn't important, but perhaps it is no longer the single largest factor driving sales. Apple Watch has also gained quite a bit of momentum and is starting to show up in greater volume in more markets. Perhaps the Apple Watch has proved itself to the skeptics now that they see the product on more people, more often, in public. Apple, therefore, is starting to gain the interest of people who once expressed none in the category. The continued marketing campaigns may have helped as well, but there could be a snowball effect starting to happen with Apple Watch.

We know customer satisfaction remains high for Apple Watch, so it stands to reason that, as more and more people (roughly 25-30 million consumers) tell others how much they love the product and the value they find in what it does, at some point we could, or should, see a tipping point. Perhaps that time is now.

Of course, we will need a few more quarters of sales to see which of these theories is playing out. 2017, in my mind, has become somewhat of a defining year for this category, a test of whether it can break out beyond fitness. Granted, we estimate the total global market for a purely fitness-centric wearable to be between 200 and 300 million units, still a decent market size. Ultimately, we are still bullish on the value, but the upside will ideally go beyond health and fitness. That remains the story to watch.

Apple’s AirPods: a Hit With 98% Customer Satisfaction

Last week, we ran a follow-up to the voice assistant research study we published around this time last year. Creative Strategies again partnered with our friends at Experian to see what has changed with voice assistants and to explore some new products as well. This year, we added Apple's AirPods to the study since Siri integration is a key feature of AirPods. In the next few weeks, we will publish more insights around what we learned about the Amazon Echo and Google Home, but this article will focus on Apple's AirPods. We used every available resource to track down as many AirPods owners as we could. In the end, we found 942 people willing to take our study and share their thoughts on Apple's latest product.

Customer Satisfaction
The big story is customer satisfaction with AirPods is extremely high. 98% of AirPod owners said they were very satisfied or satisfied. Remarkably, 82% said they were very satisfied. The overall customer satisfaction level of 98% sets the record for the highest level of satisfaction for a new product from Apple. When the iPhone came out in 2007, it held a 92% customer satisfaction level, iPad in 2010 had 92%, and Apple Watch in 2015 had 97%.

While the overall satisfaction number is remarkable, a second question we asked of these owners stood out even more. We used a standard benchmark question called the Net Promoter Score, which measures a consumer's willingness to recommend the product to others. Respondents answer on a scale of 0 to 10, with 10 being extremely likely to recommend and 0 being not likely at all; the score itself is the percentage of promoters (9s and 10s) minus the percentage of detractors (0 through 6), so it can range from -100 to 100. It was this number that surprised me. Apple's Net Promoter Score for AirPods came back as 75. To put that into context, the iPhone's NPS is 72. Product and NPS specialists will tell you anything above 50 is excellent and anything above 70 is world class. According to SurveyMonkey's global benchmark of over 105,000 organizations that have tested their NPS, the average is 39.
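For those wondering how a 0-to-10 question produces a score like 75, here is a minimal sketch of the standard NPS calculation. The ratings below are made up purely to illustrate the arithmetic; they are not our survey data.

```python
# Standard Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
# Passives (7-8) count toward the total but not toward either group.
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Illustrative distribution: 80% promoters, 15% passives, 5% detractors.
sample = [10] * 80 + [8] * 15 + [5] * 5
print(net_promoter_score(sample))   # -> 75.0
```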

This incredibly high Net Promoter Score intrigued me for another reason. We know from our profiling questions that most AirPods owners fall into the early adopter category. This is not surprising, since early adopters are generally the first to buy new technology products. We discovered something interesting in the first few sets of early Apple Watch research, as well as our studies on the Echo and Google Home: early adopters tend not to give products high recommendations. The first few studies we did on Apple Watch had a lower NPS, as did the Amazon Echo and Google Home. Early adopters tend to understand they buy products early and, oftentimes, they do not feel those products are ready for the mainstream. Certainly, a product's NPS rating goes up or down over time, but our experience and years of data on this subject are clear that early adopters rarely give new technology products a high NPS. AirPods broke the mold in this case, as even the harshest critics and users of new technology (early adopters) felt AirPods were ready for the mainstream.

We asked respondents to briefly explain their ranking, and an analysis of the write-ins showed the most frequently used words were:

  • Fit
  • Magic
  • Sound Quality
  • Convenient
  • Love
  • Good Sound
  • Battery Life

While those were some of the most common words used by our participants, many general themes in the write-in section were quite telling. Folks raved about the pairing process with their phone. Many indicated how surprised they were by how well AirPods worked, citing bad experiences with prior Bluetooth headphones. Another common theme I spotted in the write-in section was consumers saying they did not realize how convenient and useful wireless headphones were, since AirPods were their first pair. Many indicated they liked the AirPods even more than they thought they would. That's always a sign of a great product.

While there was some negativity in the write-in section, it was mostly around concerns or issues with fit or connectivity problems. But these were certainly an extreme minority.

Feature Satisfaction
We took the study a little deeper as well, looking at customer satisfaction around certain features.

I charted the top six features with the highest satisfaction. The number that stood out most in this list is comfort and secure fit. There was a great deal of debate when AirPods first came out that, without a cable, they would not stay in the ear or people would lose them easily if they fell out. We can now dispel that myth, as Apple has designed a product that fits most people's ears and, more importantly, fits securely and does not fall out for the vast majority of owners. Only 4.6% of AirPods owners who participated in the study said they were dissatisfied with the fit and how snugly the AirPods stay in place.

Consumer Sentiment for AirPods
Lastly for the AirPods part of our study, we added some general sentiment questions to see what kinds of feelings or emotions consumers agreed/did not agree with regarding AirPods. A couple of stand out answers are worth mentioning.

  • 84% of respondents strongly or somewhat agree that using just one AirPod at a time makes sense in certain situations. This means AirPod owners are actively using just one AirPod at a time in some contexts. Not necessarily a new behavior if we reflect back to the Bluetooth earpiece days for making calls, but certainly an additional value proposition to Bluetooth headphones as a category.
  • 88.97% of respondents strongly or somewhat agree AirPods consistently pair to their iPhone as soon as they put one in their ear. While Bluetooth reliability has come a long way, we know many Bluetooth headsets on the market still struggle to pair consistently. This data point suggests the instant pairing reliability of AirPods is quite high.
  • 82.5% of consumers would like more control over their content by tapping the AirPods to do things like turn the volume up or down or skip to the next song. Right now, those actions require the phone or a request to Siri, but it appears some way to control media by touching or tapping the AirPods themselves is desirable.
  • 82% of respondents strongly or somewhat agree AirPods are their favorite Apple product launched in recent memory. What makes this interesting is that, while our respondents mainly lean toward early tech adoption, we do not have a massive group of hardcore Apple fanatics. Overall, our respondents feel Apple has released one of its best products in a long time.
  • 62% of respondents strongly or somewhat agree AirPods are causing them to consume more audio content (music, books, podcasts, etc.) than before they owned AirPods. This is fascinating, as it could indicate AirPods become a catalyst for greater use of Apple and third-party services.
  • Lastly, we wanted to see how much our participants in the study still defaulted to old habits or didn’t trust AirPods enough to completely go wireless and fully ditch their wired headphones. To our surprise, 64% of consumers somewhat disagree or strongly disagree they keep wired headphones handy just in case AirPods don’t work.

Apple has accomplished a rare feat we have not seen in many years of studying owners of brand new technology products. They have succeeded at delivering a product with industry-best customer satisfaction and Net Promoter Score ratings. Those two things alone highlight the quality of AirPods overall and the reality that there will be very few unhappy owners of Apple's latest product.

How Augmented Reality will Sneak Up On You

A couple of interesting trends are emerging that I believe will bring augmented reality experiences to the masses, only the masses probably will not recognize those experiences as augmented reality.

As smartphone cameras get better, gain more local processing, and as machine learning advances, it is clear the biggest trend coming to smartphones is the camera sensor as a platform. This could come as companies like Facebook, Google, Amazon, and even Apple build core experiences around the smartphone camera. It could mean giving developers access to create stores to sell lenses, filters, or effects. Or it could show up in standalone apps, which is the case today. Looking at a number of apps utilizing some early technology from both the camera sensor and server-side machine learning, we have a few examples I believe point the way.

First, Snapchat. While some may want to argue with me, the reality is the hundreds of millions of millennials using Snapchat every day are having an augmented reality experience. Snapchat's lenses already fit the description of AR and do so in a fun and entertaining way. This, for example, is augmented reality.

That is not just a digital overlay of glasses or a hat on my head and face. The digital overlays recognize aspects of my face and react digitally, in real time, as I talk, move, or make facial expressions. This lens also adds elements to my face to make me look older to fit the character. All of it is done digitally, both locally on the device and via Snapchat's server-side processing.

This is augmented reality, but in a way most people never think of with that term. It's just entertaining, and AR happens to enable the fun.

Next, Memoji. There is an app called Facetune which uses machine learning to let people completely change how they look in the app. You can give yourself a new nose, chin, cheeks, teeth, etc. It’s a powerful app that uses a great deal of machine learning to deliver fascinating digital tweaks to a simple photograph. The company behind Facetune launched an app called Memoji that turns any selfie into an animated emoji. Here is an example.

Facetune launched this app for free to highlight some of the face manipulation technologies their app/service includes. Both apps provide an augmented reality experience by definition but are disguised as a service to alter your appearance or express yourself by turning a selfie into an emoji.

Another app getting some exposure lately is FaceApp. This app takes any selfie and can digitally alter it in several ways. You can add a smile, change gender, or make yourself look younger or older. While not perfect, it is another great example of an augmented reality experience in disguise.

These are a few simple examples, but you can see where this can go. As both Carolina and I have mentioned, imagine how this can impact commerce when you can take a picture or video of yourself and see how clothes or complete outfits look on you with extreme accuracy. Paint companies, and even Home Depot and Lowe's, all offer experiences where you can use your smartphone to take a picture of a room in your house and test different paint colors in real time without ever having to leave your home.

All of these experiences show up as useful, or fun, and won't be sold or positioned as "augmented reality." Instead, their value will sneak up on consumers and adoption of these technologies will feel quite natural. The commonality here is that consumers' first experiences with AR will happen through the smartphone camera. As these cameras become more capable and do full depth sensing and 3D scanning of physical space, developers and service providers will be able to go further, using the camera to extend AR experiences into new areas.

The camera sensor is emerging as a platform and smart companies will take advantage of that in big ways over the next few years.

The Retail vs. e-Commerce Chicken and Egg

There was an interesting article in The Atlantic that dove deep into how online shopping is causing such turmoil for brick and mortar retailers. It's a good, long read. One paragraph stood out to me as the key to the story.

On a whiteboard, he drew a series of lines representing the rising share of online sales for various kinds of products (books, DVDs, electronics) over time, then marked the years that major brick-and-mortar players (Borders, Blockbuster, Circuit City and RadioShack) went bankrupt. At first the years looked random. But the bankruptcies all clustered within a band where online sales hit between 20 and 25 percent. “In this range, there’s a crushing point,” Hariharan said, clapping his hands together for emphasis. “There’s a bloodbath happening.”

Using this logic, we would assume any retailer selling clothes, shoes, and books is among the immediate losers in this scenario. We have already seen what happened with big box book retailers like Barnes and Noble, which scaled back locations dramatically thanks to Amazon — only boutique book shops remain for the most part. The latter is a key observation for how retail may ultimately change in every category. We have yet to see the full extent of the pain retailers like Macy's will feel. Fashion, too, is likely to end up less about big box stores and more about unique experiences.

Retail's chicken and egg problem is a fascinating dynamic when we look at the data point shared in that paragraph. The only reason e-commerce can and will get to 20-25% sales share for a specific category is that consumers can go into a physical store, try things on, and decide what they want, then go online and find a better price. We know the top two categories for online shopping are clothes and shoes, and we also know those two categories have the highest number of "showroomers" (people who find what they want in stores, then buy online).

The challenge for retailers is that e-commerce players will always beat them on price. Retailers have to factor in their cost per square foot of retail space, employees, etc., making their overhead a disadvantage against online players. We know from our research the single greatest driver to purchase a product online instead of in a store is price. This means physical retailers are at a fundamental disadvantage. Yet many of the top e-commerce categories depend on physical space for consumers to try on, size up, touch, feel, and ultimately decide what they want to go home and buy on the internet.

How this gets solved seems tricky, at least in the categories I mentioned (I should add beauty products as well, given they are in the top five of monthly online purchases by consumers). If a store like Macy's tries to match an online price and offer it to the customer while they are in the store, it can't survive because it loses money on that strategy. If online retailers like Warby Parker or Bonobos, which started online and then opened physical stores, keep this strategy, it becomes harder for them to continue to offer the best price. They themselves become subject to low-end disruption in the same way they played a role in disrupting big box retail.

One possible solution is visual computing. The idea has been thrown around before, but I think I can finally see how it becomes a reality. I'll use Warby Parker as an example. Say you are in the Warby Parker app looking for new glasses or sunglasses. The app has an option to use the smartphone camera and see what different models look like on your face. Some apps try this today but fall short because they don't actually know the dimensions of your face or how to compare them with the dimensions of the glasses. But advancements like Intel's RealSense cameras have dramatically increased camera sensors' ability to do true 3D depth sensing. Pair this with some visual computing machine learning and I can use my smartphone camera to scan my face and get accurate size and width dimensions to compare against the dimensions of the glasses. When this happens, you will be able to see exactly how the glasses will look on your face. It may not be as good as trying them on and seeing how they feel, but it's going to come darn close. Similarly, it seems Amazon has been trying to work out ways to use your smartphone camera to do a 3D scan of your body and then show you how clothes would actually fit.
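To make the matching step concrete, here is a minimal sketch of the idea with entirely hypothetical numbers: a depth scan yields a real face width, and frames are filtered by how closely their width matches, rather than guessing from a flat photo. The measurements, catalog entries, and tolerance below are all invented for illustration.

```python
# Hypothetical sketch of fit matching once true 3D face dimensions are known.
face_width_mm = 138.0   # assumed output of a depth-sensing face scan

frames = [               # invented catalog entries with frame widths
    {"model": "A", "width_mm": 132.0},
    {"model": "B", "width_mm": 137.5},
    {"model": "C", "width_mm": 145.0},
]

tolerance_mm = 4.0       # invented fit tolerance
good_fits = [f for f in frames if abs(f["width_mm"] - face_width_mm) <= tolerance_mm]
print(good_fits)         # -> only frame "B" falls within the tolerance
```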

All of this requires more advancements in camera sensors to have full 3D depth sensing capabilities and backend machine learning to match the body with the digital item. As I said, these have all been ideas tried out before but we didn’t have a clear path to know how we get there. Now it seems we do. And, by the way, these experiences fall into things that qualify as augmented reality as well.

In the Mobile-First Era, Don’t Forget the PC

The last few years have seen a fascinating shift in storylines, as well as in the data around those storylines. Many of us who research consumer trends in the industry focus quite a bit on the endpoints because they serve as gateways to broader software and services experiences. For this reason, our eyes have been squarely on studying what people do on smartphones, PCs, and tablets. Since 2010, when the iPad hit the scene, the role of the PC has come under great scrutiny. Is it a dying form factor? Is it something consumers no longer need? Is the smartphone the only device humans will use someday? Will the tablet kill the PC? These questions, and many more, have been a focal point in the consumer hardware discussion.

The debate is relevant because it informs businesses on where to focus their resources. It is abundantly clear the smartphone is the central and primary computing device for billions of people. Knowing this, any business should no doubt employ a mobile-first strategy with their software and services. Mobile-first simply means assuming the smartphone is the primary engagement point with your product. Of course, this will vary by the type of application. Something like Netflix, for example, is primarily consumed on larger-screen devices like PCs, TVs, and tablets. Microsoft Office and other enterprise or commercial applications are primarily used on PCs and Macs. In all these cases, where the application and workflow are better on larger screens, they still have a complementary mobile experience. We live in a multi-device world where most humans in developed markets like the US, Europe, China, etc., use both a PC and a smartphone for varying things throughout the day. But, because the smartphone is the computer we have with us at all times, it is crucial for even PC-first applications to have complementary experiences on the smartphone.

But, when it comes to consumer software and services, the strategy gets flipped. Mobile-first, or mobile-only, has been the mantra for developers and consumer software strategists for the last few years. I'd like to argue that even many of these mobile-only apps or solutions can benefit from a complementary PC experience as well.

Interestingly, global data tells us the PC is still used heavily on a daily basis across nearly all demographics.

As you can see, the amount of time spent per day on PCs is still significant. Our estimates are that ~1.3 billion people personally own a PC, compared to the nearly three billion people who own a smartphone. The global average time spent using a PC each day by those ~1.3 billion people is 3.54 hours. What became clear a few years ago was the smartphone was not necessarily taking usage time away from the PC but was adding to the total time its owners spend using devices and being on the internet each day. Looking back through years of data, daily time spent using a PC has stayed roughly flat while daily time using a smartphone has grown dramatically. People seem to be using both devices, independently and in tandem, to browse the web more, communicate more, play games more, watch videos more, be on social media more, shop more, etc. It is also important to note that, globally, millennials still spend a lot of time on their PCs as well. The fallacy is to think the only way to reach millennials is with a mobile app. While a mobile app is the primary way to reach millennials, the data suggests it would be a mistake not to also offer them some way to engage with your software or services while they are at their PC.

The PC is still an important engagement point, even in the mobile-first era. However, the strategy for bringing mobile experiences to the PC needs to understand and utilize the device's benefits. The worst thing any developer or business can do is simply duplicate their mobile strategy on the PC. These hard lessons were learned when many apps and services failed because they just duplicated their desktop experiences on mobile and did not take advantage of the smartphone's unique advantages.

If you agree with my logic, the debate turns to whether to just make a website or make an app. To me, the path is clear — make an app. Both Windows and Apple offer app stores and, in many cases, the ideas I'll share make more sense as an app rather than a browser experience. Take Twitter, for example. Twitter is a mobile-first experience and a primary engagement point. Yet the website and Twitter's own desktop client are pretty poor in comparison to other client-side apps for macOS and Windows 10. I'd argue Twitter is losing a significant engagement point on the PC, given how much time people spend browsing the web for news and entertainment while on their PCs. Thinking of millennials, Snapchat is another example that comes to mind. We know millennials spend a lot of time on their PCs, and millennials with Macs engage quite heavily with iMessage on the Mac. The value of being able to text and message friends from the device you are in front of, in this case the PC, makes a lot of sense. Snapchat's chat feature is the sticky point for many millennials, and I'd argue that even bringing just their chat client to the desktop would make a lot of sense. The counter-argument is that it isn't that hard to pick up your smartphone, open the app, and do what you want to do. However, having observed a range of consumers who have both desktop and mobile apps of the same software, there is no arguing that being able to do what you want or need to do on the device you are using is far superior. While it seems easy enough to just pick up your smartphone to use an app you don't have on your desktop, that view misses the increased friction in that experience. I use Slack, for example, for a wide variety of work and personal things, and if Slack were not available on the desktop I would not use it nearly as much as I do.

I can see many cases where Instagram could benefit from a smart desktop app. Maybe Facebook could as well or, at least, could bring Facebook Messenger to the desktop as an app. Most companies want to just offer a browser-based PC experience but, in that scenario, your experience gets buried in the many tabs consumers have open at any given time. Don't make your PC experience just a tab in a browser — it will get lost. Apps offer rich notifications and a more visual experience. For this reason, I think the best strategy to re-engage with your customers on the PC is via an app, not a website.

Being mobile-first is the right strategy. Prioritize the mobile experience when you know that is the primary way your customers will engage. Just don't forget your customers also spend many hours per day in front of their PCs and, in some cases, it is wise to think about how best to offer a complementary PC experience in the hope you can increase your total engagement time with your customers.

Semiconductors are Eating the World

Marc Andreessen famously articulated that “software is eating the world”. However, embedded in this observation is the central point that, before software can eat the world, semiconductors must eat it first. Thanks to Moore’s Law, this is exactly what is happening.

We are on a journey to connect not just unconnected people and groups to the internet, but also unconnected things. Smoke alarms, thermostats, refrigerators, coffee pots, toasters, basketballs, tennis rackets, cars, water heaters, fuse boxes, golf clubs, pet collars, band-aids, thermometers, toilets, light bulbs, door locks, and more: all the things previously unconnected are being outfitted with small, power-efficient semiconductors.

It is a reality that semiconductors are eating the world which, in turn, makes it possible for software to thrive. As semiconductors permeate the world, it creates opportunities for software and services to break into previously uncharted territory.

The Invasion of Silicon

IC Insights estimates that, around 2017, one trillion semiconductors (integrated circuits and opto-sensor-discrete, or O-S-D, devices) will be shipped annually, marking not just a new milestone but the new normal.


Notable milestones in semiconductor unit shipments include:

– 1987: semiconductor unit shipments first breached the 100 billion mark
– 2006: exceeded 500 billion units
– 2007: exceeded 600 billion units

When it comes to transistors, the numbers get even more staggering. It is estimated that, since the invention of the transistor, ~2.9 sextillion transistors have been shipped. That is a 29 followed by 20 zeros. I also came across this tweet last week.

Eight trillion transistors per second. This number is only going to increase, and by orders of magnitude, in the years ahead. That means one trillion semiconductors shipped annually by 2017, and this number will only grow for the foreseeable future.
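For a sense of scale, here is a quick back-of-the-envelope calculation in Python, assuming the eight-trillion-transistors-per-second figure from that tweet holds across a full year; the numbers are illustrative, not a forecast.

# Rough annualized transistor shipments implied by the tweeted rate.
per_second = 8e12                        # transistors shipped per second (figure from the tweet)
seconds_per_year = 60 * 60 * 24 * 365
per_year = per_second * seconds_per_year
print(f"transistors per year: {per_year:.2e}")   # roughly 2.5e20
cumulative = 2.9e21                      # estimated all-time transistor shipments cited above
print(f"years at today's rate to match the all-time total: {cumulative / per_year:.1f}")

Run as written, that works out to roughly 2.5 x 10^20 transistors per year, which implies the all-time total cited above could be matched in a little over a decade at today's pace.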

By 2025, there will likely be roughly five to six billion people connected to the Internet and over 50 billion connected products.


What is fascinating about where we are heading is today we can, for the most part, count the things we own that connect to the internet. Over the course of the next decade this will become impossible. Nearly everything around us will be connected in some way, shape or form. This is the result of semiconductors eating the world. Semiconductors will enable a connected world where almost everything becomes technology, or at least enabled by it. In this future, technology disappears because everything is, essentially, technology. All of this thanks to the pursuit of Moore’s Law.

The Relentless Pursuit of Moore’s Law

Yesterday marked the 50th anniversary of Moore’s Law. What always struck me about Moore’s Law is that it’s more of an observation than a law. Even more interestingly, it could have ended at any time. It has been more of a benchmark and a goal for Intel to pursue. Intel has worked relentlessly to keep Moore’s Law alive and the entire technology industry has benefited from it. Other semiconductor companies like AMD, and the host of companies in the ARM ecosystem, have benefited from the pursuit of advancing process technology so semiconductors can invade the world. Thanks to the pursuit of Moore’s Law, computers that once filled up rooms now fit in our pockets and on our wrists.


This pursuit of Moore’s Law will continue to make it possible for semiconductors to eat the world. As it happens, the unconnected world becomes connected. As Moore’s Law continues, connected things get smarter. We have a computer with two billion transistors in our pockets. In five to six years those same pocket computers could have eight billion transistors. What would we do with a pocket computer with eight billion transistors? We are going to find out.

Moore’s Law will continue to benefit the industry. Intel and Samsung are both shipping semiconductors built on 14nm process technology. Next will be 10nm and then 7nm. With each step forward, semiconductors will continue to eat the world and provide the mechanism for software to follow on and eat its fair share of it too.

Will Moore’s Law end? This remains the question we don’t have an answer for. But the threat of the end of Moore’s Law is nothing new. In all likelihood, the economic benefits of Moore’s Law will end, or at least be challenged, before the science does.

Interest in the Samsung Galaxy S8 – Consumers Have Spoken

With the upcoming availability of the Samsung Galaxy S8, we were curious what consumers thought of the device and how interested they are in purchasing one. We teamed up with SurveyMonkey Audience to do some research on US consumers to better understand their interest level in the phone and its newest features. We also explored whether the Galaxy Note 7 battery issues were a factor in consumer interest and we threw in some questions around voice assistants for good measure. In all, we surveyed 923 consumers. These are the key findings.

Note 7 Battery Impact
In our annual fall smartphone study, we explored the issues surrounding the Note 7 and whether or not the media coverage and awareness of the battery problems led to a large amount of negative sentiment. In that study, fielded in the fall of 2016, we learned most consumers (62% to be exact) did not see the Note 7 battery fires as a deterrent to purchasing a Samsung smartphone in the future. The number was even higher among existing Samsung smartphone owners, 73% of whom said the Note 7 issues would not deter them from purchasing a Samsung smartphone in the future. Knowing Samsung customers are a loyal bunch, we feel both those percentages are good news for Samsung.

In this most recent survey, we found similar results. This study revealed 53% of consumers said the Note 7 issue has not impacted their interest in the Galaxy S8, while 17.7% said they were not sure or undecided. Only 28% said definitively the Note 7 battery problems negatively impacted their interest in the Galaxy S8. Again, knowing Samsung owners are a loyal bunch and are the most likely candidates to buy an S8, only 16% of existing Samsung smartphone owners said the Note 7 problems are impacting their interest in the new device.

Overall, I’m confident the data we have from the fall, and this most recent data, suggests the Note 7 fires were never a big roadblock for consumers to begin with and even less so now. This should alleviate any concern over the Note 7 fallout impacting the sales of any Samsung smartphones released this year.

Interest in the Galaxy S8
Overall, interest in the new S8 seems low. However, I expect Samsung to begin their marketing blitz and carriers to start heavily advertising the S8 in the coming weeks and months which will help with interest over time. The more important breakdown to this question is to look at interest in the S8 by existing smartphone owners and those looking to upgrade in the next three to six months.

Interest is higher among existing Samsung smartphone owners than among any other group of consumers. More importantly, drilling down on folks who expect to upgrade their smartphone in the next 3-6 months, 36% of upgraders in that time frame are interested in the Galaxy S8. Interestingly, 21.7% of upgraders in that time frame stated they were extremely interested. These are consumers looking to upgrade sooner rather than later who are not interested in waiting until the fall. Again, the fact that 36% of consumers looking to upgrade are interested in the new Galaxy S8 bodes well for Samsung.

Looking deeper at consumers who indicated they have interest in the S8, the features that stood out most were the Infinity Display/larger screen (27%) and the eight-megapixel front-facing camera (23%). Bixby, the most hyped feature of the S8, scored relatively low, with only 13% of interested consumers saying it was the feature that interested them the most. That leads into an interesting finding we have on voice assistants.

Voice Assistants are not yet a Purchase Driver
While the usage of voice assistants like Siri, Ok Google, Alexa, and Cortana has certainly been rising, they still have a long way to go to convince the market of their greater value. It may not be a surprise, but voice assistants are not the main feature or reason anyone is buying a smartphone. The earlier points I made confirm the purchase drivers are still mainly the camera and the screen. We wanted to get a sense of which voice assistant US consumers feel is the best, so we included a question in our study asking respondents which voice assistant they felt was the best. Below are the results.

First, Siri has the lead, which likely speaks to a greater portion of US consumers having tried Siri than any alternative and thus being able to form an opinion. Just looking at iPhone owners, the sentiment that Siri is the best jumps to 46.6%. Among Android owners, 36% said Google’s Assistant is the best. Interestingly, 11.9% of Android owners said they thought Siri was the best voice assistant while only 6.3% of iPhone owners said Google’s voice assistant was the best. But here is where I felt things got interesting.

This, like all the questions in our study, allowed only a single answer. We asked consumers to choose the answer that best fit their opinion. We gave them a simple “none of the above” option and we also gave them the option to say they think voice assistants are useless. Surprisingly, 29.4% of respondents deliberately chose the option that voice assistants are useless. Consumers are a tough crowd, with a lot of convincing to do.

Lastly, we asked consumers what they thought of Bixby and whether they expected the new Samsung smart assistant to be better, worse, or the same. Interestingly, 13.2% of respondents showed some confidence in Samsung and Bixby, saying they think it will be better than Siri, Google Assistant, and Alexa. 38% said they felt it will be about the same, while the largest group, 43%, said they don’t use any of the voice assistants so they have no opinion.

As we dug into this study, we uncovered more insights than I have time to share, but the key takeaway is that Samsung remains a solid brand despite the Note 7 issues. Consumers are still showing interest in Samsung’s latest products and the new innovations they are bringing to market. While voice assistants still have a lot of convincing to do in order to get consumers to trust them and use them more, there is enough potential here for Samsung to keep investing in Bixby, since voice interfaces and voice assistants will become more valuable and desired features in the coming years.

I’ll have more to share on voice assistants and the voice UI soon as we are about to field our Voice Assistants 2.0 research study.

Millennials and Apple

One of the key narratives I regularly encounter surrounds the Millennial demographic and Apple. In our most recent millennial study, we included some sentiment questions about Apple we think give us some insight into the current mindset of millennials around Apple hardware. There are many data points collected by us and many other researchers to suggest Apple hardware is still highly desirable to this demographic and repurchase intent for iPhones in particular remains high among millennials. I have no data to suggest this dynamic will change, and there was certainly a time when this group held Apple in the highest regard from an innovation standpoint. One of our questions was intended to see what 18-24-year-olds specifically felt about Apple when it comes to innovation.

In our last millennial study, we asked a specific question: “Which statement best reflects your feelings around Apple products?” We gave the respondents multiple choices which all reflect statements we heard frequently when we interviewed this demographic on this particular topic. The results were fascinating to dig into, particularly when you look at gender and platform. Here is the chart and answers to the specific choices we gave them.

It’s fascinating that millennial men have a much more critical eye and opinion of Apple as an innovator than women do. After seeing this data, it makes sense that the most vocal personalities on Twitter making noise about this are men, and often millennial men. This chart also speaks volumes about a fascinating difference between the relationship men and women have with technology.

The chart below stood out to us as well. While the answer options we gave them were not necessarily limited to the notebook/desktop category, it was interesting to see this demographic’s answers when we looked at notebook/desktop operating systems — Mac and Windows owners.

Fascinatingly, Windows-owning millennials feel Apple should leave PCs to Microsoft. Another interesting statistic is nearly 25% of both Mac and Windows owners feel Microsoft is catching up with Apple in PC design and innovation.

I look at all of this in two ways. First, if the iPhone 8/X/Pro, or whatever it is called, is all we hear it’s shaping up to be, I think the perception of Apple and innovation will go up. However, that sentiment may still only be applicable to iPhones, and some of the sentiment we see and hear around Macs may persist until Apple does something to bring a sense of innovation back to the Mac. I still maintain the PC is an incredibly important platform. Even those under 30 still spend about four hours a day on their PCs or Macs, roughly the same amount of time they spend on their smartphones. So the PC/Mac is still an important category from both a usage and engagement standpoint.

My other takeaway from this study isn’t charted here but has to do with Apple going beyond hardware with this demographic. Apple’s software (apps) occupied only two spots in the top ten apps this group uses daily — iMessage at #6 and Safari at #8. Both are good apps to have this demographic using daily, but why not Mail? Gmail was number 5. This is why I hope efforts like Clips and other new apps for media creation and expression pay off for Apple. I hope Apple can engage this demographic in more ways than just hardware since that will be necessary in the years to come.

Machine Learning Platforms and the Cloud Computing Era

There is a battle brewing over the next platform. Specifically, the next platform that attracts developers. Right now, the race is on to create machine learning platforms that attract customers to use the latest generation of tools and commit to a cloud platform with machine learning advantages over the other cloud platforms. Among the platforms openly available to third parties, the leaders here are Amazon, Microsoft, and Google.

What makes this next battle interesting is it will be the first time the platform truly moves from the native device to the cloud. When we have spoken of platforms before, we have mostly talked about them running on a local device, where the main brain (CPU) lives. This conversation is now shifting to cloud platforms where millions of brains (CPUs, GPUs) live. Machine learning will sit at the center of this new cloud platform battle and we are just getting started.

The big three attacking this right now, Amazon, Microsoft, and Google, are racing to build network training models, proprietary software, backend services, and, in some cases, even proprietary hardware in order to differentiate their offerings and attract companies to build solutions on their cloud platforms. While the cloud platform battle has been going on for some time, as we think about Amazon AWS and Azure, what’s new is the machine learning technology, which will be the most important differentiating factor for anyone’s cloud computing platform going forward.

Without diving too deep into the common learning techniques, it is important to note two hurdles I have been watching the leaders try to clear. The first is labeled data. Most data, particularly on the visual side for things like photos and videos, requires some form of tagging so a computer knows it’s looking at a picture of a dog, cat, tree (and what kind of tree), street sign, pedestrian crossing the street, parked car, etc. To train networks today, massive amounts of labeled data are necessary. When it comes to things that are already text based, it gets a little easier. Hence, for things like predictive text on your keyboard or search queries, it is a bit easier to acquire data and train a network. The holy grail is to develop a solution that allows a computer to be trained with unlabeled data, and even to learn from that unlabeled data so it becomes more accurate as the pool of samples grows.
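To make the labeled-data point concrete, below is a minimal supervised-training sketch in Python (using NumPy). The model, the data, and the labels are all invented for illustration; the point is simply that the training step cannot be computed without a label attached to every example.

import numpy as np

# Toy supervised training: every example X[i] must come with a label y[i].
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                  # 100 examples, 4 features each
true_w = np.array([1.5, -2.0, 0.5, 0.0])
y = (X @ true_w > 0).astype(float)             # the labels, e.g. 1 = "dog", 0 = "cat"

w = np.zeros(4)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))         # predicted probability of class 1
    w -= 0.1 * X.T @ (p - y) / len(y)          # the gradient of the log loss needs y

accuracy = np.mean(((1.0 / (1.0 + np.exp(-(X @ w)))) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")

Remove y from that loop and there is no gradient to compute, which is exactly why acquiring labeled data, or learning to do without it, is such a big hurdle.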

The second is communal data. This one is tricky because of issues around privacy in many pure consumer use cases where machine learning and AI will be mainstream someday. The best way to learn from things like autonomous cars, or the data we generate on our notebooks, tablets, smartphones, smartwatches, etc., is to gather all the user data together and train the network from the community using the service. Obviously, people are sensitive to the idea their car is being tracked and data is collected, or that what they type on their keyboard is being tracked and sent to servers. However, the reality is the best way to train a network will be from its community of users, so dealing with this privately will be essential.

All three companies I mentioned take the needed steps to ensure privacy, and some take more precautions than others. But the techniques they employ are crucial for the end result to still be useful data to build better network models.

A recent blog post from Google highlights a few things they are doing that struck me as particularly interesting in both areas I mentioned above. Google is using an approach called federated learning to make communal training faster so users can enjoy the benefits quicker. This paragraph provides some brief detail:

It works like this: your device downloads the current model, improves it by learning from data on your phone, and then summarizes the changes as a small focused update. Only this update to the model is sent to the cloud, using encrypted communication, where it is immediately averaged with other user updates to improve the shared model. All the training data remains on your device, and no individual updates are stored in the cloud.
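To ground that description, here is a minimal sketch of the federated-averaging idea in Python (using NumPy). The model, the simulated devices, and every number in it are invented for illustration; this is not Google's implementation, which also compresses and encrypts the updates as described below.

import numpy as np

rng = np.random.default_rng(1)

def local_update(global_w, X, y, lr=0.1, steps=20):
    # Each device improves the shared model on its own private data...
    w = global_w.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w - global_w                        # ...and only this small delta leaves the device

def make_device_data(n=30, d=4):
    # Simulated private dataset for one device; labels come from a hidden rule.
    X = rng.normal(size=(n, d))
    y = (X @ np.array([1.0, -1.0, 0.5, 0.0]) > 0).astype(float)
    return X, y

devices = [make_device_data() for _ in range(5)]    # five devices, each with its own data
global_w = np.zeros(4)
for _ in range(10):                                 # ten rounds of federated averaging
    deltas = [local_update(global_w, X, y) for X, y in devices]
    global_w += np.mean(deltas, axis=0)             # the server averages updates, never raw data

print("shared model after 10 rounds:", np.round(global_w, 2))

The raw examples never leave the simulated devices; only the averaged weight updates reach the shared model, which is the core of the mechanism the quote describes.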

Google mentions a couple of interesting roadblocks they are looking to overcome, and both have to do with the speed of training. The primary one they are addressing is bandwidth and latency. The process they are employing is a cross between training on the device and then taking that training up to the cloud to better train the shared model. In order to do that over the slow and unpredictable upload speeds we are accustomed to, they compress the file, encrypt it, then uncompress it on the server side to allow the data to train the cloud network. The main goal is to quickly, and in near real-time, train the network using data collected on the device in use. This sentence nails what they are after and the hump they are trying to get over:

But in the Federated Learning setting, the data is distributed across millions of devices in a highly uneven fashion. In addition, these devices have significantly higher-latency, lower-throughput connections and are only intermittently available for training.

It really is an interesting approach, and I’ll be curious how Amazon or Microsoft respond, given they are not privy to as much end-user data as Google, particularly on-device smartphone data. Regardless, both those companies are working to create cloud platforms, similar to Google, with the goal of having businesses run their services on them, services consumers will then use on their smart devices.

Where’s Apple in This?
No doubt many of you are asking where Apple fits in all of this. Apple seems to be taking a different approach and that approach puts them in an interesting position. The first thing we have to realize is Apple does not sell a cloud computing platform to host and run backend cloud services and apps for businesses. So any approach Apple takes with machine learning will have to somehow play nicely with the backend machine learning systems a business uses on its cloud computing platform. Whether I develop apps, enterprise software, banking solutions, etc., I’ll be on either Amazon, Google, or Microsoft’s cloud. Except for China, where there are different players.

So Apple is in a unique position: they own a great deal of the customer experience for their customer base, they want to provide the value of machine learning and AI to their customers (which will show up mostly at the OS layer), and they have to work with the machine learning platforms their software ecosystem uses on the cloud backend.

Here is where I think Apple may have to move iOS to a bit more of a hybrid native and cloud OS. They will also have to make sure parts of Siri play nice with whatever backend machine learning tools their software and services ecosystem is using. Let’s use Netflix as an example. Netflix has a machine learning backend. They have chosen a specific machine learning tool set but want to make sure it gets implemented into their iOS app. Similarly, Apple will want to make sure Siri can tie into that data in order to let Siri help me find and control Netflix to get what I want to watch. Just saying Netflix can make a Siri app is not enough when they are using something like Amazon Lex or Google TensorFlow on the backend, since both of those are platform-specific toolsets. Apple will have to somehow let developers who host their solutions on someone else’s cloud computing platform work with Apple’s more localized platform. I’m not sure how they navigate this yet but it will be interesting to watch.

My main takeaway from all the discussions I have had with companies leading this space is we are certainly on the cusp of the cloud truly becoming the centralized computing platform. I expect we will see more of this computing done on the backend, with machine learning the real value driver, and operating systems will continue to become more tightly integrated with these cloud computing backends. How long this will take, I’m not sure, but advancements are coming more than once a year as companies race to own the cloud computing platform with machine learning, and eventually AI, as the central figures.

Maintaining the Mobile Payments Thesis

Mobile payments, Apple Pay in particular, have come under some criticism lately as reports indicate adoption is slower than some expected. A recent report by the Wall Street Journal cited estimates that 13% of iPhone owners globally have tried Apple Pay. I’m not going to split hairs on this number but data and research we have suggests a global number of 15%. I take both data points to be accurate and in the ballpark of global adoption of Apple Pay.

As with most reports, they are privy to the topline numbers but often lack the deeper context of full research data or market insight. What I won’t dispute is that adoption remains low due to reasons like lack of retailer support, consumer awareness, security concerns and, perhaps primarily, the weight of behavioral debt. The last point is made clear in a mobile payments study we did at Creative Strategies last fall where 40% of consumers stated their biggest barrier to adopting mobile payments is, “It is still easier to use my credit card.” As I’ve stated many times when outlining how new technologies are adopted, old habits die hard.

There are some deeper market insights we look at in order to see the true potential of a particular technology. One is satisfaction, an area where Apple Pay crushed the competition with a 92% satisfaction rate when its closest competitor, Android Pay, was at 66%. But, more importantly, we studied the sentiment of those using Apple Pay, Android Pay, and Samsung Pay. Overall, once people start using one of these solutions, they want to start using them everywhere. This is always a good sign for a feature. It creates such a positive experience you want to use it more and get frustrated when you can’t. A good feature raises your expectations of an experience. That is exactly what mobile payments do.

Mobile payments, and support for them at retail, are inevitable. It is simply a matter of time before you can pay with your smartphone everywhere you go. It will take time for consumers to embrace this but the research we have suggests that, once they do, it will become a habit quickly. While companies like Apple, banks, credit card companies, and retailers can all do a better job communicating the benefits and driving greater awareness of mobile payments, there is another angle I think may help — support on the web for things like Apple Pay.

I tweeted last night that Apple Pay is as magical on the web as it is at retail. While finding Apple Pay on the web is still rare, when it is found and you are looking at an item, I’d argue Apple Pay availability dramatically increases purchase conversions. I had a powerful experience last night: a friend on Twitter shared a t-shirt he had designed and the link to purchase it. From my iPhone, while lying in bed, I checked out the design and noticed Apple Pay was supported for the transaction. That single feature was the catalyst that got me to complete the purchase right then and there. The inclusion of Apple Pay made the transaction frictionless. I otherwise would not have spent the time to enter my credit card information and billing/shipping address from my iPhone to complete that purchase. If I wanted it badly enough, I would have gone back to my PC in a day or so, but it’s entirely possible I would have forgotten by then. So, in this case, Apple Pay support encouraged a spur of the moment purchase which may not have happened otherwise.

This experience combined two powerful things: a social recommendation and ease of transaction via Apple Pay. The social recommendation increased my interest in the product and the easy checkout with Apple Pay increased my willingness to complete the transaction. These will be critical parts of the mobile commerce era, which will be significantly more interesting and lead to a boom of new opportunities never found in the desktop e-commerce era.

Interestingly, I came across a report from CPC Strategy which did some research on Facebook ads and transaction completion. While 60% of people who use Facebook regularly had not clicked on an ad in the past 30 days, a surprising 33% had. That’s a surprisingly high click rate relative to industry norms. The stat I found more interesting was 26% of those who had clicked on an ad on Facebook completed a purchase. Again, a surprisingly high conversion rate.
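To put those two figures together, here is a quick back-of-the-envelope calculation in Python; it assumes the click and purchase percentages apply to the same population of regular users, which the report does not state explicitly.

# Implied end-to-end conversion from the CPC Strategy figures cited above.
click_rate = 0.33       # share of regular Facebook users who clicked an ad in the past 30 days
purchase_rate = 0.26    # share of those clickers who completed a purchase
end_to_end = click_rate * purchase_rate
print(f"implied share of regular users completing an ad-driven purchase: {end_to_end:.1%}")

Under those assumptions, roughly 8.6% of regular Facebook users completed an ad-driven purchase in a 30-day window, and that is the number integrated mobile payment support could push higher.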

While the report does not break out the percentage of these clicks which took place on mobile versus desktop, we know mobile usage of Facebook is significantly higher than desktop usage, so it’s safe to assume a good portion of those clicks and transactions took place via a mobile device. Part of my thesis on mobile commerce/payments includes integrating support for Apple Pay, Android Pay, Samsung Pay, or Amazon Payments into ads, which I believe can boost that 26% number even higher.

As more ads and commerce sites support mobile payment methods like Apple Pay, Samsung Pay, Android Pay, PayPal, and Amazon Payments, I’m convinced that, when they are paired with social recommendations, we will see conversion rates increase dramatically.

It truly is shocking how much friction has existed in e-commerce for so long. This is one of the main reasons e-commerce is still only 11-12% of US retail commerce annually. It has been stuck in this range for years. I’m convinced that, once technologies like Apple Pay, Android Pay, etc., become the standards, e-commerce’s percentage of retail will grow radically and quickly.

In case you are interested, here is the direct link to the Facebook ads study I mentioned.

Computing and Eliminating Complexity

As I look at the last thirty years of the technology industry, a few important observations stand out. Allow me to share some visuals from a presentation I gave a few years ago at the Postmodern Computing Summit.

The first significant observation is that computers are getting smaller. The advancements being made in microprocessors are allowing us to take supercomputers from form factors that once fit in a closet to ones that now fit in our pockets and, in the future, will be worn on our bodies.

The second observation is what that computational curve of processing evolution enabled. Each step function in computational power brought with it a step function in ease of use and eliminated layers of complexity which existed previously.

In software, we went from text-based user interfaces to more visual ones and, each step of the way, computers were embraced by more and more people. Each step function enabled new step functions in scale, as easier user interfaces, smaller and more personal form factors, and lower prices combined into what I call “Computing’s S-curve”. This curve, and perhaps all the step functions I mentioned, culminated with the smartphone, which has brought computing to people who have never owned computers before. Now, roughly 2.5 billion people have pocket computers.

We remain on a journey to connect the unconnected. We still have roughly three billion people on the planet who have yet to get a smartphone. Many of those people can and will benefit from the value of the internet and instant communication. We still need to continue to make cellular plans more affordable for those making less than $5 a day. For this same group, we need to solve battery problems so they don’t have to charge their phones every day, because charging costs money they need for other things like food. Another key development needed to connect the unconnected is language support. This is where I think we are on the cusp of another elimination of complexity in computer user interfaces as we embrace voice-based UIs.

The Voice UI will play an important role in making computers even easier to use than they are today. While the screen is not going away, I do believe voice will become a key way to interact with our smart devices and get more value from them. In rural parts of the world where local languages are not yet supported (or where people may not be able to read even when they are), people will be able to speak to their devices. Machine learning advances will likely help us train machines to learn new languages which, in turn, will hopefully help our smart devices support hundreds of languages and dialects not yet supported.

Watch kids who can’t read yet use Siri or an Amazon Echo to search the internet, play videos, and more. Just as touch-based interfaces made it possible for kids to engage with computers, voice will open up even more possibilities and depth to their computing experience.

What’s interesting is voice will open up new computing possibilities for everyone, even people with a strong grasp of computers today. I’d posit that a normal human, not a hardcore techie or early adopter, often does not get the most out of the computers they have on their laps or in their pockets. It sounds crazy to say this but even our smartphones still overserve the basic needs of billions of people. But the question in my mind is, how can we get these people doing more with their smart devices? I believe voice-based interfaces will play a key role in that experience.

Imagine being able to use your voice while looking at a screen to edit video, photos, create visual representations of data and, more importantly, have the computer engage back with you to help you learn, discover, and do more.

The next step function of computing, which will eliminate another layer of complexity, will come as we build computers that can see and hear. We are in the early stages of this right now as the machine learning era is taking off. That will lead us to have some of the most powerful and interactive computers with us at all times as nearly everyone on the planet will have an incredibly smart personal assistant.

Welcome to the Think.Tank

As you may have noticed, our site has a shiny new design as well as a new brand for our subscriber service called The Think.Tank. There are a few changes I want to make sure are understood about what to expect from the Think.Tank and the new dashboard.

First, the biggest change coming to the Think.Tank is the new blog. While we will still publish featured articles daily, we also wanted the opportunity to comment on more of the breaking news and add our perspectives and insights on the same day they happen. The best way to understand this change is that previously our site was designed around the idea of just one subscriber article a day. We wanted to move away from that model, so we were not limited to just one subscriber post a day. The new blog is where additional daily content will be published.

We have also added a new research section, with posts that contain more data or research. We will continue to add to this section as new information crosses our desks or we find public data worth commenting on.

Also featured on the dashboard is the Friday News Roundup and, coming soon, a section where we will explore more media, such as exclusive video analysis for our subscribers.

Finally, we have a new feature called Inbox. The goal of Inbox is to offer an informal Q&A for subscribers to ask questions of our writers. Some of these questions may inspire blog posts, featured articles, or simply get a burning question answered for a reader. You will notice the comments section is now gone from Tech.pinions. An additional goal with the Inbox is to continue to give readers a public channel to engage with our writers.

Apple’s Semiconductor Prowess

Apple has been taking great strides to control and design more of their own proprietary semiconductor components. One piece many of us believed was on the horizon was the GPU. Apple is a stakeholder in Imagination Technologies and has used their GPU architecture for some time. Earlier today, Imagination sent out a press release informing the world, and specifically their investors, that Apple will no longer be licensing their GPU IP and will instead be going to market with their own proprietary GPU. There are a few interesting angles to explore with this new information.

First, some have suggested this is a ploy by Apple to drive down Imagination Tech’s stock price in order to then pursue a buyout. While there is certainly a chance this is a clever strategic move, I tend to think it ranks much lower on the list of likely scenarios. One main reason is it’s risky and there is no guarantee the strategy works when you have a number of other tech companies like Microsoft, Google, Amazon, etc., who would also be very interested in pursuing a buyout — Apple would not be the only one at the negotiating table.

Imagination wanted to get out in front of this situation with the press release and worded it quite specifically to attempt to appease investor concerns. I read a lot of buy-side analyst commentary on Imagination and the narrative that Apple would drop them as an IP licenser was foremost among investor concerns. Imagination knew this and knew that losing Apple would negatively impact their stock. That is indeed what happened.

This part of the Imagination statement is one worth digging into:

Apple has not presented any evidence to substantiate its assertion that it will no longer require Imagination’s technology, without violating Imagination’s patents, intellectual property and confidential information. This evidence has been requested by Imagination but Apple has declined to provide it.

Further, Imagination believes that it would be extremely challenging to design a brand new GPU architecture from basics without infringing its intellectual property rights, accordingly Imagination does not accept Apple’s assertions.

Imagination wants to give investors a sense that, even if Apple is going down this route, they will ultimately still need some Imagination IP in the end. It is certainly true that building a GPU from the ground up will likely require someone’s IP in some way, unless of course Apple acquired the IP they need via some other acquisition they have made. Another possible scenario is Apple has determined that ARM’s graphics solution, named Mali, is now good enough that they will start customizing their GPU using the Mali IP under the architectural license they already have with ARM.

What many people don’t realize about Apple’s solution with Imagination’s GPU IP is they have been doing a great deal of customization of the Imagination GPU IP to fit their own needs. Apple does not use the generic, off-the-shelf GPU solutions from Imagination but instead customizes it heavily. The work they do to customize their GPU around the Imagination IP is not that different than the work they do to develop their own custom ARM solution using the ARM architectural license.

The folks at AnandTech emphasize this point in an article they wrote today about this news:

Previous to this, what little we knew of Apple’s development process was that they were taking a sort of hybrid approach in GPU development, designing GPUs based on Imagination’s core architecture, but increasingly divergent/customized from Imagination’s own designs. The resulting GPUs weren’t just stock Imagination designs – and this is why we’ve stopped naming them as such – but to the best of our knowledge, they also weren’t new designs built from the ground up.

I concur with their assessment that Apple had not built new designs from the ground up but had been customizing the designs under the license agreement they had with Imagination.

It is unlikely Apple is starting from scratch. However, it is unclear what base IP they will use for this solution. We won’t know until Apple comes to market with the new designs and even then it may not be clear. But if anyone can make this switch and support it with software development APIs and build a base for the future software and services, it is Apple.

Why Design a GPU?
I briefly want to touch on why this is important and the main reason why it was predictable. The GPU is actually more important than the CPU for where we are headed in the future. In fact, an underappreciated observation of the last five years in CPU/GPU architecture design is the increasing amount of space the GPU has been taking on the SoC every year. More and more software is leveraging the GPU and, as our core computing experiences become more visual, the GPU becomes even more important.

The GPU is the heart of not just graphics but advances in imaging technology, visual computing, recognizing images and pictures, machine learning, and AI, to name a few. The GPU sits at the heart of all the things the industry is getting excited about with the future of technology. The GPU is essential in autonomous driving as well.

Apple is putting themselves in a position to own their GPU solution and, as a result, not be beholden to the design direction of a third party whom they may not have total influence over. This puts Apple in an incredibly strong position strategically to control their own destiny in hardware, software, and services — even more than they do today.

Everyone in the semiconductor industry applauds the quality of Apple’s CPU architecture, and Apple has established themselves among the leaders in CPU design. Given the leading-edge solutions they have developed and the significant hardware and software advantage that has come from designing just the CPU, imagine what they will be able to do when they also fully design the GPU.

The Flaw in Tech Companies’ AI Strategy

There is a lot of talk about artificial intelligence; sadly, not a lot of substance. We are in such early days of AI that I prefer to talk about what is happening in machine learning since that is the current stage of the AI future we are in. We are currently trying to teach computers to see, hear, learn, and more. Right now, that is the focus of every major effort that will someday be considered AI. When I think about how tech companies will progress in this area, I think about it from the standpoint of what data they have access to. In reality, data is the foundation of machine learning and thus the foundation for the future of AI. I fully expect many companies to turn data into intelligence and use what they have collected to teach machines. There may very well be a plethora of specialized artificial intelligence engines for things like commerce, banking, oil and gas, astronomy, science, etc., but the real question in my mind is who is in the best position to develop a specialized AI assistant tuned to me.

While several of the tech companies I’m going to mention may not be focused on personal AI, I’m going to make some points within the lens of the goal of personal AI vs. a general purpose AI. The question is, who is developing Tony Stark’s version of Jarvis for individual consumers? The ultimate computing assistant designed to learn, adapt, and augment all of our weaknesses as humans and bring new levels of computational capabilities to the forefront for its user.

With the assumption that Facebook, Amazon, Google, Microsoft, and Apple are trying to build highly personalized agents, I want to look at the flaws and challenges each of them face in the present day.

Facebook
Facebook no doubt wants to be the personal assistant at all levels for consumers. However, like all the companies I’m going to mention, they have a data problem. This problem does not mean they don’t have a lot of data — quite the contrary. Facebook has a tremendous amount of data. However, they have a lot of the wrong kind of data to deliver a highly personalized artificial assistant for every area of your life.

Facebook’s dilemma is they see only the person the consumer wants them to see. The data shared on Facebook by a user is often not the full picture of that person. It is often a facade or a highly curated version of one’s self. You present yourself on Facebook the way you want to be perceived and do not share all the deep personal issues, preferences, problems or truly intimate aspects of your life. Facebook sees and is learning about the facade and not the true person behind the highly curated image presented on Facebook.

We share with Facebook only what we want others to see and that means Facebook is only seeing part of us and not the whole picture. Certainly not the kind of data that helps create a truly personalized AI agent.

Amazon
I remain convinced Amazon is one of the more serious players in the AI field and potentially in a strong position to compete for the job of being my personal assistant. Amazon’s challenge is that it is commonly a shared service. More often than not, people share an Amazon Prime account, or an Amazon account in general, across their family. So Amazon sees a great deal of my family’s commerce data. However, it has no idea if it is me, my wife, or my kids making a given transaction. This is made blatantly clear to me when I’m browsing Facebook or some other site that is an Amazon affiliate and I see personal hygiene and cosmetic ads for items my wife has searched for on Amazon. Nothing like killing time on Facebook and seeing ads for Snail and Bee facial masks presented to me in every way possible.

While Amazon, with their Alexa assistant, is competing to be the AI agent in my life, it has no idea how to distinguish me from the other people who share my Amazon account. That makes it very hard for Amazon to build a personalized agent just for me: it observes and learns from the vast data set of my household’s shopping but does not know what I’m shopping for versus what my family is shopping for. The shared dynamic of the data Amazon is getting makes it hard for them to truly compete for the personal AI. However, it does put them in a good position to compete for the family or group AI rather than the individual one.

Google
Google is an interesting one. Billions of people use Google’s search engine every day, but the key question remains, how much can you learn about a person from their search query? You can certainly get a glimpse into the context and interest at any given time by someone who is running a query and, if you keep building a profile of that person from their searches then, over time, it is certainly possible to get a surface level understanding. But I’m not sure you can know a person intimately from their searches.

No doubt, Google is building a knowledge profile of its users on more than just their search queries as you use more of Google’s services. Places you go if you use Maps. Conversations you have if you use their messaging apps and email, etc. No doubt, the more Google services you use, the more Google can know and learn about you. The challenge is that many consumers do not fully and extensively use all of Google’s services. So Google is also seeing only a partial portrait of a person and not the entirety, which is necessary to develop a truly personal and intimate AI agent.

Microsoft
Microsoft is in an interesting position because they, like Google and Apple, own an operating system hundreds of millions of people use on a daily basis for hours on end. However, I would argue the position Microsoft is in is to learn about your work self, not so much your personal self. Because they are only relevant, from an OS and machine learning standpoint, on the desktop and laptop, they are stuck learning mostly, and in many cases only, about your work self. Indeed, this is incredibly valuable in itself and Microsoft is in a position to develop an AI designed to help you be productive and get more work done in an efficient manner. The challenge for Microsoft is to learn more about the personal side of one’s life when all they will see and learn from is the work side.

Apple
Lastly, we turn to Apple. On paper, Apple is in one of the best positions to develop an agent like Siri to fully know all the intimate dynamics of those who use Apple devices. Unlike Google, it is more common for consumers to use the full suite of Apple’s services from Maps, to email, to cloud storage and sync, to music, to photos, etc. However, Apple’s stance to champion consumer privacy has put them in a position to willingly and purposely collect less data rather than more.

If data is the backbone of creating a useful AI agent designed to know you and help you in all circumstances of your life, then the more it knows about you the better. Apple seems to want to grab as little data as possible, with the added dynamic of anonymizing that data so they don’t truly know it’s you, in order to err on the side of privacy.

I have no problem with these goals but I am worried Apple’s stance puts them in a compromised position to truly get the data they need to make better products and services.

In each of these cases, all the main tech companies have flaws in their grand AI strategy. Now, we certainly have many years until AI becomes a reality but the way I’m analyzing the potential winners and losers today is on the basis of the data they have on their customers in order to build a true personal assistant that adds value at every corner of your life. While many companies are well positioned, there remain significant holes in their strategy.

Unpacking This Week’s News – Friday March 31st, 2017

Microsoft Announces Windows 10 Creators Update Release Date – by Bob O’Donnell

The latest version of Windows 10—the Creators Update—will be officially released on April 11, according to a Microsoft announcement this week. Apparently, Windows Insiders can start to get early builds now and you’ll be able to manually download the update as of April 5. The new update is meant to be more of an incremental one, though it offers several new features and capabilities Windows users will appreciate.

From a fun, creative perspective, Windows 10 Creators Update includes a new 3D version of Paint which allows you to easily create 3D objects that, eventually, will be usable in augmented reality-type environments. Unfortunately, some of the promised new 3D, VR, and AR capabilities originally touted for this new version won’t be appearing for a while, including 3D support in Office and the low-cost (starting at $299) VR-capable headsets due from major PC players including Lenovo, HP, Dell, Acer, and Asus. Those headsets are now expected in the second half of the year. I’m guessing we’ll see some additional 3D, VR, and AR-focused software updates happening then as well.

For gamers, there are a number of new enhancements, including a built-in game broadcasting feature called Beam that will tap into the increasingly popular interest in live game streaming and chat functions. In addition, there’s a new Game Mode designed to optimize PC game performance.

On the browser front, Microsoft is including an updated version of their Edge browser that incorporates numerous security and anti-phishing features. In addition, with a newer generation PC, Edge will support 4K streaming from content sites like Netflix and Amazon Prime (with a 4K monitor, of course).

The new version of Windows 10 also has enhancements to privacy and security settings, an improved setup process, and more parental controls. One interesting new addition to multi-device security models is the ability to tie any smartphone—Android-based, iOS-based or Windows-based—to your PC and, when you walk away, it will automatically lock the PC. I expect to see more multi-device, multi-OS functionality coming in later versions of Windows (and, frankly, all major operating systems), but this is an important step in the right direction.

Overall, the Creators Update won’t dramatically drive new upgrades to Windows 10 from existing Windows 7 or 8 users, nor will it likely drive cross-platform switchers. It will, however, reassure Windows 10 users Microsoft is looking out for their interests and continuing to make updates that will keep their OS of choice fresh and new.

Apple Hires Boost Services Effort – by Ben Bajarin

The Information broke a story about a new hire at Apple, a former YouTube exec, who is coming in to lead their services efforts under Eddy Cue. A short snippet from the post:

Apple has hired Shiva Rajaraman, a veteran product manager from YouTube and Spotify, to help shape the company’s video strategy, according to three people briefed on the hiring. Mr. Rajaraman is the latest in a string of outsiders Apple has hired in recent months to strengthen its position in video, where it lags rivals like Amazon.com and Netflix.

Mr. Rajaraman will also be working on the look and feel of Apple Music, Spotify’s top rival, as well as on Apple’s other media products, including news and books. He’ll report to Eddy Cue, Apple’s senior vice president of Internet software and services.

While this sounds like it has a video angle, it looks more like a larger services push across the diverse set of Apple offerings. That being said, Apple’s video efforts are still the most unsettled and in need of some form of strategy going forward. This move is encouraging as the services equation becomes more important to Apple’s overall business. There can only be upside to adding a seasoned person with expertise in cloud services to the Apple team.

Microsoft Stores Selling Customized Editions of Samsung Galaxy S8/S8+ – By Carolina Milanesi

A day after the launch of the Galaxy S8/S8+, veteran ZDNet reporter Mary Jo Foley reported Microsoft had confirmed to her their stores will be selling customized versions of Samsung’s new flagship phones. Mary Jo was told directly by Microsoft the phones sold in the stores will be activated right at the time of purchase, connected to wifi, and automatically populated with Microsoft apps, including Office, OneDrive, Cortana, Outlook, and others.

Considering the Microsoft Edition devices will retail for $749.99 and $849.99 for the Galaxy S8 and S8+, respectively, I very much doubt they will generate many extra sales for Samsung. Yet, the announcement is significant because it provides a platform for Samsung to show its newly launched DeX solution that allows Galaxy S8/S8+ to be docked and run Office as well as Android apps on a larger screen.

For enterprise, the partnership gives weight to the solution which might be interesting for organizations who have a highly mobile workforce and who are looking at lowering management costs of multiple devices while not compromising productivity on the go.

For Microsoft, DeX offers a good way to keep users engaged with Office and other apps on the go in those enterprises that are not addressing mobility needs by issuing tablets or 2-in-1s to their workforce.

DeX is not a desktop or even a laptop replacement, so there is only a limited impact on other Microsoft hardware partners. This is really a solution for those users that currently would not take a laptop with them but would instead struggle working from their phones or postpone their work till they got to their PC.

All in all, this seems like a win-win for everyone involved.

The Wearable Shift and the Future of IoT

At Baselworld this year, an interesting shift in the world of watches became clear: hybrid smart watches — watches that are analog by design but include sensors for basic fitness and movement tracking. These watches keep their iconic design but add some new features to display movement. A good example is the latest release from Garmin, called the Vivomove, which displays steps and move goal in bars on the side of the analog display.

These analog watches are being positioned as the best of both worlds: keeping the timeless design of analog for customers who are not ready for true “smart watches” while adding the digital benefits of fitness and health tracking. I believe we will see more and more experimentation with hybrid smart watches, perhaps even from Apple, as it becomes possible to have a glass digital screen overlay an analog design. This would let you keep your analog watch design but have the glass display go full digital for many of the smart watch features of the Apple Watch. This technology has been demonstrated but is years away from being commercially affordable in a small form factor. Examples of this hybrid glass and digital display are mostly seen in high-end retail today like this.

It is certainly one of the more interesting technologies that will come to all types of products that use glass, once a full digital display can be embedded into the glass itself.

The examples set by these hybrid smart watches provide a bridge to the future for the large percentage of the population not ready to go full digital in every aspect of their life. This hybrid approach, mixing analog and digital, is something I think we will see move to all forms of smart devices, not just smart watches. In particular, to all the things we label IoT.

It really is only a matter of time before many of our common objects become smart, but it is likely to happen without them going fully digital. When you buy a new crockpot, it may be IoT-connected but still have an analog display and analog buttons. The same goes for your next coffee pot, refrigerator, or oven. There is a great deal of value in having an appliance IoT-connected for things like diagnostics, remote accessibility, remote monitoring and control, and more. But adding smarts to many of these objects can still happen in a hybrid fashion, giving folks the same familiar experience of analog products while adding the benefits and features of digital.

This approach makes sense, given we are in a period of transition from an analog to a digital world. Generations grew up analog and will not immediately adopt full digital solutions for many of the everyday objects they know and love. There is no question the generations who grew up digital will adopt pure digital smart devices as they buy new products in the future. What will be interesting to see is how the non-digital generations start to embrace more digital solutions once they see the value. For example, with the hybrid smart watches, it will be interesting to see if the demographic that buys these gets acquainted with, and starts to appreciate, the digital functions and then moves to more digital smart watches in the future.

The hybrid approach makes sense for the world we live in today and we will see these hybrid solutions set the stage for the pure digital ones in the future. These solutions are the right baby steps to take to bring the value of digital technology broadly to groups of people who are a bit more resistant to change and prefer familiarity. This is a great balance and I believe we will see a lot more experimentation in the coming years.

An Important Step Forward in Apple’s Strategy

As I was thinking broadly about Apple’s pricing shift with the iPad, it became clear there is an important strategy coming into play for Apple. Any good analysis of Apple notes their value is in the ecosystem, the comprehensive whole of their offerings, not just one product, feature, or service. The more Apple products and services you use together, the better the whole experience gets. For Apple, the value runs deeper than just multiple hardware sales per customer. It’s also in understanding that customers who have more than one Apple product also spend more on apps and services and thus are higher-ARPU customers. While I don’t have the specific data to validate this, I’m confident owners of multiple Apple products are also more loyal than customers with only one Apple product. The key point is Apple’s market position strengthens when customers go deeper into their ecosystem.

With that in mind, it is now important to note that, according to my model, approximately 55-60% of Apple’s customer base is iPhone-only, meaning less than half of Apple’s total device installed base has more than one Apple product. The iPhone is the entry point (it has the largest installed base of any Apple product by far) and Apple’s strategy must include leveraging that relationship with an iPhone owner to get them deeper into the ecosystem by adding more Apple devices to their lives. This is where their pricing strategy comes into play.

Neil Cybart from Above Avalon outlined the fascinating pricing strategy by Apple in this post. He expands on the observation that Apple tends to be more aggressive on price in certain areas — Apple Watch, AirPods, even iPod — when looking at it historically. This view stands in conflict with the general narrative that Apple’s product prices are always “expensive” or “high-end”. Furthermore, Ben Thompson expands on this pricing tactic with this sound observation:

…Apple is far more aggressive with pricing in these non-essential product categories: of course the company wants to provide a superior user experience and confer status, but it also wants to convince people to buy into the category in the first place. To put it in pricing terms, I strongly suspect the degree of price elasticity for a product is inversely correlated to the necessity of said product (that is the less you need a product the more sensitive you are to price).

Reflecting on both their points and looking back historically at how Apple operates on pricing strategy, it does seem clear there is a difference between how they price products they view as core or essential, like the iPhone, and ones that are not. This pricing strategy is designed to get at the core of what I outlined: Apple wants to make it easy for customers to add more Apple products to their portfolio. If we follow this logic, Apple views the iPhone as essential and things like iPad, AirPods, Apple Watch, Apple TV, etc., as accessories. Sounds logical — until I start to think about the Mac.
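For readers who want the mechanics behind that observation, here is the standard textbook definition of price elasticity of demand (my own illustration, not a formula from either post):

$$ E_d = \frac{\%\,\Delta Q}{\%\,\Delta P} = \frac{\Delta Q / Q}{\Delta P / P} $$

The claim, roughly, is that the magnitude of $E_d$ tends to be larger the less essential a product is. A hypothetical 10% price cut on an accessory like AirPods would move unit demand proportionally more than the same cut on something customers feel they must own, like the iPhone, which is why aggressive pricing does the most work in the non-essential categories.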

Is the Mac Essential or an Accessory?
The biggest question in my mind about Apple’s pricing strategy relates to the Mac and whether the Mac is an essential core product or an accessory. There are sound arguments both ways. We can certainly argue the Mac is more like the iPhone, an essential product, and I’d bet the ~100 million Mac owners out there would agree. Apple’s pricing of the Mac is more like the iPhone’s, sitting roughly 2x above the ASP of the Windows PC market. Yet another argument can be made that Apple’s posture with the Mac treats it more as a niche or non-essential product not everyone needs.

Steve Jobs’ own comments — that the laptop/desktop form factor is a truck, a specialized category — would seem to support this view as well. Yet reconciling Apple’s Mac pricing with the pricing strategy of their other categories remains a question mark under the essential vs. non-essential pricing concept. Obviously, the Mac is one of the more expensive products for Apple to make, and that is a fundamental constraint. However, that does not mean Apple couldn’t create a lower-cost, entry-level Mac to entice more customers into adding a Mac to their portfolio of products. This is exactly what I’m going to argue they should do.

The shift in Apple’s iPad pricing suggests to me they understand the iPad may not fully be the PC replacement Apple hoped it would be. While they are advertising the iPad Pro more aggressively with that goal, we will have to wait until the end of the year to see if these new campaigns have any impact. My gut tells me they won’t and I truly want my gut to be wrong because I believe so much in the iPad. However, nearly every data point we have from our research overwhelmingly confirms the strength of the notebook/desktop form factor for consumers. For this reason, I believe Apple should start to think about Mac pricing strategy the same way they do with the iPad and other products complementary to the iPhone. In fact, I’d argue the combination of Mac + iPhone is stronger in value than that of iPad + iPhone. Clearly, all three together are the trifecta of computing experiences.

I’ve long argued Apple could easily jump to 10-15% share of the PC market with a simple pricing strategy for the Mac that included a price point of around $799. For example, take the current MacBook Air and offer it at $799 and I have a strong suspicion they could dramatically take share and impact the similarly priced Windows PC category. Better yet, add a Retina display to the MacBook Air, update the specs to modern components, keep it at $799 or even $899, and Apple would dramatically alter the competitive landscape of what we are seeing happen with Windows PCs and gain significant share of worldwide notebook sales.

Interestingly, even the iPhone has more approachable entry-level products; the Mac is the only line that doesn’t. Again, my conviction comes from a great deal of data over the last year or so which overwhelmingly confirms the importance of the traditional notebook form factor. If Apple were to offer an aggressively priced entry-level Mac, I’m confident this product would only strengthen their ability to attack the ~60% of their base who only have an iPhone.

Apple may have thought iPad was the way to do this and that may be true. However, adding a similar strategy with Mac pricing will only help their efforts that much more.

Three Millennial Tech Myths Busted

A core thesis we have about the future of technology here at Creative Strategies centers on a younger demographic. Because of that, much of our continued research on the industry leads us to do dedicated studies of the millennial demographic to help us understand the unique role technology plays for this cohort. We recently completed a study of the 18-24-year-old millennial segment spanning hardware preferences, software behavior, collaboration techniques, communication techniques, and more. This group is largely still in college and about to enter the workforce with an established set of collaboration and cloud-based workflows. An essential part of our study was to understand how this demographic is using the combination of hardware, software, and cloud services to be productive.

As part of our study, we discovered some interesting data which busts many myths associated with this demographic. For reference, the study was taken by 1,446 respondents within the millennial demographic, over 90% of whom are 18-24, and it spans more than 40 college campuses.

Myth #1: Millennials are Done with Facebook
Perhaps one of the most popular myths is millennials don’t use Facebook anymore or, if they do, it is not central to their social media usage or an app that gets used daily. We asked millennials which apps they use on a daily basis. To our surprise, Facebook is still king. 89.35% of millennials still use Facebook on a daily basis. This percentage was the highest of all the apps we tested. Next on the list was Snapchat with 76.36% using it daily followed by Instagram at 73.79%.

While all current data we have suggests engagement time may indeed favor things like Instagram and Snapchat over Facebook, there is no doubt millennials still have Facebook as a daily part of their behavior. The more we study how millennials and even Gen Z use different social networks, the more we observe that each seems to serve a purpose. None appear to replace each other entirely, but they all offer something a little different. This demographic has no problem juggling them effectively for their needs.

Myth #2: The PC is Dead to Millennials
Perhaps the most interesting hardware discovery our study made was how important the PC still is to this demographic. Through a variety of questions and behavior scenarios we tested, we came to the realization the PC is still the form factor this demographic uses and prefers to get “real” work done. While this demographic is certainly the most comfortable using their smartphone to do things that classify as “work”, more so than older demographics, they still prefer their notebooks for a variety of productivity, creativity, and entertainment use cases.

One of our questions tested a specific scenario to understand how they might weigh hardware preferences in a particular situation. We presented them with a scenario where they were going on a trip and were going to be working on a project while they were traveling. On this trip, they could only take one device for all their needs. We asked them to choose whether they would take their notebook, smartphone, or tablet. We were certain it was a no-brainer and the majority would want their smartphone. To our surprise, 42.46% said they would choose their notebook. The smartphone barely beat the notebook at 42.92%.

The scenarios we tested showed the strength of the laptop form factor when any level of “work” or “school project” was involved. Based on many of the write-in comments about why they chose the device they did, it was clear that, had there not been work involved, there would have been no contest: the smartphone would have been the clear winner.

Myth #3: Face-to-Face Meetings are not Desirable
The strength of this myth is questionable but I hear it frequently from senior managers at large corporations. Many Silicon Valley tech companies which employ large numbers of millennials also note how prevalent video conferencing has become with this generation. The perception is that this demographic views face-to-face meetings with suspicion and feels they are a waste of time. However, our study shows they still view face-to-face meetings as the most efficient way to collaborate.

We examined millennials’ preferred collaboration methods at different stages of a project and found face-to-face meetings were viewed as the most useful and preferred for both the planning/brainstorming part of the project and the check-up/review stages. Collaborating through things like Google Docs or a messaging client like iMessage was sufficient to keep making progress. However, when it mattered at critical stages, nothing replaced a good old-fashioned meeting, even with millennials.

The more we study different demographics, the more we see quite distinct behavior patterns depending on life stage. Most of the “myths” I’ve heard are observations of either young millennials or Gen Z, who have much more time on their hands. The contrast is quite stark once you observe millennials in college, entering the workforce, or in their late 20s, already working and starting a family. Technology remains a constant at all stages; it is almost always the answer to the problems or challenges this demographic faces. However, the ways it is implemented and used may vary widely by life stage, and that may always be the case.