A Demo is not a Product

Many of us immersed in the world of consumer tech become quite excited when we see something new for the first time. Our imagination immediately races ahead to try to understand how we’ll use it and what products we’ll buy.

But our imagination is rarely tempered by the time it actually takes to turn a new technology into a product. We get ahead of ourselves with predictions about the impact the technology will have and how it will change our lives. Yet in all my experience, it always takes much longer than expected.

Our excitement often leads to unreasonable expectations, impatience, and disappointment once the product finally arrives. The product is often less than we expected, and it may take several iterations before it meets those expectations.

The time it takes for a new technology concept to become mainstream is measured in years or decades, rarely in months. Many things need to occur. There’s the time needed for development of the product, the time it takes to create awareness in the market, and the time for people to realize they have a need. Even then, a buying decision can take years more.

The world is not composed of people like us, early adopters who can’t wait for the next new thing. Most people can wait and usually do, sometimes for years. There are many reasons for this, from not understanding the new technology, to being cautious and skeptical about its value, to being intimidated, unable to afford it, or simply not caring.

The table below shows just how long it took with other products. The CD player and VCR, for example, each took ten years to reach fifty percent penetration of US homes.

We can argue that with social media, the speed of information, and a technically more proficient population, adoption might move faster today. But our expectations are now higher, we’re more skeptical, and it often takes more to impress.

That hasn’t stopped companies’ efforts to get us excited about their new tech. We’re being inundated with news every day. Examples are self-driving cars – even some that fly, artificial intelligence, and virtual and augmented reality.

Much of the news is promoted by the companies themselves to raise investments, increase their valuation, or to scare away their competitors, all while exaggerating the time to commercialization.

Just last week Uber announced an investment in a company developing flying cars. It played well on the national news, which quoted a company official saying Uber would roll out a network of flying cars in Dallas by 2020. Last year Uber said it was already deploying self-driving cars when, in fact, it still has one or two employees in each car. Two years ago, Amazon demonstrated drone delivery. Yet these technologies are still years away.

Today it’s hard to open Facebook or a technology blog without seeing examples of virtual and augmented reality. We’re seeing demos from scores of companies around the world, each vying for moments of fame. We see all sorts of clever uses of how this technology will help us in education, medicine, shopping, and computing as if it’s just around the corner. Yet much of this will evolve slowly and take years to be significant.

If the past is any indication, the first-generation products will not be commercially successful, but more of a proof of concept. No one will wear huge goggles outside of their home. Enabling technologies still need to be developed, including smaller components, miniaturized optics, and faster processors to enable these devices to be practical. More importantly, new tools and an infrastructure are needed for creating affordable content.

Yes, we’ll see some small examples when we point our phone at a restaurant or a product and see reviews and can buy with a click. Tim Bajarin correctly pointed out in this piece that he doesn’t expect to see VR adopted widely for at least 5-7 years.

The point here is not to be discouraging about innovation, but to realize that it’s a long and difficult road from a prototype or demo to a successful product. The idea is always the easy part.

Apple watchOS 4 brings Intelligence to the Wrist

There was a lot unveiled during the Apple WWDC keynote last week and, as was to be expected, some of the hotter, bigger products stole the limelight and relegated others to bit parts in the over two-hour-long production. watchOS 4 might not have seemed significant, especially to those who have so eagerly been calling Apple Watch a failure, but I saw it as one of the best examples of how Apple sees the future.

The wearables market remains challenging for most vendors. According to IDC, in the first quarter of 2017 Apple and Xiaomi shared the number one position with volumes of 3.6 million units each. While volumes are the same, it is when you look at average selling price (ASP) for these two brands that the real issue with the wearables market surfaces: Apple controls the high end of the market and Xiaomi the lower end. In between, Fitbit is losing ground and failing to move its ASP up.

Delivering clear value continues to be key in convincing consumers that wearables have a role to play, and for now that value for mainstream consumers remains health and fitness.

There is More Value in a Coach than a Tracker

Since Apple Watch Series 2, Apple has been focusing on fitness, and watchOS 4 builds on that by adding Workout app support for the highly popular high-intensity interval training, auto-sets for pool swim workouts, and the ability to switch among and combine multiple workout types.

Apple is also attempting to turn Apple Watch into more of an active coach than a simple tracker. This might seem like a subtle differentiation, but if implemented right it could actually drive engagement and loyalty. Tracking, while clearly useful, plays a more passive role, and one that some users might think could be taken on by other devices. Turning Apple Watch into more of a coach through daily inspiration, evening pushes, and monthly challenges deepens the relationship a user has with the device. Delivering suggestions on how to close the rings, praising the goals achieved thus far, and pushing users to achieve more can make them feel that Apple Watch is an active driver of their success, which in turn increases the value they see in it.

The new GymKit, which allows gym equipment to sync with Apple Watch, might take a while to materialize given the hardware updates required from key brands such as Life Fitness, Technogym, StairMaster, etc., but it makes sure Apple is not losing sight of critical data. Today, some users might rely on the gym equipment rather than their Apple Watch because of the duplication of functionality, which leaves Apple Watch missing out on valuable data to which Apple and other apps could otherwise have access. GymKit puts Apple Watch right at the center of our fitness regimen. Apple Watch talking to gym equipment via NFC also makes me believe more devices will come in the future. Think about having your gym membership card or your hotel room key on your watch rather than having to carry a physical card.

Reinforcing the Strong Pairing of Apple Watch + AirPods

I have talked before about the magic that Apple Watch + AirPods can deliver to users, and I remain a believer. In a similar way to HomePod, music on Apple Watch is the easiest way to appreciate Siri, as well as the combo with AirPods. With watchOS 4, Apple is making it simpler to get to the music you want for your workout thanks to new multi-playlist support and automatic import.

Apple also introduced the new Siri watch face, which makes Apple Watch much more context-aware by delivering information that is relevant to you at a specific moment in time. While Apple did not talk about it, one can see how the Siri watch face could integrate very well with voice when you are wearing AirPods. Siri could, for instance, tell you that you need to leave for your meeting while showing you the calendar appointment on Apple Watch.

So, as Apple Watch becomes more like a coach, Siri becomes a more visible but discreet assistant that is being liberated from the iPhone. I think this is a very powerful paradigm and, before naysayers jump in to point out that Apple Watch penetration is limited, I would underline that Apple Watch users are highly engaged in the Apple ecosystem and represent Siri’s best opportunity. Similar to CarPlay, Apple Watch also has a captive audience, not just for Siri’s brains but also for voice-first interaction. With Apple Watch, voice is the most natural form of interaction, especially when wearing AirPods. So much so that, with watchOS 4, SiriKit adds support for note-taking apps, so you can now use Siri on Apple Watch to make changes in any note-taking app.

Smarter Watch, Smarter Apps

Some Apple Watch critics have pointed to last month’s news that Google, Amazon, and eBay were killing support for their Apple Watch apps as evidence that Apple Watch has failed. The reality, however, as I have explained numerous times, is that Apple Watch cannot be seen as an iPhone on your wrist, and therefore its success will be neither driven nor defined by the same enablers.

Don’t get me wrong, there is certainly a place for apps to play, but context is going to be much more important than it has been so far on the iPhone or the iPad. This is why I believe Apple’s latest watchOS will help in making apps not just faster and smoother to run but much more relevant for users.

First, a single process now runs an app’s UI elements and code. This helps with speed and responsiveness, and developers do not need to change their code to benefit. Access to Core Bluetooth allows apps to bypass the iPhone and connect directly to Apple Watch, so that data is transmitted faster between Apple Watch and an accessory, for instance. Apple also increased the number of app categories that can run in background mode, such as navigation apps.

While it will be up to developers to think differently when it comes to delivering apps for Apple Watch, I believe Apple has given them a much easier tool set to succeed.

Apple Watch and its Role in Ambient Computing

HomePod was the sexy, hot product everybody paid attention to, and ambient computing is the buzzword of choice at the moment. Both are extremely relevant to how one should think about home, and even office, computing. It is easy for me to see the role Apple Watch can play in helping me navigate my ambient computing network in a personal and highly relevant way. It is early days, but Apple has laid the foundation!

Computing Evolves from Outside In to Inside Out

Sometimes, the most radical changes come from simply adjusting your perspective.

In the case of computing and the devices we spend so much of our time on, that perspective has almost always been from the outside, where we look into the digital world that smartphones, PCs, and other devices essentially create for our viewing pleasure.

But, we’re on the cusp of one of the most profound changes in how people interact with computers in some time. Why, you ask? Because now, those devices are incorporating data from the real world around us, enabling us to see an enhanced version of the outside world from the inside out. In a sense, we’re going from digital data inside to digitally enhanced reality on the outside.

The most obvious example of this phenomenon is augmented reality (AR), which can overlay internally created digital images onto our devices’ camera inputs from the real world and create a mixed reality combination. In truth, the computer vision technology at the heart of AR has applications in many other fields as well, notably autonomous driving, and all of them involve integrating real-world data into the digital domain, processing that data, and then generating real-world outcomes that we can physically see or otherwise experience. However, this phenomenon of inside-out computing goes way beyond that.

All the sensor data that devices are collecting from the simultaneously profound and meaningless concept of the Internet of Things (IoT) is giving us a whole new perspective on the world, our devices, and even the people around us. From accelerometers and gyroscopes in our smartphones, to microphones in our smart speakers, to vibration sensors on machines, there’s a staggering amount of data that’s being collected, analyzed, and then used to generate information and, in many cases, actions on our behalf.

The process basically involves measuring various aspects of the physical world, converting those measurements into data, computing results from that data, feeding those results into algorithms or other programs designed to react to them, and then generating the appropriate result or action.
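That measure-to-action loop can be sketched in a few lines of Python; the sensor reading, baseline, and threshold below are purely hypothetical stand-ins, not any real device’s API:

```python
# Minimal sketch of the inside-out computing loop: measure the physical
# world, convert the measurement into data, compute on it, then act.
# All values here are invented for illustration.

def read_vibration_sensor():
    """Stand-in for a physical measurement (e.g., machine vibration in mm/s)."""
    return 7.2

def to_feature(raw_reading, baseline=5.0):
    """Convert the raw measurement into data an algorithm can reason about."""
    return raw_reading / baseline  # ratio relative to a normal baseline

def decide(feature, alert_threshold=1.25):
    """A trivial 'algorithm': flag readings well above the baseline."""
    return "schedule_maintenance" if feature > alert_threshold else "no_action"

action = decide(to_feature(read_vibration_sensor()))
print(action)  # a reading well above baseline triggers an action
```

Real deployments replace each stage with something far richer (trained ML models instead of a fixed threshold, actuators instead of a print), but the measure, convert, compute, act shape stays the same.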

This is where several other key new concepts come together in this new inside-out view of computing. Specifically, machine learning (ML) and artificial intelligence (AI) are at the heart of many of these new data processing algorithms. Though there are many types of ML and AI, in many cases they are focused on finding patterns and other types of logical connections in the data.

In the real world, this means that these algorithms can do things like examine real-world images, our calendar, our documents, the music we listen to, etc., and convert that “input” into more meaningful and contextual information about the world around us. It helps determine, for example, where we should go, what we should eat, who we should meet—the permutations are staggering.

Most importantly, the real-world data that our devices can now collect or get access to can then be used to “train” these algorithms to learn about what we do, where we are, what we like, etc. At its heart, this is what the concept of ambient computing—which is essentially another way to talk about this inside-out computing model—is all about.

As different and distinct as the many technologies I’ve discussed may first appear, they all share this outward projection of computing into the real world. This is a profoundly different, profoundly more personal, and profoundly more valuable type of computing than we’ve ever had before. It’s what makes the future of computing and AI and IoT and AR and all of these components of “contextual computing” so exciting—and so scary.

Never before have we really seen or experienced this extension of the digital world into our analog lives as intensely as we are now starting to see. Sure, there have been a few aspects of it here or there in the past, but we’re clearly entering into a very different type of computing future that’s bound to give all of us a very different perspective.

AMD and Intel Race Towards High Core Count CPU Future

As we prepare for a surprisingly robust summer season of new hardware technologies to be released to the consumer, both Intel and AMD have moved in a direction that seems both inevitable and wildly premature. The announcement and pending introduction of high core count processors, those with many cores sharing each company’s most modern architecture and design, brings with it an interesting combination of opportunity and discussion. First and foremost, is there a legitimate need for this type of computing horsepower in this form factor, and second, is this something consumers will want to purchase?

To be clear, massive core count CPUs have existed for some time, but in the server and enterprise markets. Intel’s Xeon line has breached the 20-core count in previous generations, and if you want to dive into Xeon Phi, a chip that uses older, smaller cores, you will find options with over 70 cores. Important for applications that require a significant amount of multi-threading or virtualization, these chips were expensive. Very expensive, crossing the $9,000 mark.

What Intel and AMD have begun is a move to bring these high core count products to consumers at more reasonable price points. AMD announced Threadripper as part of its Ryzen brand at its financial analyst day, with core counts as high as 16 and thread counts of 32 thanks to SMT. Then at Computex in Taipei, Intel one-upped AMD with its intent to bring an 18-core/36-thread Skylake-X CPU to the new Core i9 lineup. Both are drastic increases over the current consumer landscape, which previously capped out at 10 cores for Intel and 8 cores for AMD.

Let’s first address the need for such a product in the world of computing today. Many workloads benefit from multi-threading, and consumers and prosumers who focus on video production, 3D rendering/modeling, and virtualization will find single-socket designs with 16 or 18 cores improve performance and scalability without forcing a move to a rackmount server infrastructure. Video encoding and transcoding has long been the flagship workload for demonstrating the power of many-core processors. AMD used it, along with 3D rendering workloads in applications like Blender, to demonstrate the advantages of its 8-core Ryzen 7 processors in the build-up to their release.

Other workloads, like general productivity applications, audio development, and even PC gaming, are impacted less by these massive core count increases. In fact, any application heavily dependent on single-threaded performance may see a decrease in overall performance on these processors as Intel and AMD adjust clock speeds down to fit the new parts into some semblance of a reasonable TDP.

The truth is that hardware and software are constantly in a circular pattern of development: one cannot be fully utilized without the other. For many years, consumer processors were stuck mostly in a quad-core rut, after an accelerated move to it from the single-core architecture days. The lack of higher core count processors has let software developers get lazy with code and design, letting the operating system handle the majority of threading operations. Once many-core designs are the norm, we should see software evolve to take advantage of them, much as we do in the graphics market, where higher performance GPUs push designers forward. This will lead to better utilization of the hardware being released this year and pave the road for better optimization for all application types and workloads.
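This hardware-software co-dependence is easy to see in a toy sketch: a CPU-bound job gains nothing from extra cores until the code explicitly spreads its work across them. The “frame encode” below is a hypothetical stand-in for real work such as video transcoding:

```python
# Toy illustration of many-core utilization: frames are only encoded in
# parallel because the code farms them out to a process pool; a naive
# single-threaded loop would leave the extra cores idle.
from concurrent.futures import ProcessPoolExecutor
import os

def encode_frame(frame_id):
    # Stand-in for a CPU-bound task; returns a fake per-frame checksum.
    return sum(i * frame_id for i in range(1000)) % 97

if __name__ == "__main__":
    frames = list(range(32))
    # The pool spreads frames across however many cores the machine exposes;
    # on a 16- or 18-core part, most of these frames encode concurrently.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        checksums = list(pool.map(encode_frame, frames))
    print(len(checksums))  # prints 32
```

This is also where the single-threaded caveat above shows up: if `encode_frame` were one indivisible task, the pool would buy nothing, and a lower-clocked many-core part would actually finish it slower.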

From a production standpoint, Intel has the upper hand, should it choose to utilize it. With a library of Xeon parts built for enterprise markets already slated for release this year and in 2018, the company could easily bring those parts to consumers as part of the X299 platform rollout. Pre-built, pre-designed, and pre-validated, the Xeon family was already being cannibalized for high-end consumer processors in previous generations, but Intel capped the migration in order to preserve the higher prices and margins of the Xeon portfolio. Even at $1,700 for the 10-core 6950X processor, Intel was discounting dramatically compared to the Xeon counterpart.

Similarly, AMD is utilizing its EPYC server product line for the Threadripper processors targeting the high-end consumer market. But AMD doesn’t have a large base of workstation or server customers to worry about cannibalizing. To AMD, a sale is a sale, and any Ryzen, Threadripper, or EPYC sold improves the company’s bottom line. It would surprise no one if AMD again took an aggressive stance on pricing its many-core consumer processors, allowing the workstation and consumer markets to blend at the top. Gaining market share has taken precedence over margins for AMD; that started as the initiative behind the Polaris GPU architecture, and I think it continues with Threadripper.

These platforms will need to prove their value in the face of dramatic platform requirements. Both processor vendors are going to ship their top-performing parts with a 165-watt TDP, nearly double that of the Ryzen and Kaby Lake desktop designs in the mainstream market. This requires added complexity for cooling and power delivery on the motherboard. Intel has muddied the waters on its offering by varying the number of PCI Express lanes available and offering a particular set of processors with just four cores, half the memory channels, and 16 lanes of PCIe, forcing platforms into convoluted solutions. AMD announced last week that all Threadripper processors will have the same 64 lanes of PCIe and quad-channel memory support, simplifying the infrastructure.

With that knowledge in place, is higher core count processing something the consumer has been asking for, or is it a solution without a problem? The truth is that desktop computers (and notebooks by association) have been stuck at 4 cores in the mainstream market for several years, and some would argue artificially so. Intel, without provocation from competent competing hardware from AMD, has seen little reason to lower margins for the sake of added performance and capability in its Core line. Even the HEDT market, commonly referred to by its E-series code names (Broadwell-E, Ivy Bridge-E, and now Skylake-X), was stagnant at 8 cores for longer than was likely necessary. The 10-core option Intel released last year seemed like an empty response, criticized as much for its price ($1,700) as it was praised for its multi-threaded performance.

AMD saw the opportunity and released Ryzen 7 to the market this year, at mainstream prices, with double the core count of Intel Core parts in the sub-$400 category. The result has been a waterfall effect that leads to where we are today.

Yes, consumers have been asking for higher core count processors at lower prices than are currently available. Now it seems they will have them, from both Intel and AMD. But pricing and performance will have the final say on which product line garners the most attention.

Podcast: Apple WWDC 2017

This week’s Tech.pinions podcast features Ben Bajarin, Carolina Milanesi and Bob O’Donnell discussing the many announcements from Apple’s Worldwide Developer Conference including new iPad and Mac hardware, ARKit, Siri, HomePod, iOS 11 and more.

Thanks again to our sponsor, Small.Chat.

You already chat with your team on Slack. Now with Smallchat, you can live chat with customers on your website all from a single Slack channel.

Smallchat harnesses the power of Slack’s customizable notifications and mobile apps to keep you connecting with your customers from any device.

Plus, Smallchat can be easily customized to match your unique brand.

Connect more directly, respond faster, and provide a better customer experience. Get Smallchat Pro for 2 months free by visiting small.chat/techpinions.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Apple Shows its AR Cards, Offers Developers Sizeable Opportunity

As expected, at WWDC this week Apple unveiled its plans to support augmented reality on iOS devices, providing developers with software tools to create AR apps through its new ARKit. As an analyst who’s been covering the AR space for several years, I was quite happy to see Apple not only enter the space but do so in an aggressive way by adding support for the technology in the upcoming iOS 11 and across a wide swath of existing devices.

Apple Brings Scale to AR
Apple isn’t the first major player to throw its hat into the mobile AR ring, but the scale of its developer community and the size of the supported hardware installed base means it has the potential to significantly accelerate the entire category. Apple didn’t announce iOS developer numbers at this year’s WWDC keynote, but at last year’s event CEO Tim Cook put the number of registered developers at roughly 13 million, and that number has undoubtedly increased since then.

Apple’s ARKit uses what the company calls visual-inertial odometry (VIO) to track the world around a device. VIO brings together camera sensor data with CoreMotion data collected by the device’s other sensors to understand how the device moves, allowing the software to place digital objects into the scene. Importantly, ARKit also uses the camera to estimate the amount of light in a scene and then applies the correct amount of light to the virtual object. This helps address one of the key complaints about AR so far: even realistic-looking digital objects look fake when they don’t have the same lighting as the real objects in the room. I had an opportunity to see ARKit in action shortly after the keynote and the visuals were suitably impressive. And these are early days; I’m convinced developers will do amazing things with the technology.
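The lighting-estimation idea is easy to make concrete in the abstract. The sketch below is not ARKit’s actual API (its function names and numbers are invented for illustration); it simply estimates the scene’s ambient brightness from camera samples, then scales a virtual object’s color to match:

```python
# Toy illustration of AR light estimation: scale a virtual object's base
# RGB color by the estimated ambient intensity of the real scene so the
# rendered object doesn't look pasted in. Not ARKit's API; illustrative only.

def estimate_ambient_intensity(camera_luma_samples):
    """Average camera brightness, normalized to the range 0.0-1.0."""
    return sum(camera_luma_samples) / (len(camera_luma_samples) * 255)

def shade(base_color, intensity):
    """Darken or brighten the virtual object's RGB color to match the scene."""
    return tuple(round(channel * intensity) for channel in base_color)

# A dim indoor scene: mid-gray camera luma samples out of 255.
intensity = estimate_ambient_intensity([96, 128, 112, 80])
print(shade((200, 180, 160), intensity))  # prints (82, 73, 65)
```

A real renderer applies this per light source and per material, but the principle, matching virtual shading to measured real-world light, is the one the estimate enables.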

Apple will roll out support for ARKit when it launches iOS 11 later this year. ARKit requires iOS hardware running either Apple’s A9 or A10 processors. On the iPhone side, this includes the 6S, 6S Plus, SE, 7, and 7 Plus. On the iPad side, this includes the recently launched iPad (2017) and all versions of the iPad Pro, including the first-generation 9.7- and 12.9-inch products and the new 10.5- and 12.9-inch products announced at WWDC. By not restricting support to just brand-new products, Apple has guaranteed developers a substantial opportunity numbering in the hundreds of millions right out of the gate.

Existing Mobile AR Developer Kits
Prior to Monday’s announcement, among the most-used software developer platforms for creating augmented reality experiences for mobile devices was PTC’s Vuforia. PTC estimates that there are 350,000 Vuforia developers actively creating content on its platform. Earlier this month, the company estimated that developers so far have created about 40,000 applications, used by both consumer and commercial end users.

Another primary software developer platform for mobile AR is Wikitude. In April, the company estimated that about 100,000 developers were using its platform to create AR apps. It estimated the total apps to be about 20,000 with close to 750 million installs.

Both Vuforia and Wikitude support developers creating apps for Apple’s iOS and Google’s Android. Vuforia also supports Microsoft’s Universal Windows Platform. It’s worth noting that they both also support head-mounted displays that support these operating systems, too. These products aren’t shipping in significant volumes, yet, but they’re clearly the end-game for apps designed for industry verticals and hands-free use.

Finally, as I’ve discussed before, Google has offered support for Mobile AR through its Android-based Tango platform. At present, there are just two announced Android phones that support Tango and one shipping into the market.

Apple’s entrance into AR will significantly increase the number of developers looking at the space. The benefit of ARKit over other platforms is that because Apple controls both the OS and the hardware, it can provide a more customized and optimized experience than other developer kits. That said, I’d be willing to bet that companies such as PTC and Wikitude don’t see Apple’s entrance as a bad thing, but as one that will effectively “raise all boats.” Another benefit of bringing in Apple’s huge developer base is that we’ll soon start to see even more unique ideas forming around the use of augmented reality. I would go so far as to say the true killer app for mobile AR doesn’t yet exist.

AR as Apple Hardware Refresh Driver
As noted, Apple is supporting ARKit on iOS 11 devices that support the A9 and A10 chips. While this includes a sizeable chunk of the iPhones in the world, it’s a much smaller subset of the iPads out there. At WWDC Apple announced a long list of iPad-specific updates to iOS that may well jumpstart interest in Apple’s tablet (and which merit a separate future column). Mobile AR, however, may end up being the new technology that drives a sizeable percentage of consumers to finally upgrade from their aging iPads.

Equally interesting is the question of whether Apple’s next-generation iPhones—expected later this year—will ship with additional AR capabilities beyond what’s available on today’s products. At present, ARKit is utilizing just the primary device camera and not the second camera currently available on the iPhone 7 Plus. It is reasonable to assume that future iterations of the phone will also have multiple cameras. Use of multiple cameras can lead to better AR experiences (Google Tango phones utilize three). Might Apple bifurcate the iOS AR experience later this year, potentially offering a better and best experience for buyers of its newest flagship phone?

The next few months, as the company heads into its fall iPhone announcements, will be crucial to Apple’s AR ambitions gaining traction with developers. I fully expect to hear Tim Cook announce during that event the number of AR-enabled apps already available in the iOS App Store. And the success of developers in creating AR apps for the iPhone and iPad will naturally lead to what logically comes next: a head-worn AR product from Apple.

Apple’s Very Different Approaches to VR and AR

This week saw the culmination of what I think of as developer season, as Apple held the last of the big developer conferences of the year, following earlier events from Facebook, Microsoft, and Google. Augmented and virtual reality were themes at each of those earlier conferences, as I outlined for subscribers on Monday, but Apple had been silent on each of these two big areas until now at its public events. Monday’s keynote, though, saw big announcements around both these areas, giving Apple a role in AR and VR, and yet its approach to these two markets is very different, and it’s worth looking at how and why.

The State of the Market

As I wrote on Monday, VR and AR were big themes at the other three big developer conferences, with more time devoted to those than any other topic at F8 and I/O and significant time given over to them at Build too. These are hot topics, as is the whole question of whether AR and VR are even the right names, with Microsoft preferring to talk about Mixed Reality (and in the process arguably sending some mixed messages), and Google’s VR lead talking about Immersive Computing, with both arguing that there’s a spectrum here.

The reality is that AR and VR as terms are the only ones many mainstream consumers can relate to, so for all the hand-wringing about terminology and the attempts to introduce new terms into the lexicon, those are the terms worth using. But even within each of those two overarching categories, there are at least two sub-categories each. On the VR side, the interface – a headset – is the same across the board, but there are significant differences in price and performance between the PC and console variants on the one hand and the mobile flavors on the other. On the AR side, meanwhile, the separation is the interface itself – headsets on the one hand, and smartphones on the other. The diagram below outlines how I think about these in terms of the short-to-medium term addressable markets (think 2-3 years out), and it’s worth noting the chart is not to scale but rather shows relative differences only:

To summarize, AR sits at the two ends of the spectrum today, while VR sits in the middle. Headset-based AR is in its infancy, with Microsoft’s HoloLens one of the few commercial products launched into the market, but very much a niche proposition and far from being a consumer product, while Magic Leap’s technology may well be the first significant consumer-grade product later this year. In the VR realm, console and PC-based VR provides the best and most powerful experiences today, but is tied to the relatively small installed bases of high-end gaming PCs and consoles. As such, Sony is the market leader with the one million sales it announced this week, but that’s far fewer than the 5 million Gear VR headsets Samsung said it had sold as of January. The mobile VR category also incorporates Google’s Cardboard and Daydream platforms and a number of others. But even these numbers pale in comparison with smartphone-based AR, which even prior to this week was available on hundreds of millions of smartphones running Snapchat, Facebook, Instagram, Pokemon Go, and a variety of other apps with AR features.

Smartphone-based AR is therefore the one mainstream value proposition among a set of mostly niche and small markets today, and that’s therefore where you’d expect many of the major companies to be putting their money and placing product bets. And of course we saw some of that at Facebook’s F8, with a big investment in its vision of AR, which is absolutely smartphone-centric in the short term but leaves room for a roadmap around eventual glasses-based AR down the line. Microsoft and Google, though, remain very focused on other aspects, with Google’s biggest bets on Daydream VR in both mobile and standalone varieties, and only a side bet on Tango smartphone-based AR, which is available in so few phone models as to be basically irrelevant as a consumer value proposition, while Microsoft is exclusively focused on headset-based experiences across the board. (It’s worth noting that the Daydream standalone VR experience doesn’t have a slot on my chart above, but would form a third VR category, though it’s uncertain where it will eventually fit around the other two in terms of size – much depends on the price/performance ratio we see in devices later this year).

Apple’s Entries into VR and AR

Now, along come Apple’s entries into VR and AR, which are quite different not only from each other but also from what we’ve seen from the other major companies already playing in each of these markets. For context, it’s worth noting that despite the lack of official comment at Apple’s own events, Tim Cook hasn’t been shy about articulating a vision in which AR is far more appealing than VR, and that certainly seems to have informed this week’s announcements.

VR – Creation and the Mac

First off, on the VR side, Apple is clearly committed to being a player here, but almost entirely on the content creation side. With the enhancements to Metal and other elements of macOS and some accompanying new hardware options, Apple is attempting to enable developers and other content creators to work on VR and other immersive experiences using Macs. It’s explicitly supporting 360° video and 3D creation on new Macs running High Sierra, and has partnered with a number of both hardware and software companies to build a set of tools for creating and testing VR content.

But in all the announcements about VR this week, Apple stopped short of promising that Macs would become an important platform for consuming VR content, and it said nothing about VR on iPhones – there’s no VRKit for third party headset manufacturers to work with or anything of that nature. This is almost entirely a Mac and creation-centric approach to VR from Apple. And that shouldn’t surprise us given those remarks from Tim Cook, which have downplayed VR as a mainstream technology. Apple understands that many of its developers and users of apps like Final Cut Pro want to be able to create immersive content on Macs, but it isn’t yet ready to commit to supporting the actual end user experiences in a big way across its platforms. That means it’ll be able to hold onto some developers and creators that might otherwise have abandoned the Mac as a platform, but it’s unlikely to do much to dispel the notion that Macs are poor devices for hardcore or VR gaming. And the entirety of Apple’s announcements here were around the Mac, a platform that has an order of magnitude fewer users than iOS.

AR – Consumption and the iPhone

By contrast, Apple’s AR announcements were all about its biggest platform and the end user. Given that the smartphone AR space today is dominated by photo and video filters and lenses, two entry points for Apple would have been logical: adding its own lenses and filters to the Camera and Photos apps in iOS, or opening up the ability for third-party developers to create them (or both). Instead, what we have is a much more expansive vision for smartphone-based AR from Apple, opening up ARKit as a framework for AR in any conceivable context on the iPhone and iPad. So yes, I’ve no doubt we’ll see photo and video lenses and filters that make use of the new AR tools, but we’ll also see much more. Apple demoed several games at WWDC, but it goes even beyond that, as a tweet from just one day after the keynote demonstrated.

Apple isn’t exaggerating when it says it’s just created the largest AR platform out there – though Facebook’s user base is about twice as big as Apple’s, Apple’s developer base is far larger and the monetization opportunities far clearer than around Facebook’s AR platform, which has no monetization options at all today. And Apple’s vision for AR is far broader than anything we’ve seen from any other player, encompassing not just the very similar visions of Facebook and Snapchat, but also the Pokemon Go AR view and many as yet undiscovered implementations. And whereas Facebook only showed canned demos of much of its AR functionality on stage and released a more basic subset, Apple’s ARKit delivers much the same functionality in production today.

So yes, Apple showed us its first forays into both VR and AR at WWDC, but those first steps into each market look very different. Apple’s VR strategy is indicative of its desire to support creators and developers as they mostly build products for consumption on other platforms, while Apple’s AR bet is very much about supporting its own users on its own platforms. The latter is a vastly bigger market today than the former, and much better aligned with Apple’s existing strengths and its user base. That’s going to make it a big player in AR by the end of the year even as it takes much slower, more subtle steps into VR.

And of course none of this closes the door to an eventual entry by Apple into that other flavor of AR, the headset market, or as I think it will actually be by the time Apple enters: the glasses-based variety. Everything it and its developers are learning and building today will be applicable to that eventual more immersive version of AR too.

Apple HomePod: A Speaker with the Bonus of Siri

On Monday, the most awaited and rumored device of Apple’s developer conference was finally announced as the last “one more thing” of an over two-hour keynote: HomePod.

A little later in the day, in a room probably as large as my family room at home, I had the opportunity to listen to HomePod and compare its performance to an Amazon Echo and a Sonos Play:3. I listened to five songs across the three devices: Sia’s “The Greatest,” “Sunrise” by Norah Jones, “Superstition” by Stevie Wonder, “DNA” by Kendrick Lamar, and a live performance of The Eagles’ “Hotel California.” The sound coming from HomePod was crisper and the vocals clearer than on the Sonos, and the comparison with the Echo was even less flattering for Amazon. No matter where I stood in the room, the music sounded great. What I did not get to do was talk to Siri! Even the demo was run from an iPad, which would imply HomePod supports Bluetooth.

The Advantage of Going Music First

On stage, Phil Schiller said HomePod will do for home music what iPod did for music overall. The iPod, of course, did a lot for music from a business model perspective, but I do not think that is what Schiller was getting at. I believe the ‘reinventing home music’ comment is actually closer to what AirPods have done for wireless headsets: they created a more magical experience, from pairing your phone all the way to listening to music. HomePod delivers good-quality sound without the added complexity of having to figure out where to position multiple speakers in a room to achieve that sound. HomePod understands where it is positioned in the room and whether it is paired with another HomePod, dynamically adjusting how the music is played and taking all that burden away from the user.

By focusing on music first, Apple immediately opens up the addressable market to a much broader segment than a smart speaker alone would. There are more people out there interested in buying or replacing speakers who care about good sound quality than there are people wanting a smart speaker that delivers merely OK sound.

While early tech adopters might find it easy to invest in a speaker to get access to an assistant, the price they are willing to pay for one has been set by Amazon and Google, and so far it has not gone past $249. Beyond early adopters, justifying the investment gets trickier if the core value rests on the assistant. Nobody would question quality sound, however. And even if the assistant turns out not to be that useful to you, you would not regret the purchase. That is a smart move when you consider that Apple knows not only music but also hardware.

Siri as a Specialist to Build Trust

During the keynote, Apple was much more intentional about how it described Siri beyond voice. As the different presenters talked about machine learning and artificial intelligence, Siri clearly emerged as a brain, not just a voice.

When it comes to HomePod, Siri becomes a musicologist able to understand my music taste and preferences and deliver the perfect playlist when I ask, “Siri, play some music I like.” Determining what music to play based on taste, and possibly mood and time of day, does not seem particularly difficult, which gives Siri a good chance of getting it right. That accuracy will build confidence in users, who will likely increase their usage and come to trust Siri with other things over time.

Opening up too much too soon when it comes to APIs, however, could spoil that experience, and Apple is not willing to take that risk. The number of things you can do with an intelligent speaker, or any device linked to a digital assistant, is not, in my opinion, what truly matters.

Alexa has over 11,000 skills, but how many are regularly used in a way that makes an impact on the user’s life? In a way, skills are the new apps. The numbers game works for a while, but what it will boil down to is which skills hook me on the device. Everybody is going to be a little different. For me, my Alexa morning briefings and traffic alerts have become part of my morning routine.

The number of devices that can integrate an assistant is also not the most important thing in the overall experience. Just because you can integrate an assistant into a fridge or a washing machine does not mean you should. A voice UI and an assistant are two different things. Will I want to control my washing machine with my voice? Sure. Will I want my assistant to live in my washing machine? No!

Curating the experience so early in the game is important. Our data shows that consumers who tried a digital assistant a few times and did not get the answer or the task they wanted gave up and never tried again. Getting disappointed users to try again is harder than getting consumers to try in the first place.

Don’t Draw Conclusions on Why We Did Not Get to Interact with Siri

Apple said that HomePod is a hub for controlling your home when you are out and about. It also said that Siri on HomePod can do the same things “she” does on the iPhone or the iPad. There are a lot of questions that do not have answers today: will HomePod be able to recognize and differentiate users when it is associated with a family Apple Music account? Will HomePod connect to my Apple TV? Will I be able to stream music services other than Apple Music? Will there be a developer kit?

When it comes to Siri, I would urge you not to conclude that the Siri we know today will be the same as what we discover once HomePod hits the market. We are aware that iOS 11 will bring enhancements to Siri. Aside from a new voice and more context that will be used to suggest answers to follow-up questions, Siri will now support translation from English into Chinese, German, Italian, French, and Spanish, with more languages to come later. Siri will also be able to provide bank account summaries, handle balance transfers, and support third-party note-taking.

I believe the reason we did not get to interact with Siri as part of the demo is that the experience will be very different, and there is, of course, more work to be done; otherwise, HomePod would be shipping now! By letting HomePod out of the bag early, Apple made sure that people in the market for a speaker did not rush to buy what is available today.

Slow and Steady Wins the Race

As much as we like to talk about who is ahead and who is behind, the reality is that the smart speaker and digital assistant markets are still at the start of a long opportunity, and Apple is still right in the game. While Siri might not come across as smart as Alexa and Google Assistant, “she” has been learning consumers’ preferences, habits, and behaviors for years now, and doing so across more than thirty countries, albeit with different skills. Apple mentioned that Siri is used monthly on 375 million devices. This reach is a significant advantage, and maybe the primary source of the discontent some feel right now. With Siri having this kind of advantage, why are we not seeing more? Well, I think we are about to!

The Overlooked Surprises of Apple’s WWDC Keynote

For some, Apple’s WWDC keynote went like they hoped, with the company introducing exciting new products and technologies that hit all the sweet spots in today’s dramatically reshaped tech environment. Augmented reality (AR), artificial intelligence, smart speakers, digital assistants, convolutional neural networks, machine learning, and computer vision were all mentioned in some way, shape, or form during the address.

For others, the event went like they expected, with Apple delivering on virtually all the big rumors they were “supposed” to meet: updated Macs and iPads, a platform for building AR apps on iOS devices, and a Siri-driven smart speaker.

For me, the event was a satisfying affirmation that the company has not fallen behind its many competitors, and is working on products and platforms that take advantage of the most interesting and potentially exciting new technologies across hardware, software and services that we’ve seen for some time. In addition, they laid the groundwork for ongoing advancements in overall contextual intelligence, which will likely be a critical distinction across digital assistants for some time to come.

Part of the reason for my viewpoint is that there were several interesting, though perhaps a bit subtle, surprises sprinkled throughout the event. Some of the biggest were around Siri, which a few people pointed out didn’t really get much direct attention and focus in the presentation.

However, Apple described several enhancements to Siri intended to make it more aware of where you are, what you’re doing, and what things you care about. Most importantly, a lot of this AI- or machine learning-based work is going to happen directly on iOS devices. Just last year, Apple caught grief for talking about differential privacy and the ability to do machine learning on an iPhone, because the general thinking then was that you could only do that kind of work by collecting massive amounts of data and performing the analysis in large data centers.

Now, a year later, the thinking around device-based AI has done a 180 and there’s increasing talk about being able to do both inferencing and learning—two key aspects of machine learning—on client devices. Apple didn’t mention differential privacy this year, but they did highlight that by doing a lot of this AI/machine learning work on the device, they can keep people’s information local and not have to send it up to large cloud-based datacenters. Not everyone will grasp this subtlety, but for those who do care a lot about privacy, it’s a big advantage for Apple.

On a completely different front, some of Apple’s hardware updates, particularly around the Mac, highlight how serious they’ve once again become about computing. Not only did they successfully catch up to many of their PC brethren, they were demoing new kinds of computing architectures—such as Thunderbolt attached external graphics for notebooks—that very few PC companies have explored. In addition, bringing 10-bit color displays to mainstream iMacs is a subtle, but critical distinction for driving higher-quality computing experiences.

On the less positive front, there are some key questions about the detailed aspects of the HomePod’s audio processing. To be fair, I did not get to hear an audio demo, but conceptually, the idea of doing fairly major processing, on a mono speaker, of audio that was already significantly processed during its creation to sound a certain way on stereo speakers strikes me as a bit challenging. Yes, some songs may sound pleasing, but for true audiophiles who actually want to hear what the artist and producer intended, Apple’s positioning of the HomePod as a super high-quality speaker is going to be a very tough sell.

Of course, the real question with HomePod will be how good of a Siri experience it can deliver. Though it’s several months from shipping, I was a bit surprised there weren’t more demos of interactions with Siri on the HomePod. If that doesn’t work well, the extra audio enhancements won’t be enough to keep the product competitive in what is bound to be a rapidly evolving smart speaker market.

The real challenge for Apple and other major tech companies moving forward is that many of the enhancements and capabilities they’re going to introduce over the next several years are likely to be a lot subtler refinements of existing products or services. In fact, I’ve seen and heard some say that’s what they felt about this year’s WWDC keynote. Things like making smart assistants smarter and digital speakers more accurate require a lot of difficult engineering work that few people can really appreciate. Similarly, while AI and machine learning sound like exotic, exciting technological breakthroughs, their real-world benefits should actually be subtle, but practical extensions to things like contextual intelligence, which is a difficult message to deliver.

If Apple can successfully do so, that will be yet another surprise outcome of this year’s WWDC.

How the Maker Movement Makes Creative Dreams Come True

For the fourth year in a row, I took the train from San Jose to the San Mateo, CA Event Center in mid-May, where the granddaddy of Maker Faires took place. Over 125,000 people trekked to the show to check out all types of products, maker ideas, and related services, and to attend various sessions designed to help kids and adults alike become makers.

The Maker Movement started out small some 12 years ago with more of a tech focus, driven by kids’ interest in things like robotics, electrical circuitry, and building their own electronic gadgets – from PCs to the motors that drive miniature cars, small trains, robots, and the like.

But over time, thanks to Make: magazine and Maker Faires around the world, the Maker Movement has gained great steam, and its emphasis on getting people to make things now spans everything from beekeeping, quilting, and hydroponics to full-blown build-it-yourself electronic kits and tools, 3D printers, wood- and metal-etching and shaping tools, robots, drones, mechanical engines, and much more.

Maker Faires have drawn major attention from companies such as Intel, Microsoft, Google, Avnet, Cognizant, Kickstarter, IBM, Oracle, and dozens of others. They understand that many of their future employees may come from the ranks of kids attending Maker Faires today who catch the bug and could eventually become tomorrow’s scientists, electrical engineers, coders, and people who can make things and get things done.

I first wrote about Maker Faires in 2014, when I explained why the Maker Movement is important to America’s future.

After attending last year’s Maker Faire, I wrote about why Maker Faires are so important for our kids and stated, “The Maker Faires’ true importance lies in its focus on getting kids interested in making things. Over the last few years, I have written multiple pieces on STEM focusing on how companies around the world are backing STEM-based programs. All of them see how important these disciplines will be in the future. Still more germane to them is the real concern that if we cannot get kids trained in the sciences, we will not have the engineers and scientists to run our companies in the future.”
http://time.com/4344680/maker-faires/

Although the Maker Faire delights attendees of all ages, the greatest enthusiasm and joy can really be seen on the faces of the kids at the show, as the event encourages and inspires everyone to create and make things of their own. As they go from booth to booth and area to area to see all of the exhibits, models, and electronic tools and kits they can use to make their own creative dreams come true, their smiling faces and excitement are contagious.

While the show attracts a lot of boys, I also saw many girls at this year’s show and can see a rise in their attendance year upon year as this show strives to be highly inclusive and attract people of all ages, genders, and ethnicities.

At this year’s show, crowds went to see swimming drones, a bunny robot, a robotic giraffe, and an all-electric Porsche 911, and the Microsoft coding booth was packed with kids checking out new ways to learn to code. This year’s show also had a VR slant: Microsoft’s booth had a demo of HoloLens, and HTC had a small tent where everyone could see how HTC’s Vive VR goggles worked. Google’s soldering booth, where kids can learn to solder electronic connections, is always a big hit at the Faire. One of the other hottest areas was the drone races.

What is really interesting about the Maker Faire is that technology is not presented as math and science per se but is shown in highly entertaining ways that channel the underlying role science and technology plays in the creation of all types of products, devices, and related services.

While the Maker Faire itself is fun and educational, its reason to exist is extremely important. For many of these kids, attending the Maker Faire introduces them to Science, Technology, Engineering, and Math (STEM) and, in many cases, the Arts (STEAM). These skills and disciplines are important to America’s growth, as technology will have a dramatic impact on all types of industries and jobs in our future. Millions of today’s youth will need many of these STEM and STEM-related skills to get work, become the leaders of our corporations over time, and be the next inventors and innovators of the future. For many of them, the Maker Movement and Maker Faires could be the catalyst that sparks their interest in these skills and steers them toward an educational path that prepares them for many of the jobs of the future.

The next flagship Faire is World Maker Faire New York, September 23 & 24, 2017, at the New York Hall of Science.
For a list of other Maker Faires around the US and the world, check it out here.

Podcast: AR and VR, Essential Phone, Apple WWDC Preview

This week’s Tech.pinions podcast features Ben Bajarin, Jan Dawson and Bob O’Donnell discussing developments in augmented reality and virtual reality from the AWE Expo, analyzing the announcements from Andy Rubin’s Essential, and offering a preview of next week’s Apple Worldwide Developer Conference.

Thanks again to our sponsor, Small.Chat.

You already chat with your team on Slack. Now with Smallchat, you can live chat with customers on your website all from a single Slack channel.

Smallchat harnesses the power of Slack’s customizable notifications and mobile apps to keep you connected with your customers from any device.

Plus, Smallchat can be easily customized to match your unique brand.

Connect more directly, respond faster, and provide a better customer experience. Get Smallchat Pro for 2 months free by visiting small.chat/techpinions.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

My Wish List for WWDC 2017

Next week is Apple’s Worldwide Developers Conference and, along with the rest of the Tech.pinions crew, I’ll be watching the announcements closely. At this late stage, it makes little sense to try to predict what we’ll see next week – there are a number of credible reports out there about at least some aspects of what’s on tap and we’ll know the rest soon enough.

Instead, here are some of the things I’m looking for as both a user of Apple’s products and services and as an industry observer who wants to see the companies in this space keep pushing those products and services forward. I’ve done similar exercises twice in the past, in 2014 and 2016, in case you’d like to see how those turned out.

Allow iOS to Continue to Evolve to Meet Disparate Needs

I’ve called in the past for the creation of a “padOS” – a variant of iOS focused on either iPads in general or the iPad Pro specifically. The naming and separation are largely symbolic, and whether or not Apple does it is less important than allowing iOS to continue to evolve separately in its iPhone and iPad versions. Apple’s current big push around the iPad positions it as a fully-fledged computer that can be used for advanced productivity tasks. That means both its role and the way it functions have to evolve separately from those of the iPhone. The iPad needs to support more sophisticated multitasking, a different home screen layout, and more advanced apps than the iPhone. Apple needs to set the iPad version of iOS apart more clearly for developers so they catch Apple’s vision and believe they can create not just great experiences but great business models around apps on the iPad.

The challenge for Apple is it has to allow iOS on the iPad to evolve in such a way it doesn’t break the core value propositions of focus and simplicity which have always characterized the OS and the devices it runs on. It would be easy to say Apple should simply either port macOS to an iPad form factor or make a touchscreen Mac, but neither would be the right solution. Apple has to strike a careful balancing act between enhancing the power and functionality of the iPad without making it seem less like an iPad.

When it comes to the iPhone, I want to see what Apple can do in augmented reality, which has already been a theme in all three of the previous big developer conferences this year. With dual cameras in the iPhone 7 Plus and arguably the most used cameras in the world, Apple is in a unique position to do interesting things with AR in the native camera app. That’s still where many of us take our pictures, even if they’re subsequently exported to social media apps for sharing. So Apple has a lot of power to combine software and hardware optimization to provide interesting overlays on photos and videos and open those capabilities up to developers. I would guess this might start with lenses and filters but there’s so much more developers could invent and create, given the right tools.

Siri needs to Evolve Faster

Siri was a major focus last year but not necessarily in the ways I’d hoped or expected. Siri extensions were a great new feature and were complemented by extensions across Maps and iMessage. But Apple did relatively little to move Siri forward as a standalone voice assistant, touting relatively few advances in voice recognition, natural language processing, or its own ability to serve up more relevant responses to queries it properly recognizes and processes. I’d like to see Siri as a voice assistant become better at recognizing and understanding what I’m saying and more consistently serving up relevant responses. Apple has acquired a number of companies over recent years which should help with this and I’m hoping we’ll see some big advances this year, including more conversational and contextual understanding.

In the context of Siri, I’d love to see a home speaker from Apple that would compete with the Amazon Echo and Google Home. These devices, specifically the Home, have grown on me over the past few months as I’ve had one in my home but, as someone who’s fairly heavily invested in the Apple ecosystem, I’ve found them frustrating. I’ve had to use other music services, messaging platforms, reminder apps and so on with these devices and, while I’ve made that transition, in some cases I’ve simply found those devices less useful as a result. An Apple speaker that would combine Siri, Apple Music, AirPlay, Reminders, iMessage, and more would be a huge advance for me and, I would imagine, for tens of millions of others. I think others have been reluctant to trust Amazon or Google with their personal data or have suspected these devices were really Trojan Horses for selling more goods or ads and, therefore, resisted them.

But Siri in that device has to be really good because the bar Amazon and Google have set in the home speaker space is high, at least in terms of voice recognition. Siri has so far always been somewhat constrained by the devices which contained it, none of which were designed first and foremost for fantastic voice recognition. A home speaker would remove that excuse. Amazon and Google have shown us what can be done in a dedicated device with mic arrays designed for far-field voice recognition and Apple now has to show it can match them and, ideally, exceed them in other areas such as ecosystem integration and audio quality.

macOS needs to Continue to Integrate and Differentiate

One of the key themes of recent years when it comes to what is now macOS is integration with iOS, whether in the form of features like Continuity and Handoff which directly integrate with other devices, or whether it’s in the form of user interface conventions, apps, and so on which now exist across Apple’s portfolio. But as Apple pushes the iPad Pro towards becoming a more powerful computer, it can’t simply leave the Mac and macOS where it is – it needs to establish a distinct identity and purpose for them in contrast to the iPad Pro. That means continuing to push the boundaries of what the Mac can do that an iPad can’t and demonstrating what it can do uniquely well because of the OS, the power of the hardware, and so on.

Beyond that increased differentiation, there’s still a role for integration and borrowing concepts and user interfaces from iOS. Nowhere is that truer than in iTunes. That software began life as a way to organize music and, later, sync it to iPods, but it has become so much more since. Every time I fire up iTunes on my Mac, I have to first navigate to the right broad section – Apps, Music, Movies, TV Shows, or Books – and then to either a store or a library mode (or more, in the case of Music). There’s simply too much going on in that app, and it needs to be broken out into separate apps for music, video, and syncing at the very least (possibly with a separate store for all of the above) to give those content apps focus. That would streamline consumption, make the apps less confusing and easier to use, and make people more willing to use services like Apple Music on their Macs.

tvOS needs a Clear Focus

Ever since Apple launched the fourth generation Apple TV, it’s had a dual role as both a video consumption device and a sort of low-powered gaming console. That’s caused some confusion among video-centric users who in some cases don’t see why they have to spend significantly more on an Apple TV relative to comparable Roku devices (not to mention much cheaper Chromecasts) but it has also left gamers a little frustrated that the Apple TV doesn’t do more. Apple needs to decide how serious it is about gaming and build the Apple TV hardware and tvOS to match. It either needs to shift more clearly towards the casual gaming that’s always been the hallmark of iOS, or power up both the hardware and software to enable something more like what people are used to from consoles.

But it also needs to continue to improve the TV viewing experience. I actually like the TV app Apple introduced last year quite a bit – it consolidates my viewing across a number of apps like Hulu, CBS, and Apple’s own TV and Movies apps and shows me the next episodes or available movies in each. But it’s still glitchy and incomplete – Netflix is the obvious holdout on the app side, but the app also frequently misses episodes I’ve watched (or ones I haven’t) and serves up the wrong episode next in the queue. The concept is good but the execution needs polish. And I really need a way to separate the stuff my kids watch from the stuff I watch. Too often, the first part of my queue is made up of half-watched cartoons rather than what I want to watch next. Some combination of profiles and/or time-of-day smarts could solve that problem pretty easily.

watchOS needs to Figure Out the App Model

Last year’s WWDC and the fall hardware launch felt like a narrowing of focus for the Apple Watch around health and fitness. I think that was smart and reflected the fact that apps on the Watch really haven’t taken off, despite several tweaks to the model. I would expect to see continued enhancements to the health and fitness features on the Watch, possibly including sleep tracking and, ideally, the ability for third-party Watch bands to incorporate more advanced sensors and feed data back to the Watch itself, though that may have to wait until the fall. Apple still needs to figure out what the role of apps is on the Watch and how to make them easier and more compelling to use.

There’s probably more I could add here but that seems to be plenty to go on with. I’m optimistic I’ll get at least some of what I’m looking for next week and I’m certain there will be things I haven’t thought of but which turn out to be great additions too. One thing is certain: with more ground than ever to cover, Apple’s going to have a tough time getting through everything in one two-hour keynote. I wouldn’t be surprised if we see some more announcements either before Monday or in later sessions.

Are We Wrong about the Future of Urban Commuting?

Over the past few months, there has been a lot of talk about the future of transportation, from cars as a service to self-driving cars. We seem to be expecting many changes that will redefine how we get from point A to point B.

Ride-share companies Uber and Lyft have grown in popularity, capturing consumer dollars and press attention, although not always for the right reasons. If making a brand a verb is a measure of success and awareness, then Uber made it, as “Uber it” sits quite happily alongside “Google it”.

Listening to presenters on tech conference stages talk about how millennials will only want an Uber account, not a car, or judging by how many people we know who use these services and are anxiously waiting for their driving days to be over, might not be the best way to assess how realistic or inclusive this future is.

At Creative Strategies, we just conducted a study of 1,000 US consumers on their preferences when it comes to commuting, as well as their expectations about its future. The key takeaway is that brands and industry watchers might want to consider that the reality of how America feels about this topic is not dissimilar to how America felt about the recent US presidential election. Urban vs. rural, millennials vs. baby-boomers, higher income vs. lower income, male vs. female. All play a role in separating reality from fantasy.

Ride Sharing Services are Growing Thanks to a Few

If you look at where most Uber and Lyft users (based on phone app usage) are coming from, it is quite easy to spot an income gap in the userbase. 31% of Uber app usage and 24% of Lyft app usage over the last quarter of 2016 came from American consumers falling in the top 25% income bracket.

In our research, only 18% of the consumers interviewed said they use Uber. Another 4% stated they use Lyft. Interestingly, 7.5% stated they shifted from Uber to Lyft as Uber has been in the news for all the wrong reasons. As one would expect, usage grows among early tech adopters. 29% of them say they use Uber with 8% saying they use Lyft. What is also interesting, however, is this market segment seems to be even more sensitive to the news surrounding Uber. An extraordinary 25% said they switched from Uber to Lyft. Millennials, who as a group are usually portrayed as having a high sense of social responsibility, do not seem to be impacted by the negativity surrounding Uber, as only 11% said they shifted away to Lyft.

The majority of Americans still own a car (82% to be exact) and another 9% have access to one they share with a family member. Ownership declines slightly among millennials to 72%, while car sharing rises to 14%. When it comes to planning their commute, only 4% will consider whether to drive their car or use a ride-share service and another 2% would consider whether to use a car service or a ride-share service.

It seems safe to say the death of car ownership is highly exaggerated.

Design Trumps Safety and Environment

I have been arguing for quite some time that, while we wait for self-driving cars to get to market, there is a lot of value car manufacturers can deliver, especially around safety. Sadly, however, safety is not the primary factor that drives consumers’ purchase decisions. Design beats safety 50% to 24%. Safety is even less of a priority among early tech adopters: 55% mention design as the key driver and only 10% mention safety.

Safety was, however, on the minds of those consumers planning to replace their car over the next 12 months. 49% indicated blind spot warning systems, 49% mentioned parking cameras, and 36% said auto-braking for collision avoidance were all things they were looking at. Although those potential buyers might not think of these individual features as safety features, that is certainly what they are. That disconnect is interesting, as it could pose a challenge in how to advertise these features in a commercial, calling more for an individual show-and-tell than an overarching claim of safety.

Fully electric cars score high with early tech adopters. 52% see fully electric capability as a key feature of their next car. Only 17% of the supposed environmentally conscious millennials prioritize a fully electric car for their next purchase.

Early Tech Adopters, not Millennials, are Ready for Self-Driving Cars

Big brands like Apple, Google, Tesla and more might all be in the race to deliver self-driving cars but consumers are certainly not holding their breath.

Only 11% of the consumers we interviewed said they are looking forward to computers taking over the driving and 29% stated they would never be seen in a self-driving car. Interestingly, 21% trust the technology but believe regulations will take a long time to make self-driving cars a reality. Early tech adopters are more open to the idea, with 29% looking forward to having computers take over driving and another 26% looking forward to using the time to catch up on reading or other content.

Who consumers trust with bringing to market a reliable self-driving car is not a done deal, at least when it comes to the runner-up brands. Tesla is the winner across all segments but the number two and three spots change quite a bit, depending on the group you are looking at. Overall, 28% of consumers believe in Tesla while 24% believe it will be a traditional car manufacturer. Among early tech adopters, the support goes to more tech brands, with Apple at 30% and Google at 17%, but Tesla still best encapsulates the blend between cars and tech with 34%. Millennials also see Tesla as the brand most likely to deliver a reliable self-driving car, with 38% in the group mentioning the Tesla brand. The number two choice is Google at 23%, followed by traditional car manufacturers at 15% and Apple at 13%. Women hold almost equal faith in traditional car manufacturers (28%) and Tesla (27%). With men, the split between the two is 22% for traditional manufacturers to 28% for Tesla.

I am not surprised consumers don’t have it all figured out when it comes to the future of commuting. A world where we no longer own cars and instead ride as passengers in self-driving ones is quite different from today’s. A challenge for all involved, though, when it comes to brand trust and intent, is that consumers are certainly influenced by how vocal companies are about their plans.

Imagining the Future of Commuting is Harder than Imagining the Future of Computing

How consumers feel about cars and their commute today, coupled with the uncertainty about artificial intelligence taking over from humans, shows that imagining how different our commute will be in 10 years is more complicated than imagining we could take our phone or PC outside of our home or office. It is harder not only because it questions our beliefs in technology but because it asks us to change habits and practices that have been set for decades. Think about waiting to turn 16 or 18 to get your driving license and how empowering getting your first car is. How can getting your own Uber account make up for that? How can you trust a computer in a car not to crash as often as your PC or phone does, knowing your life and the lives of others depend on it? These are big questions I am not sure consumers have answers for yet.

Are AR and VR Only for Special Occasions?

As exciting and fast moving as the topics of augmented reality (AR) and virtual reality (VR) may be, there’s a critical question that needs to be asked and thoroughly analyzed when it comes to these technologies.

Are they well-suited for regular use or just special occasions?

While simple on the surface, the answer to the question carries with it key implications not just about the potential size of the market opportunity, but the kinds of products that should be created, the manner with which they’re marketed and sold, and even when different types of products should come to market.

As an early enthusiast of both AR and VR—particularly after having tried several devices, such as Microsoft’s HoloLens and HTC’s Vive—it was (and still is) easy to get caught up in the excitement and potential of the technology. Indeed, the first time you get a demonstration of a good AR or VR headset (and not all of them give a great experience, by the way), you can’t help but think this is the future of computing.

The manner with which VR engulfs your visual senses or AR provides new ways of looking at the world around you is pretty compelling when you first try them. That’s why so many people and companies, from product makers to component suppliers to software makers to retailers, are so eager to offer an AR or VR experience to as wide a range of consumers as possible. The thinking is (or has been) that once people try it, they’ll be hooked.

While that’s certainly a valid and worthwhile effort, as time has passed, it’s not entirely clear that merely exposing people to AR and VR is all that’s necessary to achieve the kind of market success that many presumed would occur. In fact, a number of recent consumer studies have highlighted what general market trend observations will also confirm—AR and VR products are indeed growing, but at a slower pace than many (including me) expected.

So, the obvious question is why? Why aren’t more people getting into AR and VR and purchasing more of the products and software that provide the experience?

While there isn’t likely one answer to that question, one can’t help but think about the underlying assumptions that are buried in the title and first question of this column. Is it realistic to think that AR and VR are ready for general use, and if they’re not, is it fair to assume that people are willing to spend good money on something they may only use occasionally?

At its essence, that’s the fundamental question that needs to be answered if we are to understand how the AR and VR markets are likely to evolve.

To be fair, some of the technological limitations facing current products certainly have an impact on the market. Large, clunky, wired headsets are not exactly the stuff of mass market dreams, after all.

But even presuming the technology can be reduced to a manageable or even essentially “invisible” form in regular-sized glasses – and it will still be a long time before things really get that small – is the very fact that it has to be put on our face going to keep it from ever really succeeding?

As we’ve seen with smartwatches, just because technology can be reduced down to a reasonable size and into a well-known form, doesn’t mean people will necessarily adopt it. Even cool capabilities haven’t been able to convince people who’ve never adapted to or cared for wearing a regular watch to don a smartwatch. They just don’t want it.

In the case of glasses, it turns out that over 60% of people do wear some kind of eyeglasses (and another 11% or so wear contacts), but the results vary dramatically by age. For the highly targeted segment under age 40, eyewear usage is less than half of that, meaning nearly ¾ of consumers under age 40 don’t wear corrective eyewear. Trying to convince that group to put something on their face other than for occasional special purposes seems like a daunting task, regardless of how amazing the technology inside it may be.
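The arithmetic behind that estimate is simple enough to check in a couple of lines. The percentages are the rough survey figures cited above, not precise data:

```python
# Rough check of the eyewear figures cited above (approximate survey numbers).
overall_eyeglasses = 0.60                      # share of all consumers who wear eyeglasses
under_40_eyeglasses = overall_eyeglasses / 2   # "less than half of that" for the under-40 segment
under_40_without = 1 - under_40_eyeglasses     # under-40s with no corrective eyewear

print(f"Under-40s without eyewear: about {under_40_without:.0%}")  # roughly 70%, i.e. nearly three quarters
```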

Even if we get past the form factor issues, there are still potential issues with the supply of engaging content and experiences once the initial excitement over the technology wears off—which it does for most people. A great deal of effort from companies of all shapes and sizes is happening in VR and AR content, so I do expect things to improve, but right now there are a lot more one-time demos than applications with long-term lasting value.

Ironically, I think it could be some of the easiest and simplest types of applications that end up giving AR, in particular, more lasting power and market influence. Simple ways to augment our knowledge or understanding of real world objects or processes will likely seep slowly into general usage and eventually reach the point where we’ll have a hard time imagining life without them. We’re not there yet, though, so for now, I think AR and VR are best suited for special occasions—with appropriate adjustments in market expectations as a result.

Podcast: Microsoft Surface, LeEco, Lenovo, Huawei

This week’s Tech.pinions podcast features Carolina Milanesi, Jan Dawson and Bob O’Donnell discussing numerous events and companies related to China, including Microsoft’s Surface Pro launch in China, LeEco’s US restructuring, earnings and smartphone shipments from Lenovo, and new PC announcements from Huawei.

Thanks again to our sponsor, Small.Chat.

You already chat with your team on Slack. Now with Smallchat, you can live chat with customers on your website all from a single Slack channel.

Smallchat harnesses the power of Slack’s customizable notifications and mobile apps to keep you connecting with your customers from any device.

Plus, Smallchat can be easily customized to match your unique brand.

Connect more directly, respond faster, and provide a better customer experience. Get Smallchat Pro for 2 months free by visiting small.chat/techpinions.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Is Google waiting for Apple to Popularize Mobile AR?

Google’s I/O developer conference is interesting because the company covers a lot of ground and talks about a lot of things in the keynotes. As a result, not everything that merits mainstage discussion one year is still top-of-mind for executives a year later. Still, I was somewhat surprised to see how little stage time Tango—Google’s mobile augmented reality technology—got at this year’s event. This despite the fact augmented reality has been a key focus of other recent developer conferences, including Facebook’s F8 and Microsoft’s Build. I can’t help but wonder: Is Google struggling to find progress with Tango or is it just waiting for Apple and its developers to validate the mobile AR market?

Two to Tango
Nearly a year ago, I wrote about Google’s decision to turn Project Tango into a full-fledged initiative at the company. Tango utilizes three cameras and various sensors to create a mobile augmented reality experience you view on the phone screen. By tracking the space around the phone and the device’s movement through that space, Tango lets developers drop objects into the real world. In January, I wrote about my experiences with the first Tango phone from Lenovo and noted that a second phone, from ASUS, was also on the way. More than five months later, Google executives said on stage at I/O that there are still only two announced Tango phones. Worse yet, much of the discussion around the technology was a rehash of previous Tango talking points and the one seemingly new demo failed to work on stage. In fact, aside from a video about using Tango in schools (Expedition AR), Tango’s biggest win at I/O seemed to be the fact Google is using a piece of the technology, dubbed WorldSense, to drive future Daydream virtual reality products.

My experience with Tango back in January showed the technology still had a long way to go, with the device heating up and apps crashing on a regular basis. But it also showed the potential of mobile AR. The fact is, developers can drive a pretty rudimentary mobile AR experience on most any modern smartphone, as the short-lived hype around Pokémon Go proved. But for a great experience, you need the right hardware and you need the right apps. That Google hasn’t seemed to make much progress in either, at least publicly, is surprising, as it would seem to be an area where the company has a substantial head start on Apple’s iOS and iPhone.

Waiting for Apple?
One of the biggest predictions around Apple’s next iPhone is that it will offer some sort of mobile AR experience as a key feature. Apple hasn’t said it will, but CEO Tim Cook’s frequent comments about AR, and the company’s string of AR-related acquisitions, certainly point in that direction. Which has me wondering if maybe Google has slowed down on Tango because it needs Apple (and its marketing muscle) to convince consumers they want mobile AR – or, perhaps more importantly, to convince mobile developers to embrace the technology and create new apps.

Apple will likely ship the next version of the iPhone in September or October of this year. But the company’s Worldwide Developers Conference is happening in just a few weeks. So the question becomes: Does Apple begin to talk about augmented reality experiences on the iPhone now or later? If now, does that mean the company will support AR on existing iPhones or will only the new iPhone support the technology? As I noted last September, the addition of a second camera to the iPhone 7 Plus makes it a reasonable candidate for some AR features.

Chances are, if Apple is planning a big augmented reality push for iPhone, it already has key developers working on apps under non-disclosure agreements. But to drive big momentum, the company will need to get a significant percentage of its existing developer base to support it, too. It will be interesting to see if and what Apple discloses during the big keynote on June 5th. If Apple does put its full weight behind such an initiative, it can move the market. Such a move might jumpstart adjacent interest in Tango.

As for Google in the near-term, Daydream VR is clearly a bigger priority, as the company is working with HTC and Lenovo to bring to market standalone headset products and executives noted that more phones would soon support the technology, including Samsung’s flagship Galaxy S8 and S8+. In fact, Google predicted that, by year’s end, there would be tens of millions of Daydream-capable phones in the market. That’s the kind of scale Google likes and the kind of scale Tango can’t hope to achieve. Yet.

The Battle for Smartphone Growth in the US

The US smartphone market is maturing rapidly, with the vast majority of US phone users now using smartphones rather than feature phones. As a result, growth is slowing dramatically. That, in turn, means the vast majority of market share gains will now come from customers switching between vendors and platforms rather than from new users, and sales growth will come almost exclusively from switching and upgrades.

Smartphone Base and Sales Growth both Slowing

Growth in the US smartphone base and sales of smartphones are both slowing. The first chart below shows year on year growth in the smartphone base in the US across the five largest operators:

As you can see, just over two years ago we saw 28 million new smartphone users added to the base year on year, but that number has now fallen to less than half that, at just 13 million in the past year. That slower growth alone would already be having an impact on smartphone sales by the carriers, but we’re also seeing a lengthening of the upgrade cycle as people hold onto their phones for longer, both because their phones are becoming better and more reliable and because new payment structures incentivize that behavior. As such, carrier smartphone sales have been falling for several quarters:

That decline really began to kick in after the iPhone 6 bump was over, in Q4 2015, which was the first decline ever in annualized smartphone sales in the US market. But it has continued since then. That’s the result both of the smaller number of new smartphone customers in the market, as seen in the first chart, and the slower upgrade cycles I mentioned.

Upgraders and Switchers Make Up over 90% of Smartphone Sales

What’s even more interesting is to compare the numbers shown in the two charts and derive the percentage of smartphone sales that go to new buyers versus those upgrading an existing smartphone or switching to a different brand of smartphone. The latter category made up 70% of postpaid smartphone sales in Q3 2013 but over 90% for two of the last three quarters:

In other words, nine out of ten postpaid smartphone sales are now going to people who are replacing an existing smartphone, which means the emphasis for all smartphone vendors in the US should be on two things:

  • Convincing people who have one of your smartphones already to replace it with a newer version
  • Convincing people who own a competing smartphone to switch to one of yours.
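To make the derivation concrete, here is a minimal sketch of the calculation behind that comparison, using hypothetical quarterly figures rather than the actual chart data (net adds slightly understate new buyers, so treat it as an approximation):

```python
# Sketch of the derivation above: any sale not attributable to a net-new
# smartphone user is counted as an upgrade or a switch between brands.
# The numbers below are illustrative placeholders, not the chart's data.
def upgrader_switcher_share(net_new_users_m: float, total_sales_m: float) -> float:
    """Fraction of smartphone sales going to upgraders and switchers."""
    new_buyer_share = net_new_users_m / total_sales_m
    return 1 - new_buyer_share

# e.g. 3M net new smartphone users in a quarter in which 35M phones were sold
share = upgrader_switcher_share(3, 35)
print(f"Upgraders/switchers: {share:.0%}")  # about 91% of sales
```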

It’s no coincidence, therefore, that Tim Cook has mentioned upgrade and switching behavior repeatedly on the last few Apple earnings calls – these two behaviors will make up the bulk of iPhone sales in many markets around the world.

The Value of Driving Faster Upgrades

It also helps to explain why we saw Apple this week launch a new section of its website specifically targeted at switchers from Android. This is where many of Apple’s customers will come from in the future, especially in the US but also in many other markets. It also means smartphone vendors in the US and other mature markets are going to have to make the upgrade argument more explicitly. Given the relatively high loyalty rates for some leading brands, switching is going to make up a minority of total sales and many sales will come from within the base.

In the past, the carriers’ pricing and upgrade policies around smartphones drove people towards a two-year cycle and though that was never the universal pattern, it held true for many people. But now, under the new installment and leasing plans, that default two-year upgrade has gone away and been replaced by an average closer to three years. As such, there’s an enormous amount of additional smartphone sales to be driven by making the case for people to go back to a two-year cycle and for a smaller number to adopt a one-year upgrade cycle.

The value here is obvious – on a base of 100 million phones sold over the last three years, a three-year upgrade cycle suggests 33 million phones sold a year, whereas shortening it by six months to 30 months drives 40 million sales a year. Shortening it by a full year to 24 months would drive 50 million a year. As such, there’s a massive incentive for vendors to drive upgrades and we’re going to see an increasing emphasis on that factor in the coming years across the board.
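That back-of-the-envelope math can be verified directly; the 100-million base is the figure from the paragraph above:

```python
# Annual sales implied by an installed base and an average upgrade cycle:
# if every phone in the base is replaced once per cycle, annual sales = base / cycle length.
BASE_M = 100  # phones (millions) in the installed base, per the text

def annual_sales_m(base_m: float, cycle_months: int) -> float:
    """Millions of phones sold per year for a given average upgrade cycle."""
    return base_m / (cycle_months / 12)

for months in (36, 30, 24):
    print(f"{months}-month cycle: ~{annual_sales_m(BASE_M, months):.0f}M phones/year")
```

Shortening the average cycle from 36 to 24 months raises implied annual sales from roughly 33 million to 50 million, which is why the incentive to drive upgrades is so strong.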

Arguably, some Android vendors are already trying to drive faster upgrade behavior with poor operating system upgrade support after the first eighteen months or so, while Apple has introduced the iPhone Upgrade Program to drive one-year cycles for at least a portion of its base. Leasing or “device-as-a-service” models like these are going to grow in popularity and they’re increasingly going to be driven by the smartphone makers themselves rather than just the carriers. That also fits with a broader push towards subscription models for nearly everything in our digital lives. We should expect those programs to increasingly layer on other elements including insurance plans for devices, content services, additional devices like wearables, and so on.

Millennials will Drive the Digital Transformation of the Workplace

If you ask ten different organizations what digital transformation is, you will likely get ten different answers. As is often the case, the answer depends on where each organization is in the process of integrating technology into their workflow. Many believe digital transformation means getting rid of paper, which of course is an oversimplification and not entirely the point. Others believe it is about using technology to do the same things we have always done. In other words, much of the focus is on digital and not so much on transformation.

Mobile was a Test Run

Let’s be honest, enterprise did not see mobile coming. Sure, they saw mobile phones but the impact smartphones would have on their IT department and business was never clearly understood until it was upon them. Smartphones were the start of employees’ empowerment. Carrier subsidies took away the cost barrier for the latest technology, making it accessible to the masses and those masses wanted to use that technology at work, not just at home.

“Bring Your Own Device” (BYOD) could never have been a trend when technology was so expensive only a few could afford it. We went from wanting to take home the PC we used at work to bringing to work the smartphone we used at home. Smartphones were apps’ Trojan horse. Once we had our phones with us in the office, we wanted to continue to use the same applications and services we used at home. So, to BYOD we added BYOA (“Bring Your Own App”), as it was about the overall experience new mobile platforms such as iOS and Android were delivering.

Most organizations went through three phases: denial, resistance, and acceptance. Denial lasted a few years as devices came through the back door; then came a few years of resistance, with IT trying to impose mobile device management tools to limit what users could do, all in the name of security. Finally came acceptance, with iOS now present in most Fortune 500 organizations and recording a satisfaction rate of 96%.

The Rise of Millennials in the Workforce

What played in enterprises’ favor, at least a little, was the fact that not everyone in their organizations, especially their C-suites, was quick to adopt these new devices and apps. That digital divide is going away faster and faster as new graduates get hired and younger managers move up the corporate ladder.

According to the U.S. Census Bureau, millennials surpassed Generation X as the largest part of the American workforce back in 2015. Projections put millennials at more than one in three adults in America by 2020 and 75% of the workforce by 2025.

It goes without saying millennials are very tech savvy. But the differences with baby boomers do not end there. Research has shown boomers identify their strengths as being hardworking and optimistic, and they are used to navigating organizations with large corporate hierarchies rather than flat management structures and teamwork-based job roles. Millennials are quite drastically different: well educated, self-confident multi-taskers who prefer to work in teams rather than as individuals and want a good work/life balance.

A recent study by Merrill Edge showed millennials have very different priorities in life compared to boomers. With the focus on personal achievements, millennials want to work at their dream job (42% vs. 23%) and travel the world (37% vs. 21%).

What is a Millennial’s Dream Job?

At Creative Strategies, we asked over 1,400 18- to 24-year-olds in the US what would make them not choose a company to work for after they were offered a job. While 35% were just happy to get a job, 46% would see not being able to work flexible hours as a dealbreaker. 21% would walk away from a job that did not let them use a smartphone for work in conjunction with their laptop or desktop, while another 17% could not tolerate an IT department that restricts what can be done with a smartphone. Finally, 14% could not be in a job that did not offer collaboration practices that fit their desired workflow, such as using apps like Google Docs or Slack, as well as video conference support.

Workflow is different for millennials. Aside from prioritizing collaboration, 65% said their preferred method to communicate is messaging apps. When it comes to collaboration, Google reigns supreme with 81% of US millennials regularly using Google Docs, 62% Google search, 59% Google Mail. Outside of Google, Apple iMessage scored the highest, with 57% of millennials saying they regularly rely on it, followed by Microsoft Word with 51%.

When it comes to devices, given a choice of laptop brands by their employer, there are only two brands that seem to matter: 62% would pick an Apple Mac and 14% would choose a Microsoft Surface Pro. Mobility is also no longer a “nice to have”. 34% of millennials say it is extremely important that the software, services and business processes they use for work are available on mobile as well as on a laptop. Finally, when coming into a job, 46% would prefer to be able to choose what laptop is given to them.

Digital Transformation to Attract and Retain

Transforming your business by embracing technology, and the innovation technology empowers in business models and workflows, is necessary to attract talented employees. If that was not enough of a driver for companies, they should think about where the big spenders will come from. If 75% of the workforce by 2025 will be made up of millennials, where do you think the largest source of revenue for businesses around the US will come from? Where will the buying power be if not with millennials? Businesses will need to embrace digital transformation to deliver what their future customers will want.

The Digital Car

The evolution of the modern automobile is arguably one of the most exciting and most important developments in the tech world today. In fact, it’s probably one of the most important business and societal stories we’ve seen in some time.

The leadership at no less venerable a player than Ford Motor Co. obviously felt the same way and just replaced their CEO, despite his long tenure with the company and the record-setting profits he helped drive during his three-year leadership there. The reason? Not enough progress on advancing the company’s cars in the technology domain, particularly with regard to electric vehicles, autonomous driving, and new types of transportation service-focused business models.

As has been noted by many, these three capabilities—electrification, autonomy, and cars as a service—are considered the key trends driving the auto market today and into the future, at least as far as Wall Street is concerned. In reality, the picture isn’t nearly that simple, but it is clear that tech industry-driven initiatives are driving the agenda for today’s carmakers. And it’s pushing many of them into uncomfortable positions.

It turns out, however, that in spite of the importance of this critical evolution of automobiles, this is one of those issues that’s a lot harder to overcome than it first appears.

Part of the problem is that as cars have advanced, and various technologies have been integrated into them, they’ve evolved into these enormously complex machines. Today’s automobiles have as many as 150 programmable computing elements (often called ECUs or Electronic Control Units), surprisingly large (and heavy) amounts of wiring, numerous different types of electronic signaling and interconnect buses, and up to 100 million lines of software, in addition to the thousands of mechanical parts required to run a car. Frankly, it’s somewhat of a miracle that modern cars run as well as they do, although reports of technical glitches and other problems in newer cars do seem to be on the rise.

In addition to the mechanical and computer architecture complexity of the cars themselves, the organizational and business model complexity of today’s car companies and the entire auto supply chain also contribute to the problem. Having evolved over the 100+ year history of the automotive industry, the system of multiple Tier 1 suppliers, such as Harman, Delphi, Bosch and others, buying components from Tier 2 and 3 suppliers down the chain and car brand OEMs (such as Ford) piecing together multiple sub-systems from different combinations of Tier 1s to build their cars is notoriously complex.

But toss in the fact that there are often groups within the car maker that are specifically responsible for a given ECU (such as, say, heating, a/c and other “comfort” controls) and whose jobs may be at risk if someone suggests that the company change to a simpler architecture that combines the functionality of multiple ECUs into a smaller, more manageable number and, well, you get the picture.

If ever there was an industry ripe for disruption, and if ever there was an industry in need of a tech overhaul, the automotive industry is it. That’s why many traditional carmakers are concerned, and why many tech companies are salivating at a chance to get a piece of the multi-trillion (yes, with a “t”) dollar global automotive industry.

It’s also why companies like Tesla have made such a splash. Despite their very modest sales, they’re seen as a credible attempt to drive the kind of technological and organizational disruption that many people believe is necessary to transform the automotive industry. In truth, however, because of the inherent and ingrained nature of the auto supply chain, even Tesla has to follow many of the conventions of multiple Tier 1 suppliers, etc., that its rivals use. The problem is that deeply embedded.

But even as those issues get addressed, they are really just a prelude to yet more innovations and opportunities for disruption. Like many modern computing devices—and, to be clear, that’s what today’s cars have become—the technological and business model for autos is slowly but surely moving towards a software and services-focused approach. In other words, we’re moving towards the software-defined “digital car.”

In order for that to happen, several key challenges need to be addressed. Most importantly, major enhancements in automotive security—both through architectural changes and software-driven advances—have to occur. The potential for life-threatening problems if either standard or autonomous cars get hacked should make this point painfully obvious.

Connectivity options, speed and reliability also have to be improved and that’s where industry-wide efforts like 5G, and specific products from vendors like Qualcomm and Intel can make a difference.

Finally, car companies and critical suppliers need to figure out the kinds of services that consumers will be willing to pay for and deliver platforms and architectures that can enable them. Like many other types of hardware devices, profit margins on cars are not very large, and with the increasing amount of technology they’re going to require, they could even start to shrink. As a result, car companies need to think through different ways of generating income.

Thankfully, a number of both tech startups and established vendors, such as Harman, are working on creating cloud-based platform delivery systems for automotive services that are expected to start bringing these capabilities to life over the next several years.

As with any major transition, the move to a digital car model won’t be easy, fast, or bump-free, but it’s bound to be an interesting ride.

The Case for a Siri Speaker

Rumors have been circulating that Apple will join Amazon and Google and make its own version of a smart speaker to compete with the Echo and Home speakers. Observing the commentary surrounding this rumor has certainly revealed many opinions on the matter, both for and against it. I even sense a debate inside Apple on whether a smart speaker is a fad or whether it has staying power. I lean in the direction of Apple entering this market and competing with Google and Amazon and would like to make the case that this product should exist.

Whole Room Audio
The sales of Bluetooth speakers over the past few years did not get much attention even though they were a growing trend. These small and affordable units hit a pain point for many consumers in that they did not have ample speakers in many places where they wanted to consume their music. Contrary to popular opinion, as these products were starting to gain popularity, most of them rarely left the house and were simply used in rooms where a sound system did not exist (which is most rooms in the average consumer home).

The home environment is very different from the public one. Those who express their pessimism over smart speaker solutions often misunderstand the average consumer home dynamic. In common rooms like the living room, kitchen, patio, family room, etc., access to music is either very limited or non-existent. Bluetooth speakers filled this void and validated the desire of consumers to have access to music in more rooms of the house.

From the value proposition of whole room audio alone, this would be a smart play for Apple and adding the smarts of Siri opens up a rich ecosystem as well. Apple Music is an example of something that would benefit from this hardware significantly. As every available bit of data we have proves, hardware for Apple drives their services. Hardware built to uniquely take advantage of those services will drive it even further. It is not a stretch to say, if Apple sold a smart speaker, subscriptions to Apple Music would increase significantly due to Apple’s ability to tightly integrate hardware, software, and services.

Siri is always with You and Can always Hear You
One of the arguments against a Siri speaker is you always have your iPhone with you, making the iPhone the proper place for you to always access Siri. The flaw in this argument is, while your iPhone may always be with you, or not far from you, can it always hear you? The answer is no. When my iPhone is in my pocket, accessing Siri doesn’t work. When my iPhone is in the living room and I’m in the kitchen cooking, Siri can’t hear me. The counter-argument posits that the Apple Watch or AirPods fill this hole since the Apple Watch is always on my wrist or AirPods are in my ear. The reality, however, is not every iPhone owner will own one of those products in the foreseeable future. And even if this argument is correct, the question remains: where does my music play?

This is where the home dynamic challenges Apple’s traditional and very individualized view of technology. The home is a shared, common environment, so to say everyone should just listen to their iPhone with headphones or AirPods on while walking around the house is a distorted view of what goes on in the home.

Here again is why the music experience and the value of whole room audio alone make a strong case for a Siri speaker to exist. But the challenge of putting Siri into something that can always hear you remains. A smart speaker can be purpose-built to be a better listening device than your smartphone, watch, or even earphones can be. This is one reason why the Amazon Echo is perceived as having better natural language processing than Siri. In a quiet, close-range environment, Siri understands me as well as the Echo does. However, the Echo hears me better in the normal dynamics of the home, thanks to how its microphones are built and tuned.

The Battle for the Smarthome
What smart speakers are showing us is the growing battle for the smart home platform. Voice control has hit its stride as the most convenient way to interact with your smarthome. I’d also add, voice is on the cusp of becoming the mechanism to eliminate the remote with our TV experience.

The battle for the smart home will be one fought over the number of endpoints in your home you can interact with in some way. Amazon wants to get an Echo in every room and so does Google. Using the assistant on your phone makes sense in many contexts. In the home, however, having other ways to interact with your smart assistant beyond just your smartphone, smartwatch, or earphones only increases the potential chances to engage with a smart assistant service.

The goal of companies battling for smart assistant domination should not be to limit potential chances to engage but to extend their assistants far and wide in order to make sure the consumer always has a convenient way to engage. If they don’t, they risk losing key experiences to their competition.

Podcast: Google I/O, IoT World

This week’s Tech.pinions podcast features Ben Bajarin, Carolina Milanesi and Bob O’Donnell discussing Google’s IO event, including details on Google Assistant, Google Home, Android, and AR and VR platforms, along with some brief comments on the recent IoT World conference.

Thanks again to our sponsor, Small.Chat.

You already chat with your team on Slack. Now with Smallchat, you can live chat with customers on your website all from a single Slack channel.

Smallchat harnesses the power of Slack’s customizable notifications and mobile apps to keep you connecting with your customers from any device.

Plus, Smallchat can be easily customized to match your unique brand.

Connect more directly, respond faster, and provide a better customer experience. Get Smallchat Pro for 2 months free by visiting small.chat/techpinions.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Data Consumption Continues to Grow. Why Are Network Equipment Makers Struggling?

Data consumption continues to skyrocket, growing at about 50% per year. Average usage in mobile now exceeds 4GB per month in the US, with video an ever increasing percentage of that. Fixed broadband isn’t standing still either, with the typical Netflixing household consuming north of 200GB per month.

You would think these would be boom times for the major suppliers of network equipment to the operators. This is a market where three players — Ericsson, Nokia, and Huawei — split about $125 billion in annual global mobile network capex. But in reality, Ericsson and Nokia have been struggling of late and the forecast isn’t all that favorable. Ericsson’s mobile network business declined 10%+ in 2016 and they forecast a decline of 2-6% for 2017, indicating in their annual report that the addressable market for networks is flat to down 2% in the 2016-2018 period. Nokia’s numbers are a bit better, in part because 2016 was the first year of full reporting post the Alcatel-Lucent acquisition, but they nevertheless project flat-ish sales for networks this year. Cisco has had a rough time of it as well, announcing a cut of 1,100 workers this week, on top of a 7% workforce reduction in 2016. By contrast, Huawei’s revenues from network operators grew 24% in 2016, although nearly 60% of that business comes from Asia-Pacific (40% China).

Given the continued robust growth in data consumption, why is the network business so crummy and will the picture get any brighter? It is difficult to find any one reason for the relatively flat market. Our analysis boils it down to six broad factors.

1. The global nature of the business. The major suppliers do business in 100+ countries and with hundreds of operators. There are some parts of the world where network spend has fallen off sharply. In Europe, for example, much of the 4G LTE buildout is complete but follow-on work, related to increases in network capacity, has not been as robust as anticipated. Additionally, the macroeconomic environment in certain regions, such as the Middle East and Latin/South America, has been challenging. There has also been operator consolidation in large markets, such as India, which has affected the addressable market.

2. Their share in growth markets under-indexes. The major 4G LTE buildouts in markets where Ericsson and Nokia are strongest, such as North America and Western Europe, have peaked, and those markets are now driven more by harder-to-project capacity enhancements and small cell deployments. Huawei’s share is stronger in geographies where there is still a large 3G/4G buildout.

3. Network operator revenues have flattened. The U.S. market is symptomatic. Although there is continued growth in data consumption, prices have declined and mobile revenues are not growing. This is playing out similarly in numerous geographies, putting pressure on capex spend, with operators pushing their vendors harder on price.

4. The Huawei factor. We don’t see this in the US market because Huawei has been largely kept out of the network equipment business here but Huawei has taken significant share from Ericsson and Nokia and has also been very aggressive on price. Huawei now leads the global market, with 30% share, compared to 28% for Ericsson and 24% for Nokia, according to Dell’Oro Group.

5. Not capturing fair share of the fixed broadband market. Although mobile capex is flat to down in some markets, fixed line (broadband) capex is seeing an uptick, driven by fiber deployments, DOCSIS 3.1 upgrades and, in some geographies, spending on G.fast and PON. Ericsson’s and Nokia’s share in fixed broadband under-indexes their share in mobile. Nokia’s recent acquisition of Gainspeed is a signal of its efforts to grow that market segment.

6. Cost structure has not kept up with network transformation. We are in the early innings of a major transformation in networks from a hardware to a software-driven model. This impacts the equipment suppliers in three ways: they need to lower their cost structure, evolve the skill set of their workforce, and recognize that the competitive playing field will expand.

Even though the picture is currently mixed, with Ericsson especially under some pressure, I am bullish on the long-term prospects. I’ve spent time with senior level executives at the major equipment suppliers in recent months and they all recognize the business will be fundamentally different in five years than it is today.

In some ways, we’re in a bit of a ‘pause period’ before the next big wave of opportunity. First is IoT and the ability to connect the billions of devices that are projected. This market is materializing but growing more slowly and more unevenly than thought. So it’s a long game. Second, the transformation from hardware to software. The suppliers will have to keep in lockstep with their customers, the operators, on this one, and capture their fair share of this market going forward, which will undoubtedly feature a larger and more competitive playing field. There is still lots of work to be done to determine how to price for a software/cloud/network slice world. Third, a lot of resources are starting to be devoted to 5G, but it will be a couple of years before 5G-related spending begins in earnest. Finally, with much of the growth in traffic coming from video, equipment suppliers will need new technologies and offers to capture their fair share of this opportunity.

This transformation will also involve a new suite of potential customers, partners, and competitors. The Ericssons and Nokias of the world will need to do more business with major ‘webscale’ companies, such as Google, Facebook, and Amazon. A more open, software-centric network environment means there will be more competitors and lower barriers to entry but also the opportunity to partner with best of breed firms. Ericsson and Cisco are still, for example, in the early innings of their partnership. Another example is managed services and the broader world of OSS/BSS, where the network equipment suppliers will have to take share from (or work with) firms such as Amdocs, IBM, Accenture, and Oracle.

The future of the network equipment market won’t be one where three firms carve up some 80%+ of the revenues. But there’s plenty of market opportunity, as long as they capture their fair share.

Google’s Fading Focus on Android

Google is holding its I/O developer conference this week and Wednesday morning saw the opening day keynote where it has traditionally announced all the big news for the event. What was notable about this year’s event, though, was the short shrift Android – arguably its major developer platform – received at the keynote, and that feels indicative of a shift in Google’s strategy.

Android – The First to Two Billion

One of the first things Google CEO Sundar Pichai did when he got up on stage to welcome attendees was run through a list of numbers relating to the usage of the company’s major services. He reiterated Google has seven properties with over a billion monthly active users but also said several others are rapidly growing, including Google Drive with over 800 million and Google Photos with over 500 million. But the biggest number of all was the number of active Android devices, which passed two billion earlier this week. Now, that isn’t the same as saying it has two billion monthly active users, since some of those devices will belong to the same users as others (e.g. tablets and smartphones), while others may be powering corporate or unmanned devices. But Android is a massive platform for Google and arguably the property with the broadest reach.

Cross-platform Apps and Tools at the Forefront

Yet, Android was given only a secondary role in the keynote, a pattern that arguably began last year. Part of the reason is Google has been releasing new versions of Android earlier in the year than before, giving developers a preview weeks before I/O and then fleshing out details for both developers and users at the event, rather than revealing lots of brand new information. But another big reason is a concession to two realities that have become increasingly apparent over time. First, Google recognizes it’s lost control over the smartphone version of Android, as OEMs and carriers continue to overlay their own apps and services but also slow the spread of new versions. It takes almost two years for new versions of Android to reach half the base. Second, Google also recognizes its ad business can’t depend merely on Android users because a large portion of the total and a majority of the most attractive and valuable users are on other platforms, mostly iOS.

Together, those realities have driven Google to de-emphasize its own mobile operating system as a source of value and competitive differentiation and, instead, to focus on apps and services that exist independently of it. As such, the first hour of Google’s I/O keynote this year was entirely focused on things disconnected from Android, such as the company’s broad investment in AI and machine learning, but also specific applications like the Google Assistant and Google Photos. No transcript is available at the time I’m writing this, but I would wager one of the most frequently repeated phrases during that first hour was “available on Android and iOS” because that felt like the mantra of the morning: broadly available services, not the advantage of using Android. As Carolina pointed out in her piece yesterday, that’s not a stance unique to Google – it was a big theme for Microsoft last week too.

Short Shrift for Android

But for developers who came wanting to hear what’s new with Android, the platform the vast majority of them actually develop for, it must have made for an interesting first 75 minutes or so before Google finally got around to talking about its mobile OS and, even then, not until after talking about YouTube, which has almost zero developer relevance. When it did, Android still got very little attention, with under ten minutes spent on the core smartphone version. Android lead Dave Burke rattled through recent advances in the non-smartphone versions of Android first, including partner adoption of the Wear, Auto, TV, and Things variants, and one brief mention of Chromebooks and ChromeOS.

The user-facing features of Android O feel much more like catch-up than true competitive advantages. In most cases, they’re matching features already available elsewhere or offsetting some of the disadvantages Android has always labored under by being an “open” OS, including better memory management required by its multitasking approach or improved security required by its open approach to apps. From a developer perspective, there were some strong improvements, including better tools for figuring out how apps are performing and how to improve that, support for the Kotlin programming language, and neural network functionality.

A New Emerging Markets Push

Perhaps the most interesting part of the Android presentation was the segment focused on emerging markets, where Android is the dominant platform due to its affordability and in spite of its performance rather than because of it. The reality is Android at this point, stripped of much of its role as a competitive differentiator for Google, has fallen back into the role of expanding the addressable market for Google services. That means optimization for emerging markets.

Android One was a previous effort aimed at both serving those markets better and locking down Android more tightly but it arguably failed in both respects. Google is now having another go with what’s currently called Android Go. This approach seems far more likely to be successful, mostly because it’s truly optimized for these markets and will emphasize not only Google and its OEMs’ roles but those of developers too. That last group is critical for ensuring Android serves emerging market users well and Google is giving them both the incentives and the tools to do better. I love its Building for Billions tagline, which fits with the real purpose of building both devices and apps for the next several billion users, almost all of whom will be in these markets.

Everybody Wants a Bite of iOS, Apple Remains Mostly Self-Contained

A few hours after publishing this column, Google could be announcing that Google Assistant is going to iOS. Last week, Microsoft announced that several new features for the Windows 10 Fall Creators Update, such as Pick Up Where You Left Off and OneDrive Files on Demand, will be available on iOS.

Everybody wants a piece of iOS or better, everybody wants to get to the most valuable consumers out there. You’ve heard this before — Apple customers are very valuable. You only have to look at what they spend on hardware and the growing revenue they drive at the App Store and subscription services to get an idea as to why other ecosystem owners might want to get to them.

Not Having a Horse in the Race makes You Free

When your main source of revenue is not hardware, being device and, to some extent, platform agnostic becomes so much easier. For Microsoft and Google, the core business revolves around cloud and advertising respectively and, while they sell their own devices as well as monetize their operating systems, they have made the decision to engage with consumers on iOS.

For Microsoft, having Office, OneDrive and Cortana available on iOS and partly on Android allows them to reach more users than they would through their PCs alone. Of course, Microsoft has nothing to lose in mobile, as Windows Phone has never been able to get more than single digit market share in the US. Yet, this tactic is not limited to phones. These apps and services are also available on iPad and Mac, segments where Microsoft and its Windows partners have a very strong interest.

Microsoft’s long-term play was described very well at their Build Conference Keynote with the slogan “Windows 10 PCs heart all devices.” I would have gone a step further and said, “Windows 10 heart all devices” but that would not have been very politically correct towards their partners. Whichever slogan you prefer, the idea behind it is spot on. Let users pick what phone they want to use (or tablet, or wearable) but make sure that, if they have one Windows 10 device, their experience across devices is the best one they could have. When you get the best experience as a consumer, you want to stay engaged and you choose services and apps delivered by Microsoft over what comes pre-installed on the phone.

Google has always had a pretty agnostic platform approach when it came to its apps and services. The experience is often better on Android but it does not mean consumers do not get benefits from using apps and services on other platforms and devices. Google Maps and Chrome might be the best example thus far but soon it might well be Google Assistant. While other platforms might limit how deep of an integration assistants such as Google and Cortana might have, they are still delivering some value to the user and they collect valuable information for the provider.

As we move from a mobile-first to a cloud-first and AI-first world, knowing your users so you can better serve them will be key. Google hoped to do that with Android but, unfortunately, despite millions and millions of users owning Android-based devices, it did not provide the return Google was hoping for. Users of Android simply do not equate to users of Google services. So, making sure to get to the valuable users is key for this next phase, especially as the bond with the user will be so much tighter than any hardware or single service has been able to provide before.

Hardware as a Means to an End

Selling hardware can be a great source of revenue, as Apple can tell you. For Amazon, Google and, to some extent Microsoft, however, hardware is more a means to an end than a source of revenue any of these companies will ever be able to depend on.

Being able to personify or, in this case, objectify, the vision they have for their services and apps is key. Whether it is a home for Alexa and Google Assistant or a TV for Prime Video or an in-car experience for Google Maps, it is important users experience the best implementation of that end to end vision.

Yet, if your business stability does not depend on it, you are not spending marketing dollars to convince buyers to switch their devices or upgrade them. You are instead focusing on delivering the best value wherever you can. As you move to other hardware, however, you take value away. When there is no value left, the hardware itself will look much less appealing to the most demanding users, increasing the risk of churn. Ben Thompson recently made this very point about Apple in China, where iPhone users are so engaged with services from local players that the value of Apple is reduced compared to what we experience here in the US, where we might subscribe to Apple Music, use Apple Pay and so on.

Follow the Money

So where does this leave Apple and its hardware-centric business model? Well, if you have been paying attention to recent earnings calls, this leaves Apple pivoting from hardware to services, with services revenue reaching its highest value yet at $7 billion. App Store revenue is growing 40% year over year with an installed base of 165 million subscribers, and Apple Pay transactions are up 450% over 2016.

For now, it does not look like Apple has much to worry about. Not only are the most valuable customers on iOS and macOS but they are engaged with the services and apps on offer. As the offensive from other players intensifies, however, Apple should look at playing a similar game, even if this means opening up some of its services and apps to other platforms.

Microsoft proudly announced last week that iTunes will be coming to the Windows 10 Store. Many were quick to point out that nobody really uses iTunes anymore but that seems to me a very iOS-centric view. There are still many PC users that use iTunes and they represent an untapped opportunity for Apple Music, a service they might not consider using on their phones but, as part of iTunes on their PC, could look very appealing.

There are stickier services, like iMessage, Apple Pay, and Siri, that could drive engagement through other devices. Think about the ability to iMessage on a PC instead of using Skype. Or the option to create an Apple Pay account that works in other browsers. Or Siri that speaks to you through your appliances.

Finding the right balance between too closed and too open is not easy. We know how open can hurt interoperability but we also know how closed can limit growth. This is not about defending. That can be done by making sure to deliver a superior experience on Apple hardware so that, no matter what other apps and services are available, users will never consider anything but what is pre-installed. It’s rather about making sure no opportunity is left untapped, which means going and getting the money to be had.

Digital Assistants Drive New Meta-Platform Battle

In case you hadn’t noticed, the OS platform battle is over.

Oh, and nobody really won, because basically, all the big players did, depending on your perspective. Google has the largest number of people using Android, Apple generates the most income via iOS, and Windows still commands the workplace for Microsoft.

But the stakes are getting much higher for the next looming battle in the tech world. This one will be based around digital assistants, such as Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana and Google’s Assistant, among others.

While much of the initial focus is, rightfully, around the voice-based computing capabilities of these assistants, I believe we’re going to see these assistants expand into text-driven chatbots, AI-driven autonomous software helpers and, most importantly, de facto digital gateways that end up tying together a wide range of smart and connected devices.

From smart homes to smart cars, as well as smartphones, PCs and wearables that span both our personal and professional lives, these digital assistants will (ideally) provide the consistent glue that brings together computing, services and much more across many disparate OS platforms. In short, they should be able to make our lives better organized, and our devices and services much easier to use. That’s why these assistants are so strategically important, and why so many other companies—from Facebook to Samsung—are working on their own variations.

Another fascinating aspect of these digital assistants is that they have the potential to completely devalue the underlying platforms on which they run. To put it succinctly, if I can use, say, Alexa across an iPhone, a Windows PC, my smart home components and a future connected car, where does the unique value of iOS or Windows 10 go? Out the door….

This overarching importance and distancing from different platforms is why I refer to these assistants as the pre-eminent example of a “meta-platform”: something that provides the potential for expansion, via both APIs for new software development, and the connectivity of a regular platform, but at a layer “above” a traditional OS.

With that thought in mind, it’s interesting to look at recent data TECHnalysis Research collected as part of a nearly 1,000-person survey of US consumers on usage of digital assistants on smartphones, PCs and, the hottest new entrant, smart speakers such as Amazon’s Alexa and Google Home.

As mentioned earlier, in their present incarnations, these digital assistants are primarily focused on voice-based computing and the kinds of applications that are best-suited for simple voice-driven queries. So, to get a better sense of how these assistants are used, respondents were asked in separate questions how often (or even if) they used digital assistants on smart speakers (such as Amazon Echo), smartphones and PCs. The results were combined into the chart below.

What’s fascinating is that, even though the smart speaker category is relatively new (the Echo is less than 2 years old) and Siri, the first smartphone-based digital assistant, arrived in 2011, it’s clear that people with access to a smart speaker like Echo (around 14% of US households, according to the survey results) are using digital assistants significantly more than those who use assistants on smartphones.

While it’s tempting to suggest that this may be due to the perceived accuracy of the different assistants, in a separate question about accuracy, the rankings for Alexa, Siri and Google’s Assistant were nearly identical, meaning there was no one clear favorite. Instead, these results suggest that a dedicated-function device placed in a central location within a home simply invites more usage. Translation: if you want to be relevant in these early stages of the digital assistant battle, you need to have a dedicated smart speaker offering.

Of course, the other challenge is that most people are now increasingly exposed to and use multiple digital assistants from multiple players. In fact, 56% of the respondents acknowledged that they at least occasionally (and some frequently) used multiple assistants, with differing degrees of comfort in making the switch between them. The largest single group, 26%, said they were loyal to and consistently used one assistant and ignored the others, but as competition in this area heats up, those loyalties are likely to be tested.

Digital assistant technology has a long way to go, and current usage patterns provide only some degree of insight into what these assistants’ long-term capabilities will be. Nevertheless, it’s clear that the meta-platform battle for digital assistants is going to have a significantly broader and longer-lasting impact than the OS platform battles of yore. That, by itself, will make them essential to watch and understand.

(If you’re interested in learning more about the complete study, please feel free to contact me at bob@technalysisresearch.com.)