Can Web Music Survive?

While no one would dispute the Internet’s incredibly positive impact on so many aspects of our lives, it is interesting to see that long-held assumptions about it don’t always ring true.

Take, for instance, the common notion that the web is the ideal distribution platform for all kinds of goods and services, particularly digital media. There’s an entire segment of the world’s economy, in fact, which is arguably based on that hypothesis.

But several decades into the internet revolution, there seem to be several glaring cases where web-centric businesses based on these assumptions aren’t really living up to their potential—at all.

One of the most obvious examples is music. For so many reasons and in so many ways, the distribution of music digitally via the web seems like a match made in heaven. Music plays an important role in most people’s lives—an extremely important role for some consumers—so there’s strong built-in demand, and the small size of digitized music files makes them seemingly easy to transfer via the enormous range of routes over which we now have access to the internet.

And yet, here we are in 2016 with more news about the struggles of online music businesses than success stories to share. Market leaders Spotify and Pandora continue to lose money, as do players such as SoundCloud, and smaller services like Rdio and now MixRadio close on a frequent basis. Even the very biggest names in tech—including Apple and Google—have struggled to find a lasting, profitable business model for their large investments in digital music.

For a long time, of course, Apple had great success with iTunes. So much so, in fact, that they changed the nature of the music business. Unfortunately, that success also brought with it an entirely new, and more dour, perspective from the traditional music owners—the large music labels—that’s making new business ventures in music significantly more challenging.

Equally important, tastes in digital music consumption evolved from buying and downloading songs to streaming them. Consumers have become captivated by the option of getting access to an enormous range of musical choices, particularly in conjunction with the unique music discovery and social sharing capabilities that these services offer.

But streaming services don’t seem to be the ultimate solution either. Most are ad-based and struggle with converting free customers to paid ones. In addition, there’s growing resentment in the music industry about the royalty payments made to musicians from these services. In fact, at this week’s Grammy Awards, there was an impassioned plea from the music industry about the inequity of receiving tiny fraction-of-a-cent payouts for streaming music.[pullquote]While people acknowledge that there’s value in content, paying for that content alone doesn’t seem to be a viable way of doing business long-term.[/pullquote]

The problem is, despite these concerns about payouts to the music industry, online music companies still have to invest significant money in order to get access to new music. Perhaps to no one’s surprise, the real issue seems to be in how that money is being distributed.

The other challenge is one that seems to be similar to many other web-based media properties. While people acknowledge that there’s value in content, paying for that content alone doesn’t seem to be a viable way of doing business long-term. In the case of music sites, because they can’t seem to make money selling the music itself, they’re hoping to do so selling tickets to concerts, as well as artist’s t-shirts and other promotional items. It’s not likely to lead to gangbuster profits, but this more indirect model may at least lead to businesses that can survive.

Longer term, however, there’s going to have to be some serious soul-searching and re-examination of long-held assumptions about internet business models, because they’re clearly not all spun from the gold of which many believe the web is made.

Podcast: Twitter, Wireless Connectivity, Sony VR

This week Bob O’Donnell, Tim Bajarin and Jan Dawson discuss the recent earnings from Twitter, describe some of the new wireless connectivity enhancements for WiFi and LTE, and debate the potential for consumer virtual reality products like Sony’s forthcoming Playstation VR offering.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

The Growing Choices in Wireless Connectivity

Sometimes in order to see the big picture, you have to start with a deep dive.

At a recent two-day workshop on connectivity hosted by modem and radio chipmaker Qualcomm, I was bombarded with technical minutiae on everything from the role of filters in the RF front end of a modern modem, to the key elements of 3GPP Release 13, and the usage of carrier aggregation-like functions in upcoming technologies that leverage unlicensed 5 GHz spectrum.

What really hit me about the discussion, though, was how many different ways are now available to wirelessly connect—and how many more are still to come. In addition to the more common forms of WiFi and LTE, there is a tremendous range of new varieties of both standards, either already in place or being developed. These additions are adapting to and adjusting for the real-world limitations that earlier iterations of these technologies still have, and will help us fill in the gaps of our current coverage. Put simply, it’s Connectivity 2.0 (or 5.0 or whatever number you choose to assign to this technology maturation process).

In these days of 4K video streaming and our seemingly insatiable thirst for wireless broadband connections, that’s important. Connectivity has become the lifeblood for our devices—as essential to them as water is to us—and the need to have faster, more consistent connections is only going to grow.[pullquote]In addition to the more common forms of WiFi and LTE, there is a tremendous range of new varieties of both standards, either already in place or being developed.[/pullquote]

In the case of WiFi, the next standard we have to look forward to is 802.11ad. Think of it as the firehose of WiFi—it can’t deliver water very far, but within that confined area, it delivers the water fast—really fast. The 802.11ad standard uses radio waves at 60 GHz to communicate—much different than the typical 2.4 or 5 GHz used by other versions of WiFi—and by doing so it can deliver speeds as fast as most wired network connections (5 Gbps), but you’re limited to being in the same room as the router/access point that’s sending out those signals.
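To get a rough sense of why 60 GHz signals stay in one room, consider the textbook free-space path loss formula, which grows with the logarithm of frequency. This is a simplified sketch—real walls, antennas, and beamforming change the picture considerably:

```python
import math

def fspl_db(distance_m: float, freq_ghz: float) -> float:
    """Free-space path loss in dB (Friis): 20*log10(d_km) + 20*log10(f_GHz) + 92.45."""
    return 20 * math.log10(distance_m / 1000) + 20 * math.log10(freq_ghz) + 92.45

# Compare the WiFi bands at the same 10-meter distance.
for f in (2.4, 5.0, 60.0):
    print(f"{f:5.1f} GHz at 10 m: {fspl_db(10, f):5.1f} dB of loss")
```

All else being equal, 60 GHz loses roughly 28 dB more than 2.4 GHz over the same distance—a several-hundred-fold difference in received power—which is why 802.11ad trades range for raw speed.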

Though the standards committees are still finalizing details, fierce competitors Intel and Qualcomm just publicly demonstrated compatibility between their two offerings last week, ensuring we’ll see the first 802.11ad-equipped products later this year.

Another forthcoming WiFi improvement that’s a bit further out (think 1-2 years) is 802.11ax (don’t even get me started on these crazy naming conventions…), which you might want to think of as the sprinkler system of WiFi.

We’ve all been to conventions, concerts, sporting events and other large venues that, while they technically offer WiFi, don’t exactly offer a great experience to everyone. Sometimes you connect, sometimes you don’t, but the speed is never great. The goal of 802.11ax is to deliver consistent quality connections and speeds in these congested environments, as well as in places like multi-unit housing complexes, shopping malls, etc.

We are also starting to see efforts to extend LTE for applications like these. Key suppliers to the telecommunications industry are making an effort to use what is called unlicensed spectrum—that is, radio bands that are not specifically purchased and used by telco carriers for their own networks—to carry broadband data and, equally important, not interfere with existing WiFi traffic. Qualcomm is working with a variety of other major players, including Nokia, Ericsson, and Intel, on something they’ve dubbed MulteFire, which they hope will bring LTE-like performance with WiFi-like simplicity into the mainstream over the next few years. These companies are expected to make more announcements at the upcoming Mobile World Congress trade show in Barcelona, Spain.

Barcelona will also be the site of more news on the granddaddy of all connectivity developments—5G. Though real-world implementations probably won’t happen in the US until about 2020, many developments—from test beds to radio technologies, infrastructure elements, and applications—are expected to be announced at the show. 5G is being specifically designed to handle extreme variations in waveform frequencies—from the low hundreds of MHz to millimeter wave frequencies above 50 GHz—as well as enormous ranges in power consumption, all with the hope of covering every application from low-power IoT to enormous, real-time data transfers.
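The “millimeter wave” label, incidentally, comes straight from the physics: wavelength is the speed of light divided by frequency. A quick illustration across a few of the bands 5G is expected to span:

```python
C = 299_792_458  # speed of light, in meters per second

# Wavelength in millimeters for a few representative frequency bands.
for freq_hz in (700e6, 2.4e9, 28e9, 60e9):
    wavelength_mm = C / freq_hz * 1000
    print(f"{freq_hz / 1e9:5.1f} GHz -> wavelength {wavelength_mm:6.1f} mm")
```

Above roughly 30 GHz, the wavelength drops to around 10 mm and below—hence “millimeter wave.”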

Keeping track of all these new connectivity options certainly won’t be easy, and getting access to them will require buying devices that specifically support the new standards. The range of options we can look forward to is impressive, however, and will help wireless connectivity become an even more ubiquitous and reliable part of our everyday lives.

Podcast: Virtual Reality, GoPro and Alphabet Earnings

This week Bob O’Donnell, Ben Bajarin and Jan Dawson debate the opportunities and challenges for virtual reality, and discuss the recent earnings results from GoPro and Google’s parent company Alphabet.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

What if Twitter Died?

Scrolling through my Twitter feed, I had an interesting thought the other day. What if Twitter just didn’t exist anymore?

Given all the recent troubles the company’s been going through, it’s no longer a completely unreasonable scenario.

Yet, for many in the tech industry, I have to imagine that borders on the unthinkable—not because the business couldn’t realistically fade away, but because so much of their lives is caught up in the Twittersphere. For some, it would almost be like losing a leg—or worse. I mean, there are people and, indeed, entire companies whose very existence and value seem to be directly tied to their level of influence on Twitter—and not much else.

To its credit, Twitter has managed to create more than just another social network. The micro-blogging service has morphed into something many people seem to have nearly built their lives around. But for all of its attraction and pull, it can also be an incredibly addictive time suck that regularly draws people into minutes (or even hours) of distractions—sometimes leading to useful discoveries, but more often than not feeling like a waste of time.

To be fair, Twitter is an extremely valuable service for discovering news in real time, finding out what people have to say, and, as a writer, promoting what I and my friends and colleagues have written or participated in to a wider audience.

But nearly ten years in, the service has also become a shouting gallery for “traditional” celebrities and a lot of people in the tech industry who somehow believe Twitter has made them celebrities.

Harsh? Sure. Reality? I think so.

In fact, this seems to be one of the fundamental problems of Twitter. It’s appealing to Hollywood, TV, music and sports celebrities as a means to interact more intimately with their fans and share the kinds of details they’d never provide to traditional celebrity media. It’s appealing to the tech industry as a mouthpiece for those who want to determine the course of what is or isn’t important. The digital taste-setters, so to speak.

But for mainstream business and consumer users? Not so much. Arguably, this is the biggest problem with Twitter—it can’t seem to stretch beyond its celebrity, celebrity follower, and tech roots. If you aren’t into celebrities or the tech industry, Twitter just isn’t that appealing, especially given all the other options for online social interactions.[pullquote]If you aren’t into celebrities or the tech industry, Twitter just isn’t that appealing, especially given all the other options for online social interactions.[/pullquote]

Despite these points, I think the navel-gazing value of Twitter to the tech industry is so high, I seriously doubt they’ll let Twitter actually die. Someone with enough money and enough self-interest will likely make sure that, no matter what, Twitter will continue in some shape or form. Eventually, its value may start to fade, as some have already started to argue, but at least the Twittersphere will have a few years to adapt and find new alternatives.

The fundamental challenge is that a publishing service essentially based on self-promotion, self-aggrandizement, and self-importance is, at some point, going to run into the wall of indifference. Not everyone cares to read about what the self-elected are doing all the time.

Real time publishing, real time interactions, and real time discovery are all incredibly important capabilities, especially in today’s split second society. But there is an increasingly wide range of alternatives for people to leverage and it’s not entirely clear to me that Twitter has all the tools it needs to weather the current climate.

As a reasonably long-time, regular user of Twitter, I would be sad to see it go, but that doesn’t mean I can’t imagine life without it. I can and, increasingly, it seems many others are starting to see that potential too.

Podcast: Kantar Smartphones; Apple, Microsoft, Facebook and Amazon Earnings; FCC Set Top Box Rules

This week Bob O’Donnell, Ben Bajarin, Jan Dawson and special guest Carolina Milanesi of Kantar WorldPanel discuss Kantar’s latest smartphone data, analyze the earnings reports from Apple, Microsoft, Facebook and Amazon, and debate the potential impact of the FCC’s efforts to open up standards around set-top boxes.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Smart Home Safety Evolution: Physical to Digital

One of the greatest yet most easily overlooked benefits of modern technology and modern science is the ability it gives us to both literally and metaphorically “see” things that were not visible or understood before. Unfortunately, when it comes to smart homes, home networks, and data connectivity in general, we’re still essentially blind to what’s actually going on.

Sure, with a bit of effort, you can find out how much of your data plan you’ve used in a given month, but figuring out what kind of data packets are traversing your home network and where they’re coming from? Fuhgetaboudit.

Truth be told, our collective naiveté on the subject was acceptable in simpler, digitally safer times. After all, network packet analysis hasn’t likely been very high on most consumers’ to-do lists and thankfully, there wasn’t that much to worry about.

Now, however, things are different — much different.

The horror stories about the kinds of gaping security holes that many IoT and smart home products are opening people up to are just getting started. Unfortunately, they’re bound to get worse before they get better—and that may be a while. I understand the righteous indignation generated by government-driven surveillance programs, but why isn’t there even greater outrage at efforts being made to parlay our devices into digital zombies hell bent on taking down web sites through DDOS (distributed denial of service) attacks, or tapping into home security cameras to watch sleeping babies?

Elon Musk and Stephen Hawking may be concerned about the future of autonomous robots and their impact on humanity, but I’d really like to know what kind of potentially nefarious data from places unknown is zipping around my home network as I write.

I have to admit I find it particularly ironic that many of the strongest proponents of smart home products focus on the security benefits that connected video cameras and other related devices offer. In fact, nearly 75% of smart home products sold are related to security according to market research firm NPD, but they’re 100% focused on physical security.

Now, don’t get me wrong. Physical security is incredibly important, but it’s high time to start placing serious (and perhaps equal) emphasis on digital and cyber security. Wouldn’t you want to be notified if you started receiving or spitting out a flood of unrequested traffic from or to another country on your home network in the same way that you receive a notification if your connected video camera sees your dog moving? Even better, wouldn’t you want tools that can start automatically blocking that traffic for you and notify you they’re doing so?[pullquote]Physical security is incredibly important, but it’s high time to start placing serious (and perhaps equal) emphasis on digital and cyber security.[/pullquote]
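As a sketch of what such a tool might do under the hood, here’s a deliberately simplified example. The threshold, the packet-record format, and the function name are all invented for illustration—a real product would need actual packet capture and far smarter heuristics:

```python
from collections import defaultdict

# Hypothetical threshold: 50 MB of outbound traffic to a single
# unfamiliar host is suspicious for most home networks.
ALERT_BYTES = 50_000_000

def find_suspect_hosts(packets):
    """packets: iterable of (remote_ip, byte_count) tuples observed leaving the network."""
    totals = defaultdict(int)
    for remote_ip, nbytes in packets:
        totals[remote_ip] += nbytes
    return [ip for ip, total in totals.items() if total > ALERT_BYTES]

# A compromised camera quietly flooding one host would be flagged;
# ordinary light browsing traffic would not.
sample = [("203.0.113.7", 60_000_000), ("198.51.100.2", 1_200)]
print(find_suspect_hosts(sample))  # -> ['203.0.113.7']
```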

Even beyond obvious examples like that, why can’t I get more insight into what’s actually going on inside my network—where data is coming from and going, to which device, etc? While we’re at it, how is it that, in 2016, I still can’t tell (without a lot of effort) exactly which outlet and which device(s) in my home are the ones sapping up the most “vampire” power?

Part of the challenge, of course, is figuring out how to deliver this kind of information in a way normal human beings can understand. Making sense of network traffic isn’t something people want to have to think about but we can no longer afford to ignore it. Clever software designers and smart engineers should be able to tackle this challenge and create a way to make the invisible realities of our increasingly digital world more tangible, relatable, intelligible, and concrete.

I see this as one of several great current challenges: applying analytics to our personal big data—the data, environment, and events that happen all around us. From health care to finance to transportation to communications, there’s a staggering number of opportunities to organize and make sense of this contextual information in a way that’s actually meaningful to our lives.

The first step along this path, arguably, should be in the home, our place of safety and refuge. But, while the concept of a connected home may be interesting, until it can keep the occupants safe from both physical and cyber threats, I don’t see how we can justify calling it smart.

Podcast: End of Year and Holiday Sales Results for Tech Products

This week Bob O’Donnell, Ben Bajarin, and special guest Steve Baker of NPD discuss the final holiday and end-of-year sales numbers for consumer devices in US retail and online stores, and debate opportunities for new categories in 2016.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

The Promise and Confusion of USB Type-C

Okay, I’ll admit it. It’s not exactly the sexiest topic in the world.

But, when it comes to the practical, day-to-day existence with all of our modern devices, connectivity is an important story. When you survey the landscape of connectivity topics, it’s hard to ignore the impact various types of USB have had. Sure, the multiple new wireless standards tend to get a lot more attention. However, for most people, wired connections between devices are still an extremely common means of making things work and no wired connection is more ubiquitous than USB. (Except for power, but we’ll get to that in a second.)

The latest iteration of the USB connector is called Type-C, and while it was officially introduced in 2014, it’s really just starting to appear on the devices we can buy and use. Apple’s 2015 MacBook was among the first to support the new connector but it’s now showing up on all kinds of Windows PCs, smartphones, monitors, docking stations, storage peripherals and more. Like Apple’s Lightning connector, the USB Type-C connector is reversible, meaning you can plug it in in any orientation and it will work (and won’t get jammed in the wrong way).

USB Type-C is also associated with, though officially different from, USB version 3.1, which is currently the highest speed iteration of the standard. It supports transfer rates of 10 Gb/sec, a nearly 1,000x improvement over the 1996-era USB 1.0 spec, which topped out at 12 Mb/sec.
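To put that speedup in everyday terms, here’s a rough calculation of how long a 4 GB video file would take at the nominal signaling rate of each USB generation (real-world throughput is always somewhat lower than these peak figures):

```python
FILE_BITS = 4 * 8 * 10**9  # a 4 GB file, expressed in bits

# Nominal signaling rates for each generation, in bits per second.
rates = {
    "USB 1.0 (12 Mb/s)": 12e6,
    "USB 2.0 (480 Mb/s)": 480e6,
    "USB 3.1 (10 Gb/s)": 10e9,
}

for name, bps in rates.items():
    print(f"{name}: {FILE_BITS / bps:7.1f} seconds")
```

Roughly 45 minutes on USB 1.0 versus about three seconds on USB 3.1.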

Equally important, USB Type-C supports several alternate modes, most notably the ability to carry up to 100W of power over the line, as well as the ability to drive up to two 4K displays at a 60Hz refresh rate. Best of all, it can do this simultaneously with data transfer, allowing a single connector to theoretically deliver power, data and video over a single line. Truly, this should be the one cable to rule them all.

As we all know, however, there’s often a big difference between theory and practice. The crux of the problem is that not all USB Type-C connectors support all of these different capabilities and, with one important exception, it’s almost impossible for an average person to figure out what a given USB Type-C equipped device supports without doing a good deal of research.

The key exception is for Thunderbolt 3.0, a technology originally developed by Intel. It’s a different interface standard from USB 3.1, but uses the same USB Type-C connectors. Thunderbolt 3.0 connectors (which, by the way, are different from previous versions of Thunderbolt—versions 1 and 2 used the same connectors as the mini-DisplayPort video standard) are marked by a lightning bolt next to the connector, making them easy for almost anyone to identify. To be clear, however, they aren’t the same as the somewhat similarly shaped Lightning connectors used by Apple (which, ironically, don’t have a lightning bolt next to them). Confused? You’re not alone.

Arguably, Thunderbolt 3.0 is essentially a superset of USB 3.1, as it can carry full USB 3.1 signals at 10 Gb/sec, as well as PCIe 3.0, HDMI 2.0 or DisplayPort 1.2 video signals, 100W of power, and Thunderbolt data connections at up to 40 Gb/sec, all over a single USB Type-C connection. The only downside to Thunderbolt 3 is that it requires a dedicated Thunderbolt controller chip in any device that supports it, which adds cost. Also, full-bandwidth Thunderbolt 3 cables can be expensive, because they require active electronics inside them.
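A back-of-the-envelope budget shows how those numbers fit together. Ignoring blanking intervals and protocol overhead (which matter in practice), two uncompressed 4K/60 streams plus full-speed USB 3.1 data still fit within Thunderbolt 3’s 40 Gb/sec:

```python
def video_gbps(width, height, fps, bits_per_pixel=24):
    """Raw (uncompressed) video bandwidth in Gb/s; blanking and overhead ignored."""
    return width * height * fps * bits_per_pixel / 1e9

two_4k60 = 2 * video_gbps(3840, 2160, 60)  # two 4K displays at 60 Hz
usb_data = 10.0                            # full USB 3.1 data, in Gb/s

print(f"Two 4K/60 streams: {two_4k60:.1f} Gb/s")
print(f"Plus USB 3.1 data: {two_4k60 + usb_data:.1f} of 40 Gb/s available")
```

Each uncompressed 4K/60 stream needs roughly 12 Gb/sec, so the whole bundle comes in around 34 Gb/sec—comfortably under the link’s capacity.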

Standard USB Type-C, on the other hand, can be implemented by device makers a bit less expensively and full bandwidth cables, while also active, tend to be cheaper than Thunderbolt versions. However, along with this cost decrease comes the opportunity for confusion. Just because a device has USB Type-C connectors does not mean it supports power or any other alternate mode, such as support for video standards DisplayPort or MHL (used on some smartphones to drive larger displays). In fact, technically, it’s even possible to have USB Type-C ports that don’t support USB 3.1, although in reality, that’s highly unlikely to ever occur.

The real problem is there are no simple means of demarcation or labelling for different varieties of USB Type-C. One of the goals of the standard was to produce a much smaller connector that would fit on smaller devices—leaving little room for any type of icon.[pullquote]The real problem is there are no simple means of demarcation or labelling for different varieties of USB Type-C.[/pullquote]

The other issue is, with the launch of USB Type-C, we’re seeing one of the first iterations of what I would call “virtualization” of the port. Until recently, each port had its own connector and carried its own type of signal. USB carried data to peripherals, Ethernet handled networking, video connectors such as HDMI and DisplayPort carried video, etc. Now the rise of multipurpose ports such as USB Type-C has broken that 1:1 correlation between ports and functions. While this consolidation is clearly an important technical step forward, it also points out the opportunity for confusion if user education and basic labelling techniques are overlooked.

On the bright side, this “virtualization” of ports will lead to some of the most useful docking stations and port replicators we’ve ever seen, particularly for notebook PCs, tablets, and even smartphones. You’ll be able to plug one cable into your device and get access to every single port you can imagine, as well as provide power back to the device. We’ll also start to see new types of peripherals, such as single-cable monitors that act as hubs for other devices, receiving power and video from the host device while also enabling the connection of speakers, USB storage, and even a second daisy-chained monitor.

Eventually, most of these connections will likely become wireless but, given the need for power and the expected challenges around delivering wireless power to many devices, it’s clear variations on USB Type-C, particularly Thunderbolt 3.0 and later iterations, will be around for some time to come.

The proliferation of USB Type-C clearly marks the dawn of a great new era of connectivity for our devices, but it may require a bit of homework on your part to fully enjoy it.

Podcast: GoPro, Apple and Time-Warner, PC Market

This week Tim Bajarin, Ben Bajarin, Jan Dawson and Bob O’Donnell discuss the recent travails for GoPro, analyze the rumors around Apple purchasing Time-Warner, and debate the opportunities for the PC market moving forward.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

The Hottest Computing Device? Cars

I’m not sure how else to describe it.

Imagine a device that features several computer subsystems, at least one of which offers supercomputer-level performance, the highest-end GPUs, multiple large, high-resolution displays, surround sound-enabled high-resolution audio, integrated 4G connectivity, multiple function-specific controllers, voice- and touch-based modes of input, a dedicated connectivity bus for internal communications between subsystems, millimeter-level location and mapping accuracy, multiple high-resolution camera inputs, and a high-level operating system that can even virtualize guest OSes.

Even better, imagine this computing capability—which is well beyond anything an individual can purchase or use today—is essentially “subsidized” into the price of the device.

Sound good?

Well, that’s essentially what the cars of the near future (not quite present) will be. The ultimate personal computing device, wrapped into the sexy, sleek package of an automobile.

Walking around the CES show in Las Vegas last week and meeting with car makers and component suppliers, one couldn’t help but get the sense that, yes, the future of computing is mobile, but this is mobility on a whole other scale.[pullquote]With automotive computing, this is mobility on a whole other scale.[/pullquote]

Throw in the growth of parking sensors, traffic monitors, and other metropolitan-focused IoT infrastructure elements, and you can imagine cars as the “client” in the network of a smart city. That’s network and cloud-based computing at a whole other level as well.

As a long-time observer of computer-based devices, from PCs to smartphones to wearables and more, it’s fascinating to see how much and how quickly the latest core component technologies from these devices are making their way into automobiles.

Of course, as a long-time tech industry observer, I have also found it interesting to think of the strange bedfellows new car technologies could create. Ford’s new Sync3 platform, for example, is based on a high-level OS made by BlackBerry-owned QNX, but offers the ability to essentially “virtualize” the projected automobile platforms of both Apple (CarPlay) and Google (Android Auto). The QNX host OS remains the primary platform, but this architecture lets car buyers choose to use either (or neither) of the two mobile platforms within the context of the overall Ford-branded in-car experience. Smart.

What’s also interesting to observe is how compelling a use case cars represent for computing power. At Nvidia’s CES press conference, for example, the company introduced its Drive PX2 platform, a next-generation Tegra-powered supercomputer on a board specifically focused on autonomous driving and ADAS (Advanced Driver Assistance Systems). The company went into great detail to illustrate just how complex it is to integrate all the various camera, lidar, and sensor inputs an autonomous car needs, and provided the best justification I’ve seen for why companies are working on delivering that much computing capability for personal computing devices. Of course, it also confirmed my suspicions that truly autonomous cars are still a long way off. In the near term, however, the ability to leverage that kind of power for ADAS applications means it has relevance for today’s auto designers.

The show also allowed mobile powerhouse Qualcomm to unveil its first major in-car infotainment design via a new deal with Audi. It’s interesting to note that, in a number of designs, the infotainment system is separate from the ADAS and/or autonomous driving features, with a fully secured gateway fixed in between them. Yet another example of how traditional computer technologies are being integrated into today’s cars. Of course, from a computer perspective, this separation also means there are multiple subsystems running inside the automobile, further highlighting how much technology some of these cars will soon have.

Part of the reason for all of this technology inclusion stems from a point I made at the beginning. With price points measured in the tens of thousands, it’s a lot easier to integrate a wider variety of more expensive components into car prices than to create a standalone computing device with the same capabilities. Nevertheless, it’s becoming clear that, if you want to get access to the most powerful personal computing device possible, it may be the one with wheels.

Top Tech Predictions for 2016, Part 2

In last week’s column I offered five tech predictions for the new year and, in this week’s column, I finish my Top 10 Predictions list with five more.

Prediction 6: Wearables Make An Impact…in Business

Wearables were one of the hottest topics going into 2015 and, while they certainly made an impact this past year, they didn’t exactly change the world. The Apple Watch in particular had reasonable success but still lags behind Fitbit’s wearables from a market share perspective—definitely not the outcome many had predicted at the beginning of 2015.

Part of the challenge is the vast majority of wearables are seen as accessories for fitness enthusiasts, not essential devices for mainstream consumers. In addition, the questionable accuracy and limited capabilities of some of the early devices (and the sensors built into them) have led many people to question their long-term value. Toss in the numerous anecdotal stories about people giving up on their wearables after only a few weeks of use and you have the perfect storm of factors to limit the impact of this category.

In order to reach a wider audience, wearables must offer a more compelling value equation to attract and hold onto a wider range of people. Instead of the consumer market, I believe there’s a better opportunity to achieve this in the business world. Wearable devices could prove to be an ideal workplace enhancement that could not only replace building security cards with a more secure, biometric form of authentication, but also serve as the means to log into your work devices, secure websites, and more. The savings generated by eliminating the IT costs associated with resetting passwords alone could easily justify the necessary infrastructure expense (not to mention the greatly increased security benefits that come with it).

On top of that, some businesses are starting conversations with healthcare organizations to collectively track the activity level and health of their employees in order to offer better insurance rates. Yes, there are some potentially scary big brother abuses that could be possible here, but a well-implemented program could be a big win all around. Plus, it would provide yet another justification to do widespread deployments of wearables in the workplace.

Prediction 7: First Products with Foldable Displays

As a longtime follower of the display industry, I’ve been tracking its technology developments for nearly two decades. What I’ve learned is that core display technologies can have an incredibly important impact on the devices that deploy them—think of the high-resolution displays we’ve become accustomed to on our phones or the ultra high-resolution displays driving today’s 4K TVs. They are one of the key defining factors for successful devices.

The most exciting development going on in displays right now is the effort to create foldable or bendable displays. Truth be told, there have been prototypes of these technologies at display trade shows for over a decade but, as with many things in the display industry, it’s much easier to build single prototypes than it is to mass produce them. Nevertheless, it looks like 2016 will be the year when we start to see the first examples of foldable/bendable displays in real products (or at least, finished product prototypes). Along with these displays will come some of the most dramatic changes in form factor any of us have ever seen. A tablet that turns into a smartphone or vice versa? The possibilities are tantalizing.

The first versions of foldable displays will likely only be able to fold outwards, meaning you could take a flat display and end up with displays on the outside of the fold. The reason for this is it’s apparently easier to stretch the materials at the fold than it is to squeeze them together, as you would need to do for an inward-folding display. That will limit product designs to some degree, but expect the evolution of foldable displays to start making the lines separating product categories even less meaningful than they’ve already started to become.

Prediction 8: The Biggest Innovation in IoT Will Be Business Models

The world of IoT has been interesting to watch and it’s relatively straightforward to imagine 2016 will be a key year for it. However, when you start to dig into the actual technologies used to drive the Internet of Things, you realize it’s actually pretty simple and, in some cases, pretty old stuff. Basically, we’re talking about using low-power radios to connect a bunch of devices powered by low-power CPUs or even embedded microcontrollers with some simple sensors. The magic, of course, is in the software and what you can do with the data these connected devices generate.

Even there, however, the analysis is typically straightforward and sometimes falls into what I call a “one and done” mode, where an insight is made and all you need to do is monitor the data and react accordingly. To make the results meaningful, you often have to scale the deployment to a very large degree and that ends up requiring significant capital investment.

This is where business model innovation will start to kick in because, for many organizations, the capital expenditures for large IoT deployments either don’t really have a great ROI (Return on Investment) story or, even if they do, they’re just too large to justify versus other pressing projects. That’s why the biggest innovations in IoT won’t be on the technology side in 2016, but in how companies piece together solutions that make deploying IoT a win-win for all sides. Right now, too many of the big IoT concepts (smart cities, anyone?) are really just technology for technology’s sake. While they might sound cool in theory, without a clear business value, they’ll end up staying hypothetical talking points instead of driving real-world benefits.[pullquote]Right now, too many of the big IoT concepts (smart cities, anyone?) are really just technology for technology’s sake.[/pullquote]

Prediction 9: Connected Homes Will Continue to Underwhelm

While it’s at a much different level conceptually, some of the exact same issues will also keep the connected home market from reaching its full potential in 2016. Yes, we’re starting to see a few more interesting products but, for most consumers, the clear value equation for a connected home just isn’t there. Admittedly, the idea of my lights turning on and my thermostat adjusting the temperature of the house automatically when my car pulls into the driveway is cool (especially when you first install it), but really, is it that critical to my life, particularly a year or two later? For the vast majority of people, the answer is a simple no. In fact, after a while, many of these products start to feel pretty “gimmicky.” If you can afford them, lots of smart home products are nice to have, but they’re definitely not in the “need to have” category.

Additionally, 2016 will, unfortunately, likely be a year when stories about home hackings and other security-related issues become commonplace. For example, if you put a security camera on a network in your house, the ability for you to view it also opens up the possibility for others to do so. The benefit/privacy tradeoff for smart/connected home products is a question consumers are going to be wrestling with for some time.

Finally, on a practical level, the ongoing standards battles at multiple levels of the home networking “stack” are going to make the process of assembling connected home products very difficult for even technically savvy consumers. Knowing whether one company’s products will work with another’s, and whether you’ll need multiple applications to control it all (and from which devices), is not going to be simple. Unfortunately, that potential for confusion will likely limit market acceptance for much of the rest of this decade. Yes, I think it’s that bad.

Prediction 10: VR Stalls But AR Makes an Impact

If there’s any technology that’s been overhyped for a long time, it’s virtual reality. Heck, I remember reading in the 1990s how VR was going to dramatically change our lives in the near future. Well, here we are in 2016, and it’s yet to really have a big impact. Yes, we’ll see some interesting new product introductions in the world of VR this year, but nothing I’ve seen suggests it will grow to be much more than a niche, primarily for gaming. Now, gaming continues to be a growing market, so there’s still money to be made here, but the idea that VR will be as widely adopted as even wearables does not seem likely in 2016.

I also expect augmented reality products like the Microsoft HoloLens to have a pretty modest impact this year, especially given the expected high price points for these types of products. However, longer term, I believe AR offers the potential for entirely new means of interacting with digital data in a way that will appeal to a much wider audience than the closed-loop world of VR ever will. In a sense, AR is essentially a new display method for computing and, just as everyone who computes leverages a display (or often multiple displays) of some kind, I can foresee a day when everyone who computes could leverage an AR-type of display.

Now, some may argue you could make the same case for VR and, in a sense, you can. However, every display and technology advancement we’ve enjoyed over the last several decades has been done within the context of the real world around us. AR seems like a much more logical step in that evolution than VR for the vast majority of people.

Of course, near term, even AR is likely to be limited to specific professionals or wealthy consumers who want to have the option of using an alternative display for a portion of their computing time. But of all the technologies I’ve seen over the last few years, augmented reality devices offer the most compelling vision of our computing and device future that I’ve ever come across. I, for one, can’t wait to see how they move us all forward.

Top Tech Predictions for 2016

Another year has come and gone, and in the tech world, it seems not much has changed. 2015 was arguably a relatively modest year when it comes to major innovations, with many of the biggest developments essentially coming as final delivery or extensions to bigger trends that started or were first announced in 2014. Autonomous cars, smart homes, wearables, virtual reality, drones, Windows 10, large-screen smartphones, and the sharing economy all made a bigger initial mark in 2014 and continued to evolve over this past year.

Looking ahead to 2016, I expect we will see changes that, on the surface, also don’t seem to amount to much initially, but will actually prove to be key foundational shifts that drive a very different, and very exciting future. Here are the first five of my predictions of key themes for the new year (the next five will appear in next week’s column).

Prediction 1: The Death of Software Platforms, The Rise of the MetaOS

Proprietary software platforms like iOS, Windows, and Android have served as the very backbone of the tech industry and the tech economy for quite some time, so it may seem a bit ludicrous to predict their demise. However, I believe the walls supporting these ecosystems are starting to crumble. Device operating systems were built to enable the creation of applications that worked on specific devices, and they did an incredible job—perhaps too good—of doing just that. We now have somewhere between 1.5 and 2 million apps available each for iOS and Android and hundreds of thousands of Windows apps. The problem is, the vast majority of people download fewer than a hundred apps and actually use more like 5-10 on a regular basis.

More importantly, most consumers now own and regularly use multiple devices with multiple operating systems and what they really want isn’t a bunch of independent apps, but access to the critical services that they access through their devices. Yes, some of those services are delivered through apps, but many of the biggest software and service providers are altering their strategies to ensure that they can deliver a high quality experience regardless of the app, device, OS, or browser being used to access their application or service. Factor in the increasing range of smart home, smart car, and other connected devices we’ll all own and regularly use in the near future—plus the general app fatigue that I think many consumers now feel—and the whole argument around an app-driven world starts to make a lot less sense.

Instead, from Facebook to Microsoft to Dropbox and hundreds of other cloud service providers, we’re seeing companies build what I call a MetaOS—a platform-like layer of software and services that remains independent of any underlying device platform to deliver the critical capabilities that people are ultimately looking to access. Bigger companies like Facebook and Microsoft are integrating a wide range of services into these MetaOS platforms—particularly around communications and contextual intelligence agents—that will increasingly take on the tasks and roles that other individual applications used to. Want access to media content or documents or (eventually) commerce and financial services? Even better, want a smart assistant to help coordinate your efforts? Log into one of these MetaOS megaservices and your unique digital identity (another key element of a MetaOS) will give you secure access to these services and much more.

Look for Google, Apple, and Amazon, among others, to start making a bigger effort in this area, and expect to see some of these larger companies make key acquisitions to fill in gaps in their MetaOS efforts over the course of the next year. This isn’t something that’s going to happen overnight, but I think 2016 will be the year we start to see more of these strategies take shape.

Prediction 2: Market Maturation Leads to Increased Specialization

The era of products that appeal to a broad cross-section of consumers is coming to an end, and it’s being replaced by a new era where we will see more products tightly focused on specific sets of customers. The key product categories have matured, and it’s hard to find broad new product categories that appeal to a wide range of consumers in the same way that PCs, tablets, and smartphones have.

That’s not to say that we won’t be seeing any exciting or interesting new product categories—after all, something has to be next year’s hoverboard—but they won’t have the same kind of wide-ranging impact that the now more “traditional” smart devices have had. As a result, I think we’ll see a wide variety of sub-categories for smart homes, connected cars, wearables, drones, VR headsets, and consumer robotics that will perhaps sell in the tens or hundreds of thousands instead of the tens of millions that other product categories have enjoyed. The Maker Movement and crowd-funding efforts will go a long way towards helping drive these changes, but I also expect that we’ll see the China/Shenzhen hardware ecosystem start to adjust and focus more efforts on being able to specialize and even personalize devices. The end result will be a wider range of devices that more specifically meet different consumers’ needs. At the same time, I believe it will also be harder to “find the pulse” of where major hardware developments are headed, because they will be moving in so many different directions. The key will be in developing manufacturing technologies that enable greater ability to specialize and that can produce products profitably with lower production runs.

Prediction 3: Apple Reality Check Leads to Major Investment

Apple has had an incredible run at the top of the technology heap for quite some time and, to be clear, I’m not saying that 2016 is the year this will end. What I am saying, however, is that 2016 is the year the company will face some of its biggest challenges, and the year that the “reality distortion field” surrounding the company will start to fade. With two-thirds of its revenues dependent on a single product line (the iPhone) that’s running into the realities of a slowing global smartphone market, the company is going to have to make some big new bets in 2016 in order to retain its market-leading position. I’m not exactly sure what those bets might be (augmented/virtual reality, financial services, automotive, enterprise software, media, or some combination of all of the above), but I’m convinced there are a great many very smart people at Apple who are undoubtedly thinking through what’s next for them. Maintaining the status quo in 2016 doesn’t seem like a great option, so this should be the year they seriously tap into that massive cash reserve of theirs and make some major, game-changing acquisitions.

Prediction 4: The Great Hardware Stall Forces Shift to Software and Services

As most companies besides Apple have already learned, it’s very hard to make money on hardware alone, and those problems will only be exacerbated in 2016. With expected declines in tablets and PCs, the flattening of the smartphone market and only modest overall uptake for wearables and other new hardware categories, we’re nearing the end of a decades-long run of hardware growth. We’ll see pockets of opportunity to be sure—see Prediction 2 above—but companies that have been primarily or even solely dependent on hardware sales are going to have to make some difficult decisions on how they evolve in the era of software and services. As a result, I expect to see more major acquisitions such as the recent Dell/EMC/VMware deal. The challenge, of course, is that many hardware-focused organizations don’t have the in-house skill sets or mindsets to make this transition, so I expect we’ll see very challenging times for some hardware-focused companies in 2016.

Another potential impact from this hardware stall could be an increased desire for hardware companies to become more vertically oriented in order to maximize their opportunity in a shrinking profit pool. This could lead either to acquisitions of key semiconductor vendors and other core component providers by device makers, or vice versa, but either way, hardware-focused companies are going to have to focus on maximizing profitability through reduced costs. After decades of widening the supply chain horizontally, it seems the pendulum is definitely swinging back towards vertical integration.

Prediction 5: Autonomous Car Hype Overshadows Driver Assistance Improvements

The technological advancements in automobiles have been impressive over the last year or two, with the idea of a connected car, and even a partially automated car, quickly moving from science fiction to everyday reality. However, there are still a number of major legislative, social, and technology challenges that need to be overcome before our roadways are filled with self-driving cars. The real advancements that are starting to take place in advanced driver assistance systems (ADAS), such as lane departure warnings, automatic braking, more sophisticated cruise controls, etc., offer some very real safety benefits. But they’re not as sexy as autonomous driving, so much of the press seems to be overlooking them. Even the car vendors seem to be focused more on delivering their vision of autonomous driving than on what we’ll be able to actually purchase and drive over the next five years. In reality, they’re showing the modern-day version of concept cars instead of production cars, but that point is being missed by many. Remember that, unlike the tech industry, the automotive industry regularly builds and displays products it has little or no intention of ever releasing to the world at large.

Improvements in car electronics and intelligence are happening at an impressive pace, and the quality of our in-car experiences is going to change dramatically over the next several years. It’s important to put all the advancements in context, however, and recognize that they’re not all going to occur at the same time. We’re really just now starting to get high-quality connectivity into the latest generation cars, and there are many improvements that we can expect to see in infotainment systems (with or without Apple and Google’s help) over the next few years. As we learned this past year, there are still critical security implications just from those changes, and they won’t all be easily resolved overnight.

Eventually, we will get to truly autonomous cars that regular people can actually buy, but it’s important to understand and appreciate the step-by-step advancements that are being made along the way. These advancements may not be as revolutionary as driverless cars, but they are the news that the automotive industry can realistically deliver on over the next 12 months. Unfortunately, I think the message is going to be lost in the noise of “autonomous automania” this year, leading to thoroughly confused consumers and unrealistic expectations.

Next week I’ll finish off my 2016 predictions with five more on wearables, foldable displays, IoT, connected homes, and VR/AR. In the meantime, have a Happy New Year!

The Smartphone Lifetime Challenge

This problem isn’t a new one for technology products, but it is for smartphones.

After a long, powerful run, the smartphone market is starting to peak. In the US, the market is likely to be flat or even modestly down either this year or next. China, until recently the fastest-growing market, already experienced year-over-year smartphone shipment declines in the first quarter of this year. On a worldwide basis, growth is still expected to occur in 2016, but forecasts have now been reduced to single-digit levels.

Many of the recent news stories on the topic point to market saturation, particularly in regions like the US, Western Europe, and China, where smartphones have become ubiquitous. But the problem is actually much deeper—people are starting to hold onto their phones longer, extending the lifetimes of the devices.

In a recent survey of over 3,000 consumers across five countries (US, UK, Germany, Brazil and China) conducted by TECHnalysis Research, consumers said they expected to replace their smartphones every 1.8 years. On the surface, that seems fine and is probably in line with what people have done in the past. In response to the same question about notebook PCs, people said they expected to replace those devices every 2.5 years.

In reality, however, notebook PC replacements occur closer to 5 years. In other words, people clearly aren’t good at estimating how long they plan to keep a device. To be fair, I don’t think smartphone replacement times will be double the 1.8-year lifecycle they responded with, but I am certain they will be longer. That is the crux of the challenge for the smartphone market.

As we saw first with PCs and then with tablets, once a market reaches the saturation point, future growth becomes nearly completely dependent on refresh rates and lifecycle—how quickly (or not) you choose to upgrade what you have.

[pullquote]As we saw first with PCs and then with tablets, once a market reaches the saturation point, future growth becomes nearly completely dependent on refresh rates and lifecycle.[/pullquote]

In the case of smartphones, there are a number of key developments triggering these longer lifetimes. Here in the US, the gradual disappearance of subsidies from the carriers has been a big factor but, in many other parts of the world, there have never been subsidies and people have always had to pay full price for their smartphones. In those markets, and now in the US as well, the bigger issue has been a slowing down of major innovation as smartphones have matured and reached a level of quality and capability that satisfies most people’s needs.

To be clear, I’m not saying there isn’t any innovation going on in smartphones—there clearly is—but once people get a 5” or larger HD display, a good quality camera, lots of storage, speedy network connections, and access to millions of applications and services, most people think their phone is “good enough.” Larger displays in particular have been a key factor in reaching this point and Apple, though they were late to the party, clearly benefited handsomely once they entered the larger phone market with the iPhone 6 and 6 Plus.

Moving forward, it’s going to be much harder to provide the kind of clearly better innovations that are going to make people feel the need to upgrade. In fact, that’s part of the reason I believe carriers, as well as companies like Apple, are creating and strongly pushing programs that enable you to upgrade on a regular basis. Many of these companies are concerned you won’t otherwise upgrade frequently (or at least at a rate they would prefer). Interestingly, early reports on these upgrade programs suggest a reasonable number of people are signing up for them but crucially, not many people have actually turned in their existing phones for new ones. Apparently, many people see these programs almost as a type of insurance they can use if or when they choose to.

The problem isn’t just hardware, either. As people do more and more with their smartphones, the amount of information on those devices keeps growing tremendously. That, in turn, makes the actual upgrade process from an existing phone to a new one much more complicated than it used to be. Instead of just having to transfer your names and numbers, you now have photos, music, videos, applications, settings, and much more.

Even if you use a number of cloud-based applications, you still have to deal with logging back into all of them, often with passwords you’ve long since forgotten. Toss in the fact that you’ll likely be moving to a new version of an operating system, one that may or may not “like” the versions of the applications you use and could require a whole range of upgrades, and the process is far from friction-free. It’s not as bad as upgrading to a new PC, but having lived through a smartphone upgrade somewhat recently, it’s getting pretty close.

Despite these concerns, the smartphone market is impressively strong, with shipments in the range of 1.5 billion a year. But it seems clear we are entering a new era for the industry and the implications of longer smartphone lifetimes are bound to be far-reaching for device makers, component suppliers, app developers, and more. How companies adjust to this new reality of limited growth will be very interesting to watch.

Podcast: Second Screen Battles, CES Preview

This week Tim Bajarin, Ben Bajarin and Bob O’Donnell discuss the potential challenges and opportunities of delivering related content to a second screen while watching TV, as well as the change to CES’s organizational name and what that means for the things we’re expecting to see at the upcoming show.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

The Battle for the Second Screen

Advances in smart TVs, set-top boxes, and cord-cutting services have driven some important improvements in TV viewing for most consumers. But there’s still one glaring hole when it comes to a truly modern connected TV experience: synchronized second screen content.

The vast majority of device-owning people who watch TV (75-80%, according to a recent TECHnalysis Research survey of over 3,000 consumers across the US, UK, Germany, Brazil and China) admit to using their additional devices, such as notebooks, tablets, and smartphones, while viewing. So the obvious question is, why not try to link the two device experiences together?

In fact, the case for doing so gets even stronger when you look at what people are doing on those second screens while they’re watching TV. On PCs, the top five activities while watching TV are: browsing the web in general, reading personal email, online shopping, browsing the web for content tied to what they’re watching (over 39% said they did this), and reading the news.

On tablets, the top five activities are browsing the web in general, browsing for content tied to what they’re watching (just over 38% of tablet owners responded to this option), reading personal email, reading the news, and social media. Interestingly, for 25-34-year olds, the top activity on a tablet while also watching TV was browsing for TV content-related information.

Even on smartphones, there’s a strong link. Not surprisingly, texting/messaging is the most common smartphone activity while also watching TV, followed by social media, general web browsing, browsing for TV content-related information (36% of respondents), and then reading personal email. Interestingly, texting about content on TV was actually the sixth most common activity on smartphones across all age groups, but was the second most common for 45-54-year olds.

The key takeaway from all this is that nearly 40% of people surveyed are already making the effort to manually tie together what they’re watching on the big screen to the small screen in front of them. Imagine how much higher that percentage could go if there were some mechanism for connecting the devices automatically?[pullquote]Nearly 40% of people surveyed are already making the effort to manually tie together what they’re watching on the big screen to the small screen in front of them. Imagine how much higher that percentage could go if there were some mechanism for connecting the devices automatically?[/pullquote]

Of course, this is easier said than done. There’s no standardized method of transmitting what people are currently watching on their TVs to other devices, although audio analysis technologies (similar in concept to what Shazam does for music) can—in theory, at least—recognize what people are watching by listening to the audio content of the program. Plus, these technologies can do so in a way that is independent of a show’s programmed time slot, and can compensate for DVR recordings, streaming from the web, and other common forms of TV viewing. The problem is, these recognition technologies have been around for a long time—some quick web searches pointed to initial efforts from almost 10 years ago—and none have found mainstream acceptance.
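The recognition idea described above can be illustrated with a deliberately tiny, pure-Python sketch. To be clear, this is not any vendor's actual algorithm: real Shazam-style systems pair time-offset spectral peaks into robust hashes and query large indexed databases, while this toy version simply records the loudest frequency bins in each audio window and counts how many of those signatures reappear in a reference clip.

```python
import math

def spectrum(frame):
    """Naive DFT magnitudes (fine for a toy example with short windows)."""
    n = len(frame)
    mags = []
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def fingerprint(samples, win=64, hop=32, peaks=2):
    """One signature per window: the indices of its loudest frequency bins."""
    sigs = []
    for start in range(0, len(samples) - win + 1, hop):
        mags = spectrum(samples[start:start + win])
        top = sorted(range(len(mags)), key=mags.__getitem__)[-peaks:]
        sigs.append(tuple(sorted(top)))
    return sigs

def match_score(query, reference):
    """Fraction of query windows whose signature appears in the reference."""
    ref = set(reference)
    return sum(s in ref for s in query) / max(len(query), 1)

RATE = 2000  # samples per second (kept tiny so the naive DFT stays fast)

def tone(freq, n=600):
    return [math.sin(2 * math.pi * freq * t / RATE) for t in range(n)]

self_score = match_score(fingerprint(tone(250)), fingerprint(tone(250)))
cross_score = match_score(fingerprint(tone(400)), fingerprint(tone(250)))
print(self_score, cross_score)  # a clip matches itself far better than another
```

Even this crude matcher distinguishes a clip from a different tone; the hard parts in practice are noise robustness, time alignment, and fast lookup against a catalog of millions of programs.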

To my mind, this seems like a great big data/cloud analytics opportunity, so I have to presume work continues to evolve in this area. Even once you can accurately identify what people are watching, however, that still has to be translated into a range of web-based “responses.” At a simple level, taking people to a particular program’s website is a reasonable first step, but there are a whole range of rich opportunities for linking to related content, shopping opportunities, and much more. Plus, with some additional intelligence, there would be an additional level of personalization that would be possible. In other words, if you and I were both watching the same program, we wouldn’t necessarily receive the same links for further browsing and exploration.

Like many of the technologies I expect we’ll see at the upcoming 2016 CES, the concept of a smart, connected TV viewing experience isn’t new. However, that doesn’t mean there isn’t an opportunity to leverage the enormous range of devices and connectivity options now ubiquitous in homes around the world to drive a new set of experiences consumers will truly appreciate. After all, in the tech business, timing is everything.

Podcast: Black Friday, Cyber Monday, Samsung, Yahoo

This week Tim Bajarin, Ben Bajarin, Jan Dawson and Bob O’Donnell analyze the results of Black Friday and Cyber Monday sales, discuss the recent management changes at Samsung’s mobile group, and debate the challenges faced by Yahoo.


Smart Assistants Making Progress…But Slowly

When Adele sings “Hello,” people are clearly listening—to the record-setting tune of over 3.3 million album purchases in a single week. When you greet your smart device with a verbal introduction, however, well, let’s just say the results aren’t quite as clear.

Though they were hailed as “The Next Big Thing” when they were first introduced, Smart Assistants, or Personal Assistants, such as Siri, Google Now, and Cortana, haven’t exactly torn up the charts. Yes, there was a big initial splash—especially after the initial release of Siri on the iPhone 4S in October 2011—but there has been an equivalent, if not even larger, backlash since then. In fact, Siri quickly devolved from something people marveled at to something people joked about.

To be fair, Apple has made significant improvements to Siri since then, and the introduction of both Google Now and Microsoft’s Cortana has raised the bar for the Smart Assistants category as a whole.

In order to get a better sense of where things stand now, I included a few questions on the recent TECHnalysis Research Consumer Device Usage study of 3,012 people across five countries (US, UK, Germany, Brazil and China) to find out who may be using these verbal assistance-based capabilities, what they’re using them for, and what they think about them overall.

The results show that while Smart Assistants are making progress, they still have a long way to go before they really become mainstream tools, particularly with older consumers. The chart below shows which age groups are using which Smart Assistants, and which use none at all.

WW Smart Assistant Usage

©2015, TECHnalysis Research LLC

While there are a number of very interesting points that can be gleaned from this chart, the first thing that stands out is that nearly half of all respondents across all age groups (Total) said they don’t use Smart Assistants at all, and another 9% don’t know what any of the Smart/Personal Assistants actually are. Amongst the 1,024 US respondents, the result was just over 50% saying they don’t use Smart Assistants and 7% who don’t know what they are. (Note that the totals add up to more than 100% because respondents could select all the Smart Assistants they use, with the average number they use being 1.2.) So, for both worldwide (WW) and the US, only 43% of device-owning consumers said they use Smart Assistants. Clearly, this suggests that more work needs to be done to make these voice-based capabilities more compelling (and, likely, more accurate) before a much wider audience will actually use them.
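The arithmetic behind those over-100% column totals can be sketched with the US figures cited above (the shares and the 1.2-selection average are from the survey; the calculation itself is just illustrative):

```python
# Why the usage shares sum past 100%: each of the 43% of users
# selects 1.2 assistants on average, so the assistant-level shares
# total roughly 43% x 1.2 percentage points, on top of the
# non-user and don't-know shares.
users_pct = 43.0          # US respondents using at least one assistant
avg_selections = 1.2      # average assistants selected per user
non_users_pct = 50.0      # US: "don't use Smart Assistants"
dont_know_pct = 7.0       # US: don't know what Smart Assistants are

assistant_level_total = users_pct * avg_selections
column_total = assistant_level_total + non_users_pct + dont_know_pct

print(round(assistant_level_total, 1))  # 51.6
print(round(column_total, 1))           # 108.6 -> exceeds 100%
```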

Breaking the results down by age group, only in the 18-24 and 25-34 age groups are there more people who use Smart Assistants than don’t (in the US results, only 25-34s). Also, there is actually a smaller percentage of 18-24-year old Millennials using Smart Assistants than 25-34-year old Gen X’ers (47% vs. 62% in the US and 52% vs. 57% worldwide). These somewhat surprising numbers suggest that younger device users are not necessarily the most proactive when it comes to using the latest features, or that voice-based control isn’t as interesting, or necessary, for younger users as it is for slightly older users (or, most likely, some combination of these two possibilities). Regardless, it reflects a subtle, but potentially important shift about the expectations that future generations of device owners may have.

Interestingly, the activities for which Smart Assistants were used (by the 43% who used them) were actually very similar across age groups, both worldwide and in the US. The chart below shows the top-level results for the full five-country sample of 1,286 Smart Assistant users.

WW Smart Assistant Activities

©2015, TECHnalysis Research LLC

The top activity by far was requests for information searches, followed by asking for directions. (One difference in the US results is that over 59% of US respondents asked for driving directions, versus just under 53% worldwide.) Asking the assistant to perform actions on the device, such as playing music, launching apps, or adjusting settings, was done by fewer than half of Smart Assistant users. Truly “smart” activities, such as using a Smart Assistant to get suggestions for music, restaurants, and other things, were only done by 1 in 5 Smart Assistant users.[pullquote]It is a bit disappointing to see how few people are using Smart Assistants. It’s also frustrating to see how little they’re being used for truly smart activities.[/pullquote]

Of course, the other big data point from the first chart was the dominance of Google Now usage over other Smart Assistants, particularly among younger users. It turns out Google Now also edged out both Cortana and Siri on an overall satisfaction rating as well, but just barely, with an average of 3.8 (on a scale of 5), versus 3.7 for Cortana, and 3.6 for Siri. In the US, Google Now and Cortana were statistically tied at 3.9, while Siri had a 3.5.

The notion of a voice-based interface and the concept of talking at your devices were more science fiction than science fact for a very long time, so the fact that we’re making any progress in this challenging area is unquestionably a good thing. Still, expectations continue to be very (and probably unrealistically) high for these technologies, so it is a bit disappointing to see how few people are using Smart Assistants. It’s also frustrating to see how little they’re being used for truly smart activities—the most common usages are arguably little more than simple typing replacement.

There’s no question that truly Smart Assistants will be a critical part of future devices and services we use, but it’s also increasingly clear that we still have a long way to go before we reach that digital nirvana.

Consumer Device Purchase Trends

The truth is, it’s a bit of a guessing game—even when you ask people their intentions.

Nevertheless, as we enter the holiday shopping season, trying to figure out what devices consumers plan to purchase next becomes a bit of a sport. There are historical trends to study, Black Friday and Cyber Monday ads to pore over, and gut instincts to trust, but ultimately, no one ever really knows for sure what consumer technology products will be winners and what will be losers in a given timeframe.

Despite the uncertainty, people continue to investigate the topic because it’s kind of fun (in a sick, sort of way, I suppose), and because it is critically important to the future of many companies and many individuals.

In my case, my firm, TECHnalysis Research, recently completed a thorough device usage study of over 3,000 consumers across the US, UK, Germany, Brazil and China. The focus of the online study was to get more insight into how people are really using their core devices (PCs, tablets, smartphones, TVs and wearables), but we also asked what devices individuals planned to purchase over the next year.

Note that this doesn’t necessarily translate into what they plan to purchase this holiday season (and most of the responses were actually collected in late September/early October, before most people have planned out their holiday shopping), but it does give a good overall sense of device priorities.

A summary of results for the entire 3,012-person sample is shown below, followed by the US-only results (1,024 consumers).

WW Device Purchase Plans

US Device Purchase Plans

Not surprisingly, smartphones with 5” and larger screens continue to top the list, as many consumers around the world (and even in the US) have yet to make the transition to these incredibly useful devices. What’s interesting, however, is that several PC form factors did well on both a worldwide and US basis. Notebooks were number two across the total of all five countries and desktops were number four. What was surprising, however, was that in the US, notebooks and desktops actually tied for second.

Despite the PC industry’s recent doldrums, the release of Windows 10 has clearly inspired more interest in the category, and that should lead to a reasonably solid 2016 for consumer PC sales. In addition, there’s a large base of much older PCs in need of upgrades, and that, combined with growing interest in PC gaming (thanks to the popularity of things like the game-streaming platform Twitch), is what likely contributed to the interest in desktop PCs.

Smart TVs were the third most common category for planned purchases on a worldwide basis, but took the fifth spot, just behind non-connected 32”+ size flat panel TVs, in the US. Larger 8”+ tablets were fifth worldwide and sixth in the US.

Looking briefly at the other countries, the top two choices in the UK (in order) were 5”+ smartphones and smart TVs, in Brazil it was notebooks and 5”+ smartphones, and in both China and Germany it was 5”+ smartphones and notebooks.

For the sake of comparison, the same questions were asked in a similar study just over a year ago. The top three responses for the multi-country (WW) group and the top two responses in the US were the same this year as last year. The most noticeable difference was the large jump in desktop PCs, a category that nearly everyone has written off for dead. In addition, there was a modest decline in smaller smartphones, a larger decline for smaller sub-8” tablets, and a modest increase in wearables.

Having conducted decades’ worth of buying-intention surveys, I can tell you with certainty that next year’s reality won’t match what this year’s results show (people exaggerate their buying intentions, change their minds, adjust their priorities, etc.). Nevertheless, these types of questions do provide a sense of consumers’ mindsets, which can lead to important insights into where markets may go.

So, if we see a resurgence in desktop PCs next year, remember: you heard it here first.

Have a wonderful Thanksgiving. Enjoy time with family and friends and, hopefully, away from some of your devices!

Podcast: Screenless Wearables, Streaming Mobile Apps, Black Friday

This week Tim Bajarin, Jan Dawson and Bob O’Donnell discuss screenless wearables, the concept of mobile app streaming, and the impact of Black Friday on tech device sales.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Screenless Wearables and New Means of Interaction

The vast majority of attention focused on wearables has been on devices with an integrated screen, such as the Apple Watch. The general consensus seems to be that screens are necessary to provide the kinds of notifications and other forms of information for which many believe wearables are well suited.

But a new breed of screenless wearables is starting to make its mark, and I believe these devices will become an increasingly important part of the wearables industry, especially for wrist-worn wearables like smart watches. The latest entries come via luxury watch maker Movado, with help from HP, but there are other interesting examples, including the Chronos add-on disc for traditional watches, that are coming soon.

For certain types of information, such as text messages or maps, screens are really the only option. However, at these very early stages of wearables’ evolution, it’s worth asking whether that type of information is really the most appropriate data for wearables to offer.

Screenless wearables are clearly more limited in the type and range of information they can offer, but also have the possibility of being more effective with a simpler set of information. Leveraging haptics technologies that provide physical feedback to your body—and then adding in audio-based information—screenless wearables are likely to drive forward completely new user interface paradigms and methods of interaction. Used in conjunction with smartphones, they can also provide a great deal of detailed visual information to the user/wearer of the device—but in a different way.

As mentioned above, screens are great for delivering certain types of information, but they bring with them a number of issues. First of all, a screen immediately dates a device and, arguably, limits its useful lifetime. Screen technology continues to evolve and improve at such a rapid pace that it’s easy to spot older technology by simply looking at the type and quality of the screen it has. Five years from now, today’s wearable screens will look horribly outdated. Compared to many watches—whose designs can still look fresh even several decades later—that’s a step in the wrong direction.

Another problem with screen-based wearables is that they can be a bit too good at notifying both you and others around you that a new piece of data or notification has come in. When your screen lights up your wrist, it’s announcing to the world (and especially those right around you) that something is going on with you. In some cases, it might even be possible for people to read those screens—even in situations where you would prefer they couldn’t.

For those of us with older eyes, many of whom grew up as part of a generation more accustomed to regularly wearing watches, the opposite problem can also occur. Reading a screen designed to fit on your wrist can be a real challenge for many people.

Yet another limitation with screens on wearables is the impact they have on the battery life of the device. Screen-based devices typically measure their battery life in hours or maybe a day, but screenless devices’ battery lives are measured in weeks.[pullquote]I think the limitations of a screenless device could actually end up driving very creative new solutions for device interactions.[/pullquote]

The trick to making screenless devices effective is going to be smart usage of physical feedback mechanisms and the development of new user interface paradigms. Figuring out how to do things like leverage waveform synthesis to create a range of haptic feedback-based sensations could be incredibly important in creating new types of interaction models. Combine that with some visual cues and a high-quality audio experience—speak to the device and receive useful audio information and/or notifications, for example—and you could set the groundwork for an entirely different way of thinking about and using devices.

Doing this kind of work won’t be easy, but I think the limitations of a screenless device could actually end up driving very creative new solutions for device interactions. In addition, work done in this area should lead to the disappearance of the more obvious aspects of technology, while at the same time delivering more seamless integration of the key capabilities it enables into our everyday lives.

Podcast: iPad Pro, Apple TV, Intel-Tag Heuer Smartwatch

This week Tim Bajarin, Ben Bajarin, Jan Dawson and Bob O’Donnell discuss Apple’s new iPad Pro, the updated Apple TV, and Tag Heuer’s new high-end Android smartwatch, built in conjunction with Intel.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

The Technological Magic of Autodiscovery

Science fiction author Arthur C. Clarke’s famous quote about the relationship between technology and magic—“Any sufficiently advanced technology is indistinguishable from magic”—has proven to be a remarkably prescient commentary on the state of technology and its impact on people for over 50 years now.

The problem today is that people have become too accustomed to the magic, and it’s getting harder and harder to develop new “tricks.” Most consumers’ expectations for what technology can do have become so high that they’re starting to look for and demand things that technology should do.

One critical technology capability that hasn’t been realized yet is pain-free interconnections between multiple devices, networks, and services. Sure, there have been great improvements in physical connectivity solutions like USB3 and Thunderbolt 3, along with important improvements in wireless networks like 802.11ac, Bluetooth LE, and Zigbee. We’ve even seen some important steps forward in interconnect protocols, such as AllJoyn, OIC, and Thread—though there certainly are plenty of challenges (and standards battles) to be solved and resolved there.

Even with those developments, however, the process of making multiple devices in a home (or work environment) seamlessly work together is far from magical. Frankly, it’s often maddening. Configuration challenges, set-up hassles, software installation snafus, networking problems, and more often turn what should be an exciting new experience—firing up a new device or service for the first time—into painful drudgery.

As the number of devices and services that people own and regularly use continues to grow, the problem is just getting worse. Even if you can get a peripheral to work with one device, for example, making it work the way it’s supposed to across multiple devices is far from guaranteed. Similarly, accessing a new service or application from one device might be straightforward, but making it work with several of your devices can be a nightmare.

What we really need is a capability that automatically discovers all the relevant devices in your home (or work environment) and automatically, or “automagically”, not only connects to them, but enables their full capabilities. In other words, not only do the devices become seamlessly and reliably connected—valuable in itself—but any necessary configuration to make it do what it’s supposed to do happens as well—the real prize.[pullquote]What we really need is a capability that automatically discovers all the relevant devices in your home and ‘automagically’ not only connects to them, but enables their full capabilities.[/pullquote]

Technically, this autodiscovery capability is sometimes called enumeration, but even to today’s jaded consumers, it’s bound to feel like magic. Typical (and even relatively technical) consumers and business users have suffered far too long with having to manually configure and enable each device’s capabilities, and as a result, haven’t really been able to appreciate all that their devices could offer. In fact, autodiscovery might give many consumers a new-found appreciation for many of their existing devices and/or services as it could finally enable them to use these devices or services to their fullest extent.
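To make the enumeration idea concrete, here is a minimal sketch of one long-established autodiscovery mechanism: SSDP, the multicast discovery step of UPnP. A client multicasts an M-SEARCH request, and any matching device on the local network replies with its location and capabilities. This is just one illustrative mechanism, not necessarily what the vendors mentioned here are building, and the code only constructs the request rather than sending it over a live network:

```python
# SSDP discovery request builder (UPnP's autodiscovery step).
# Devices that match the search target (ST) reply with their
# location and capabilities -- the "enumeration" described above.
SSDP_ADDR = "239.255.255.250"   # standard SSDP multicast address
SSDP_PORT = 1900                # standard SSDP port

def build_msearch(search_target: str = "ssdp:all", mx: int = 2) -> bytes:
    """Build an SSDP M-SEARCH discovery request."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',   # mandatory, quoted per the UPnP spec
        f"MX: {mx}",              # max seconds a device may wait to reply
        f"ST: {search_target}",   # search target: ssdp:all = all devices
        "", "",                   # request ends with a blank line
    ]
    return "\r\n".join(lines).encode("ascii")

request = build_msearch()
print(request.decode().splitlines()[0])  # M-SEARCH * HTTP/1.1
```

To actually discover devices, you would send `request` to the multicast group over a UDP socket and collect the unicast responses, each of which points to a device description that can then be used for the configuration step.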

Though it might seem like we should have had these capabilities for a while now, it turns out that to do this accurately and do it well is very challenging. Thankfully, numerous vendors, including Intel and Microsoft among others, are working hard to enable these capabilities. Leveraging a combination of new hardware and software functionality, they’re starting to make this magical dream a reality.

As we move into the world of connected homes, connected cars and other smart things, the ability to not only connect but fully leverage the intelligence they offer is going to be essential. It’s simply too much to expect average users, whether in consumer or commercial environments, to be able to configure all these devices, let alone fully take advantage of the potential they offer. Plus, many of the most important benefits that new devices and services are going to offer will be dependent on their ability to connect to other devices—essentially, a network effect.

That’s where the “magic” of technology needs to step in and ensure that the real value of all these new devices doesn’t get lost in the set-up shuffle. Without these kinds of autodiscovery improvements, many of the kinds of innovations these new devices and services hope to offer could end up being for naught.

IOT’s Biggest Impact? Business Models

The more time I spend delving into the world of the Internet of Things (IOT), the more convinced I am that its biggest impact won’t be technological, but business-related. Of course, we still have to get off the crazy hype cycle that IOT is on before this can really happen (and we’re likely still several years away from that). But the changes will come, and they will be big.

To be sure, there are some profound technological developments that will both drive and be derived from the world of IOT. Add intelligence to lots of devices and connect them all together, and the result becomes nearly inevitable.

However, the manner in which business will need to occur—and how companies can react to those new business environments—will likely have an even bigger impact on the tech market landscape. It’s a classic case of disruption, and it’s bound to lead to both the downfall of some current tech giants and the birth/maturation of new ones.

Even now, in the early stages of IOT, we’re starting to see big impacts on business models, especially when compared to traditional IT products. For example, classic TCO (total cost of ownership) and ROI (return on investment) arguments for many critically important IOT products aren’t having the same kind of impact we’ve seen with IT products.

In the past, if you could build a strong-enough TCO/ROI case for a big IT tech expenditure, then you were well on your way to solid sales and success with that product. However, solid TCO/ROI models for IOT-driven solutions like saving energy costs on lighting or HVAC are not translating directly to sales for companies like Enlighted Inc. The company, which sells sensor-driven lighting and occupancy awareness systems for commercial buildings, instead has been forced to build complicated business models that take into account things like future energy credits. Not exactly a simple thing to sell.

The problem for these types of commercial IOT companies is that to really get the benefits of IOT, you often need to do very large (and very expensive) deployments. Smart cities sound like a great concept, for example, until you realize that to truly get the payback for a smart city, you would have to do massive deployments of sensors, gateways, software, services and more. Sure, the results could be great, but there are few cities that have the money to spare to tackle such a project.

This also highlights one of the current fallacies of IOT. While there’s been a great deal of talk about new revenue generation possibilities—such as for smart cities—the fact is, the few real success stories in IOT have been around cost reductions. Now, decreasing operational costs is still a good thing, but it’s very different than increasing revenues, and businesses need to build their business models and business approaches to reflect that reality.

The business model challenges extend beyond complete IOT solutions to devices and components, as well. Semiconductor vendors like ARM, Intel, Qualcomm, Nvidia and others like to talk about the tens or hundreds of billions of connected devices that they can power in this new world of IOT. But, to make that happen, they have to move from a financial model where their revenue could be measured in hundreds or at least tens of dollars per device, down to one measured in single-digit dollars, and soon, cents per device. To put it more bluntly, 50 billion devices with 50¢ of semiconductor content does not a growth business make—compared to today’s 1 billion+ devices at $25-$35 per device.[pullquote]There are still enormous questions about where the real value in a multi-component, multi-vendor IOT solution actually resides.”[/pullquote]
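The back-of-the-envelope math behind that comparison is worth spelling out, using the figures from the paragraph above (taking the low end of the $25-$35 range):

```python
# Semiconductor revenue comparison from the figures above:
# today's device mix vs. a hypothetical 50-billion-unit IOT world.
today_units = 1e9            # 1 billion+ devices per year today
today_content = 25.0         # $25-$35 of silicon per device (low end)
iot_units = 50e9             # 50 billion connected IOT devices
iot_content = 0.50           # 50 cents of silicon per device

today_revenue = today_units * today_content
iot_revenue = iot_units * iot_content

# 50x the unit volume yields roughly the same total revenue.
print(f"today: ${today_revenue/1e9:.0f}B, IOT: ${iot_revenue/1e9:.0f}B")
```

In other words, even under optimistic unit forecasts, the IOT silicon opportunity is roughly flat against today’s business, which is exactly why the business model, and not the technology, is the hard part.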

Network vendors like Cisco who tout the enormous amounts of network traffic that these billions of devices will generate won’t necessarily fare much better. Semiconductor improvements and practical realities are going to demand that more of the analytics and data analysis that are a core part of the IOT experience happen at an intelligent endpoint, and don’t need to, nor should (for security reasons), traverse a network.

On top of all this, there are still enormous questions about where the real value in a multi-component, multi-vendor IOT solution actually resides. Is it with the systems integrator, the software maker, the carriers, the network providers, the device makers, or some combination thereof? Divide the pie into too many pieces, and the shares start to get pretty small. Plus, it’s not clear that companies (or individuals, in the case of consumer-focused IOT) will be willing to pay for any additional hardware. Finally, each IOT project will have to address who gets access to the data, who can monetize it, and how (if at all).

Answers to these kinds of difficult questions—like the fully realized potential of IOT—are still many years away. However, they do start to suggest that companies that have been in more traditional hardware or software businesses will have to start rethinking their revenue generation methods. It’s likely that they’ll have to create ongoing services that somehow leverage the unique characteristics of their traditional offerings and start charging for those services instead of their traditional products. This transition won’t be easy, but it typifies the kind of challenges that IOT will be bringing to the tech market overall.

Mobility Isn’t Just a Technology, It’s a Mindset

Ready or not, here it comes.

That’s essentially the position businesses find themselves in with regard to mobile technology and its influence on not just IT but all aspects of their organizations. The confluence of smartphones, tablets and cloud-based computing services, along with a growing percentage of millennial (Gen Y) employees, is leading to a fundamental shift in how businesses are contemplating all things mobile.

There’s a growing sense of inevitability about this mobility trend. Everyone knows it’s going to happen. However, on the map to a mobile-optimized organization, not only is the route unclear, it’s also not at all obvious what the final destination is. This makes navigating the path from the present to an ill-defined future a particularly challenging task.

Thankfully, there are some relatively obvious—though still challenging—goals along the way. Workplace and work device flexibility, for example, are waypoints along the road to a mobile-savvy enterprise toward which many organizations are now striving. Employees, particularly younger ones, are looking for the freedom to be able to do their work on any device, in any location. As simple as that sounds, however, implementing the infrastructure to enable this kind of device and location independence can be difficult, expensive, and often requires some fundamental changes to core IT policies, structure, capabilities, and more.

As a result, many IT organizations take more of a Henry Ford approach to device independence: employees can use whatever device they want, as long as it’s a company-purchased Windows PC that’s actively managed by IT and uses company-purchased or approved connectivity options. Okay, well, maybe not that bad, but it’s probably a lot closer to reality than many IT leaders are willing to admit.

Even if companies are actively embracing BYOD (Bring Your Own Device) and/or other device choice policies, that doesn’t mean they’ve really embraced mobility. In fact, device choice is just the first step.

The real impact of mobility only begins to take hold when companies start rethinking processes, procedures, services, activities, expectations, measurement methods, and many other functions at the very core of how businesses operate. To do that, IT needs to start reworking existing applications or, even better, building new custom mobile applications which take into account a broader mobility mentality.

Despite a few high-profile efforts to do just that (think Apple/IBM), the reality is only a small percentage of companies have done anything more than a few experiments in the area of custom mobile applications. Plus, many of those efforts are actually only being done on behalf of senior management. According to a survey of IT professionals conducted by TECHnalysis Research, while most custom PC applications are deployed to all employees (over 70%), custom tablet or smartphone applications are designed more for senior executives (50%) with only 40% of these mobile apps being deployed to the full range of employees.

However, even the availability of mobile devices and mobile applications does not mean a company has completely embraced mobility. At its core, the move to mobility requires a change in the way companies think about data and how they access, use, and secure it. Mobile devices are forcing companies to deal with these key issues.

Some companies have run into issues with mobility because they haven’t thought through these implications. Instead, they’ve discovered only dipping their toes into the tepid waters of the mobile pool can actually cause more harm than good. Security breaches, lost data, frustrated workers, IT ill-will, and lots of other bad results can befall organizations that don’t fully embrace the mobile mindset and all it entails.[pullquote]Mobility changes everything in business, but it doesn’t replace everything.[/pullquote]

At the same time, it’s easy to fall into the opposite trap of thinking mobility supplants everything. Despite its importance, mobility doesn’t and shouldn’t come at the expense of other non-mobile devices and applications. In other words, while mobility changes everything, it doesn’t replace everything. Traditional PCs and custom enterprise apps aren’t going away just because you add mobility. Instead, organizations need to think about their mobile devices and mobile applications as “companions” to their existing devices, by using the devices and applications best suited to each task and figuring out ways to make them work together.

It’s not an easy process, to be sure. But, if companies really want to innovate, they also need to think creatively about how they integrate mobility into their business mindset.

(If you’d like to learn more, you can also check out the webinar I did on the same topic: Harvard Business Review Webinar: Mobility In the Enterprise, Proactive or Reactive?)