Top 10 Tech Predictions for 2017

Predicting the future is more art than science, yet it’s always an interesting exercise to engage in as a new year comes upon us. So with the close of what was a difficult, though interesting year in the technology business, here’s a look at my predictions for the top 10 tech developments of 2017.

Prediction 1: Device Categories Start to Disappear

One of the key metrics for the relative health of the tech industry has always been the measurement of unit shipments and/or revenues for various categories of hardware-based tech devices. From PCs, tablets and smartphones, through smartwatches, smart TVs and head-mounted displays, there’s been a decades-long obsession with counting the numbers and drawing conclusions from the results. The problem is, the lines between these categories have been getting murkier and more difficult to distinguish for years, making once well-defined groupings seem increasingly arbitrary.

In 2017, I expect the lines between product categories to become even blurrier. If, for example, vendors build hand-held devices running desktop operating systems that can also snap into or serve as the primary interface for a connected car and/or a smart home system, what would you call that and how would you count it? With increasing options for high-speed wireless connectivity to accessories and other computing devices, combined with OS-independent tech services, bots, and other new types of software interaction models, everything is changing.

Even what first appear as fairly traditional devices are going to start being used and thought of in very different ways. The net result is that the possibility for completely blowing up traditional categorizations will become real in the new year. Because of that, it’s going to be time to start having conversations on redefining how the industry thinks about measuring, sizing, and assessing its health moving forward.

Prediction 2: VR/AR Hardware Surpasses Wearables

Though it’s still early days for head-mounted virtual reality (VR) and augmented reality (AR) products, the interest and excitement about these types of devices is palpable. Yes, the technologies need to improve, prices need to decrease, and the range of software options needs to widen, but people who have had the opportunity to spend some time with a quality system from the likes of HTC, Oculus, or Sony are nearly universally convinced that they’ve witnessed and partaken in the future. From kids playing games to older adults exploring the globe, the range of experiences is growing, and the level of interest is starting to bubble up past enthusiasts into the mainstream.

Wearables, on the other hand, continue to face lackluster demand from most consumers, even after years of mainstream exposure. Sure, there are some bright spots and 2017 is bound to bring some interesting new wearable options, particularly around smart, connected earbuds (or “hearables” as some have dubbed them). Overall, though, the universal appeal for wearables just isn’t there. In fact, it increasingly looks like smartwatches and other widely hyped wearables are already on the decline.

As a result, I expect revenues for virtual reality and augmented reality-based hardware devices (and accessories) will surpass revenues for the wearables market in 2017. While a clear accounting is certainly challenging (see Prediction 1), we can expect about $4 billion worldwide for AR/VR hardware versus $3 billion for wearables. Because of lower prices per unit for fitness-focused wearables, the unit shipments for wearables will still be higher, but from a business perspective, it’s clear that AR/VR will steal the spotlight from wearables in 2017.

Prediction 3: Mobile App Installs Will Decline as Tech Services Grow

The incredible growth enabler and platform driver that mobile applications have proven to be over most of the last decade makes it hard to imagine a time when they won’t be that relevant, but I believe 2017 will mark the beginning of that unfathomable era. The reasons are many: worldwide smartphone growth has stalled, app stores have become bloated and difficult to navigate, and, most importantly, the general excitement level about mobile applications has dropped to nearly zero. Study after study has shown that the vast majority of apps that get downloaded rarely, if ever, get used, and most people consistently rely on a tiny handful of apps.

Against that depressing backdrop, let’s also not forget that the platform wars are over and lots of people won, which means, really, that nobody won. It’s much more important for companies who previously focused on applications to offer a service that can be used across multiple platforms and multiple devices. Sure, they may still make applications, but those applications are just front-ends and entry points for the real focus of their business: a cloud-based service.

Popular subscription-based tech services such as Netflix and Spotify are certainly both great examples and beneficiaries of this kind of move, but I expect to see many different flavors of services grow stronger in 2017. From new types of bot-based software to “invisible” voice-driven interaction models, the types of services that we spend much of our 2017 computing time on will be very different from those of the mobile apps era.

Prediction 4: Autonomous Drive Slows, But Assisted Driving Soars

There’s no question that autonomous driving is going to be a critical trend for the tech industry and automotive players in 2017. But as the reality of the technical, regulatory, and standards-based challenges of creating truly autonomous cars becomes more obvious in the new year, there’s also no question that timelines for these kinds of automobiles will be extended. Already, some of the early predictions for the end of the decade or 2020 have been moved to 2021, and I predict we’ll see several more of these delays in the new year.

This doesn’t mean a lot of companies—both mainstream and startup—won’t be working on getting these cars out sooner. They certainly will, and we should hear an avalanche of new announcements in the autonomous driving field throughout the year from component makers, Tier 1 suppliers, traditional tech companies, auto makers and more. Still, this is very hard stuff (both technically and legally), and technology that potentially places people’s lives at stake is a lot different from what’s required to turn out a new gadget. It cannot be, nor should it be, released at the same pace that we’ve come to expect from other consumer devices. If, God forbid, we see additional fatalities in the new year that stem from faulty autonomous driving features, the delays in deployment could get much worse, especially if they happen via a ridesharing service or another situation where ultimate liability isn’t clear.

In spite of these concerns, however, I am convinced that we will see some critical new advancements in the slightly less sexy, but still incredibly important, field of assisted driving technologies. Automatic braking, car-assisted crash avoidance and other practical assisted driving benefits that can leverage the same kind of hardware and artificial intelligence (AI)-based software that’s being touted for fully autonomous driving will likely have a much more realistic impact in 2017. Truth be told, findings from a TECHnalysis Research study show that most consumers are more interested in these incremental enhancements anyway, so this could (and should) be a case where the current technologies actually match the market’s real needs.

Prediction 5: Smart Home Products Consolidate

Most of the early discussions around the smart home market have focused on standalone products, designed to perform a specific function and meant to be installed by the homeowner or tenant. The Nest thermostat, August smart lock, and various security camera systems are classic examples of this. Individually, many of these products work just fine, but as interested consumers start to piece together different elements into a more complete smart home system, problems quickly become apparent. The bewildering array of different technical standards, platforms, connectivity requirements and more often turns what should be a fun, productive experience into a nightmare. Unfortunately, the issue shows few signs of getting better for most people (though Prediction 6 offers one potential solution).

Despite these concerns, there is growing interest in several areas related to smart homes including distributed audio systems (a la Sonos), WiFi extenders and other mesh networking products, and smart speakers, such as Amazon’s Echo. Again, connecting all these products can be an issue, but so are more basic concerns such as physical space, additional power adapters/outlets, and all the other aspects of owning lots of individual devices.

Because of these issues, I predict we’ll start to see new “converged” versions of these products that combine a lot of functionality in 2017. Imagine a device, for example, that is a high-quality connected audio speaker, WiFi extender and smart speaker all in one. Not only will these ease the setup and reduce the physical requirements of multiple smart home products, they should provide the kind of additional capabilities that the smart home category needs to start appealing to a wider audience.

Another possibility (and something that’s likely to occur simultaneously anyway) is that the DIY market for smart home products stalls out and any potential growth gets shifted over to service providers like AT&T, Comcast, Vivint and others who offer completely integrated smart home systems. Not only do these services now incorporate several of the most popular individual smart home items, they’ve been tested to work together and give consumers a single place to go for support.

Prediction 6: Amazon Echo Becomes De Facto Gateway for Smart Homes

As mentioned in Prediction 5, one of the biggest challenges facing the smart home market is the incredibly confusing set of different standards, platforms, and protocols that need to be dealt with in order to make multiple smart home products work together. Since it’s extremely unlikely that any of these battles will be resolved by companies giving up on their own efforts and working with others (as logical and user-friendly as that would be), the only realistic scenario is if one device becomes a de facto standard.

As luck would have it, the Amazon Echo seems to have earned itself that de facto linchpin role in the modern smart home. Though the Echo and its siblings are expected to see a great deal of competition in 2017, the device’s overall capabilities, in conjunction with the open-ended Skills platform that Amazon created for it, are proving to be a winning combination. Most importantly, the Echo’s Smart Home Skill API is becoming the center point through which many other smart home devices can work together. In essence, this is turning the Echo into the key gateway device in the home, allowing it to “translate” between devices that might not otherwise be able to work together easily.
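
To make that gateway role concrete, here’s a minimal sketch of what the server-side code behind a Smart Home Skill can look like. The directive names, payload fields, and the turn_on_device() helper below are illustrative stand-ins rather than Amazon’s exact schema, so treat this as the shape of the idea, not a working skill.

```python
# A minimal, illustrative AWS Lambda handler for an Alexa Smart Home Skill.
# Directive and payload names are stand-ins; consult Amazon's Smart Home
# Skill API documentation for the exact schema.

def lambda_handler(event, context):
    header = event.get("header", {})
    if header.get("name") == "TurnOnRequest":
        appliance_id = event["payload"]["appliance"]["applianceId"]
        turn_on_device(appliance_id)
        return {"header": dict(header, name="TurnOnConfirmation"),
                "payload": {}}
    # ...handle device discovery, TurnOffRequest, and other directives here

def turn_on_device(appliance_id):
    # This is where the "translation" happens: the skill converts the
    # Alexa directive into whatever protocol the target device actually
    # speaks (a hub's local API, a cloud-to-cloud call, etc.).
    print("turning on", appliance_id)
```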

While other devices and dedicated gateways have tried to offer these capabilities, the ongoing success and interest in the Echo (and any ensuing variants) will likely make it the critical component in smart homes for 2017.

Prediction 7: Large Scale IoT Projects Slow, But Small Projects Explode

The Internet of Things (IoT) is all the buzz in large businesses today, with lots of companies spending a great deal of time and money to try to cash in on the hot new trend. As a number of companies have started to discover, however, the reality of IoT isn’t nearly as glamorous as the hype. Not only do many IoT projects require bringing together disparate parts of an organization that don’t always like, or trust, each other (notably, IT and operations), but measuring the “success” of these projects can be even harder than the project itself.

On top of that, many IoT projects are seen as a critical part of larger business transformations, a designation that nearly guarantees their failure. Even if they aren’t part of a major transformation, they still face the difficulty of making sense of the enormous amount of data that instrumenting the physical world (a fancy way of saying collecting lots of sensor data) entails. They may generate big data, but that certainly doesn’t always translate to big value. Even though analytics tools are improving, sometimes it’s just the simple findings that make the biggest difference.

For this reason, the potential for IoT amongst small or even tiny businesses is even larger. While data scientists may be required for big projects at big companies, a little common sense in conjunction with just a few of the right data points can make an enormous difference for these small companies. Given this opportunity, I expect a wide range of simple IoT solutions focused on traditional businesses like agriculture and small-scale manufacturing to make a big impact in 2017.
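
To make that concrete, here’s a hedged sketch of just how little code a “small” IoT solution can involve. The sensor read is simulated and the threshold is a hypothetical value; a real deployment would swap in an actual sensor interface and an SMS or email gateway.

```python
# One sensor, one threshold, one actionable alert: the "common sense plus
# a few of the right data points" pattern described above.
import random
import time

MOISTURE_THRESHOLD = 0.25  # fraction of saturation; illustrative only


def read_soil_moisture() -> float:
    # Stand-in for a real sensor read (e.g., via an ADC or a LoRa link).
    return random.uniform(0.0, 1.0)


def send_alert(message: str) -> None:
    # A real system might send an SMS or email here.
    print("ALERT:", message)


for _ in range(3):  # a real deployment would loop indefinitely
    level = read_soil_moisture()
    if level < MOISTURE_THRESHOLD:
        send_alert(f"Field moisture at {level:.0%}; consider irrigating")
    time.sleep(1)  # in practice, a reading every 15 minutes is plenty
```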

Prediction 8: AI-Based Bots Move to the Mainstream

It’s certainly easy to predict that Artificial Intelligence (AI) and Deep Learning will have a major impact on the tech market in 2017, but it’s not necessarily easy to know exactly where the biggest benefits from these technologies will occur. The clear early leaders are applications involving image recognition and processing (often called machine vision), which includes everything from populating names onto photos posted to social media, to assisted and autonomous driving features in connected cars.

Another area of major development is with natural language processing, which is used to analyze audio and recognize and respond to spoken words. Exciting, practical applications of deep learning applied to audio and language include automated, real-time translation services which can allow people who speak different languages to communicate with each other using their own, familiar native tongue.

Natural language processing algorithms are also essential elements for chatbots and other types of automated assistance systems that are bound to get significantly more popular in 2017, particularly in the US (which is a bit behind China in this area). From customer assistance and technical support agents, through more intelligent personal assistants that move with you from device to device, expect to have a lot more interactions with AI-driven bots in 2017.
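
For a sense of the basic flow inside such a bot, here’s a toy sketch of an intent-matching layer. Production chatbots use trained natural language models rather than keyword rules, and every intent and response below is invented purely for illustration.

```python
# A toy request -> intent -> response pipeline for a support bot.
INTENTS = {
    "reset_password": ["password", "locked out", "can't log in"],
    "billing": ["invoice", "charge", "refund"],
}

RESPONSES = {
    "reset_password": "I can help with that; a reset link is on its way.",
    "billing": "Let me pull up your billing history.",
    None: "Let me connect you with a human agent.",  # fallback
}


def classify(utterance: str):
    # Real systems replace this keyword scan with a trained NLP model.
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return None


print(RESPONSES[classify("I'm locked out of my account")])
```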

Prediction 9: Non-Gaming Applications for AR and VR Grow Faster than Gaming

Though much of the early attention in the AR/VR market has rightfully been focused on gaming, one of the main reasons I expect to see a healthy AR/VR hardware environment in the new year is because of the non-gaming applications I believe will be released in 2017. The Google Earth experience for the HTC Vive gave us an early inkling of the possibilities, but it’s clear that educational, training, travel and experiential applications for these devices offer potential for widespread appeal beyond the strong, but still limited, hard-core gaming market.

Development tools for non-gaming AR and VR applications are still in their infancy, so this prediction might take two years to completely play itself out. However, I’m convinced that just as gaming plays a critical but not overwhelming role in the usage of smartphones, PCs and other computing devices, so too will it play an important but not primary role for AR and VR devices. Also, in the near term, the non-gaming portion of AR and VR applications is quite small, so from a growth perspective, it should be relatively easy for these types of both consumer and business-focused applications to grow at a faster pace than gaming apps this year.

Prediction 10: Tech Firms Place More Emphasis on Non-Tech Fields

While many in the tech industry have great trepidation about working under a Trump administration for the next several years, the incoming president’s impact could lead to some surprisingly different thinking and focus in the tech industry. Most importantly, if the early chatter about improvements to infrastructure and enhancements to average citizens’ day-to-day lives comes to pass, I predict we will see more tech companies making focused efforts to apply their technologies to non-tech fields, including agriculture, fishing, construction, manufacturing, and many more.

While the projects may not be as big, as sexy or as exciting as building the coolest new gadgets, the collective potential benefits could prove to be much greater over time. Whether it’s through simple IoT-based initiatives or other kinds of clever applications of existing or new technologies, the opportunity for the tech industry to help drive the greater good is very real. It’s also something I hope they take seriously. Practical technologies that could improve crop yields by even a few percent, not just at a handful of the richest farms but at all the small farms in the US, could have an enormously positive impact on the US economy, as well as on the general population’s view of the tech industry.

Some of these types of efforts are already underway with smaller agro tech firms, but I expect more partnerships or endeavors from bigger firms in 2017.

Cars as Client Devices

It’s no secret that an enormous amount of advanced tech hardware is making its way into today’s automobiles. Whether it’s for assisted or autonomous driving features, advanced infotainment systems or simple safety enhancements, modern cars are getting a big injection of cool new hardware.

Software, on the other hand, has been a bit more muted. Oh sure, there’s the user interface (UI) on the ever-expanding main entertainment and navigation display, but the truth is there are a lot more software efforts going on beneath the hood (literally in this case). In fact, at the upcoming CES show in Las Vegas, I expect to see several announcements related to car-based software and services that turn your automobile into a nearly full-fledged client computing device.

Traditionally, auto-based services were called telematics, but early versions were limited to basic functions such as what’s been found in GM’s OnStar: a separate telephony service for roadside assistance and beaming back car diagnostic data to the auto company’s headquarters.

Today, there’s an enormous range of software built into cars, from middleware and real-time operating systems (RTOSes such as BlackBerry’s QNX or Intel’s Wind River) to artificial intelligence-based inference engines, and beyond.

In fact, there can be over 10 million lines of code in a modern luxury car, working across all the car’s various computing elements, from 150+ ECUs (electronic control units, each of which typically runs a particular auto subsystem, such as heating and air conditioning, or portions of the engine), to more traditional CPUs and GPUs from the likes of nVidia, Intel, Qualcomm and others.
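
For the curious, here’s a rough sketch of what listening in on the network that ties those ECUs together can look like in software, using the third-party python-can library. The channel and interface names are assumptions that depend entirely on the hardware attached.

```python
# Print a handful of raw frames from a car's CAN bus.
# Requires the third-party python-can package (pip install python-can)
# and a suitable CAN adapter; "can0"/"socketcan" are Linux-style examples.
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

for _ in range(10):
    message = bus.recv(timeout=1.0)  # one CAN frame, or None on timeout
    if message is not None:
        print(f"id=0x{message.arbitration_id:03x} data={message.data.hex()}")
```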

While much of that software will never be seen or directly interacted with by individuals—it’s part of a car’s overall controls—more and more of it is starting to surface through the car’s driver and passenger-focused displays. Many assisted or autonomous driving systems, for example, do provide some visual cues or messages about what they’re doing, though most of their work happens in the background automatically.

In the case of entertainment interfaces, of course, we’ve started to see the implementation of Apple’s CarPlay and Google’s Android Auto. In neither case, however, does Apple or Google provide the entire user interface for the vehicle, for two key reasons. First, carmakers are very reluctant to give up the entire user experience to an outside brand. They want and need to “own” the relationship with their customers by making sure it’s a GM experience or a Ford experience or a Porsche experience, etc. Second, neither Apple nor Google has access to the vast majority of software running on the automobile because of the hardened walls between subsystems. As a result, they can only interact with a tiny fraction of the software running in a vehicle. (Waymo, the recent autonomous car spinout from Google, and Apple’s rumored Titan car project are likely working on many pieces of this more invisible software, among other things.)

In the near term, however, the next set of auto-related software developments are likely to be extensions and additions to popular software and services that get more fully integrated into cars and turn them into first-class client devices. Now that PC and mobile phone-like hardware is being embedded into cars, along with cellular connectivity and larger, high-resolution displays, it just makes sense to do so.

At a basic level, think about entertainment services like Spotify, Netflix and others coming natively to cars, or imagine tighter integration with good ol’ PIM (Personal Information Management) software, such as contacts, calendars, etc. Incorporating things like meeting updates, conference call dial-in information, and other elements directly into your car instead of via a smartphone app could prove to be very beneficial. Not only would it improve the convenience and integration of using them in your car, it could have a dramatically positive impact on safety. In addition, if texting and other forms of messaging are directly integrated into car displays, for example (and, more importantly, can therefore be automatically disabled based on the car’s speed), that could do more to save lives than any autonomous driving system.
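
The lockout logic itself is trivial; the hard part is the integration with the car’s systems. Here’s a minimal sketch, where get_vehicle_speed_kph() is a hypothetical hook into the vehicle’s own speed data.

```python
# Gate the messaging UI on vehicle speed.
SPEED_LOCKOUT_KPH = 8  # allow messaging only when effectively stopped


def get_vehicle_speed_kph() -> float:
    # Hypothetical hook; a real implementation would read this from the
    # vehicle's data bus rather than returning a constant.
    return 0.0


def messaging_enabled() -> bool:
    return get_vehicle_speed_kph() < SPEED_LOCKOUT_KPH


print("Messaging unlocked" if messaging_enabled()
      else "Messaging disabled while driving")
```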

Note that because many of these capabilities will be delivered as services, the car doesn’t need to run a full mobile OS, and the apps won’t have to be delivered in a native OS format. An HTML5-capable browser is likely all that’s necessary, making it easier for car vendors and Tier 1 OEMs to incorporate these software features into their designs, as well as increasing the useful lifetime of the car’s technology.

Looking forward, it’s clear that we’re still at the very early stages of bringing significantly more intelligence and capabilities into our cars. Progress is being made, but when you start thinking more deeply about the potential, the full promise of smart cars is yet to come.

Podcast: Autonomous Cars, Uber, Waymo, Facebook Fake News

In this week’s Tech.pinions podcast Jan Dawson and Bob O’Donnell chat about developments in autonomous and connected cars, with a specific focus on Uber’s SF experiments, Google spinning out Waymo, and previewing car-related news at CES. In addition, they cover Facebook’s efforts to attack the fake news problem.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

The Workplace of the Future

To no one’s surprise, how and where we work matters to people. Not just the company you work for, but the physical environment, the culture, the people, and the tools you use to get things done.

Intuitively, that’s obvious of course, but when you start to dig into exactly what it is that people do at work, where they work and what they use, you start to see a fascinating picture of current workplaces—as well as where they’re headed.

That was exactly the intention of the latest TECHnalysis Research study—fielded to over 1,000 US employees across a range of industries during the past week—and I’m pleased to report that the results do not disappoint.

At a high level, people only spend about 46% of their average 43-hour work week in a traditional office or cubicle environment. We’ve been witnessing a shift away from those workspaces for a long time, but the move is likely to accelerate as most workers believe that the percentage will drop to just under 41% in two years.

What’s surprising, however, is that the biggest increase won’t be coming from trendy new alternative workspaces or other non-traditional worksites. Instead, it’s working at home. Toiling in your PJs (or whatever attire you choose to wear at home) is expected to jump from 11% of the total work week to 16% in two years.

Directly related is the growing importance of work time flexibility. In fact, when asked to rank the importance of a company’s tech initiatives for keeping employees happy and productive at work, the number one choice among eight alternatives was work time flexibility.

Not surprisingly, when people were asked in a separate question about the benefits of working at home, the top reason they cited was—you guessed it—work time flexibility.

Clearly, the move to mobile computing devices, more cloud-based applications, and internal IT support for enabling work from remote locations has had a large impact on employees’ expectations about how, when, and where they can work, and, well, there’s no place like home.

From a collaboration perspective, there have been a number of advancements around both the software and hardware being used in various workplaces. As expected, usage of these various tools is mixed, and interest in them can vary quite a bit by age. At a basic level, for example, email is still the top means of collaboration with both co-workers (39% of total communications) and outside contacts (34%), with phone calls second (25% and 32%, respectively) and texting third (12% for both groups). Among 18- to 24-year-old millennial workers at medium-sized companies (100-999 employees), however, social media with outside contacts accounted for 12% of all communications versus only 6% for the total sample.

Collaborative messaging tools like Slack and Facebook’s Workplace still showed only modest usage at 4% overall, but again, 18- to 24-year-old millennials at medium-sized companies nearly doubled that usage at about 7.5%. More importantly, while one-third of total respondents said their companies offered a persistent chat tool like Slack, another 31% said they wished their companies did.

From a hardware perspective, 32% of employees said their companies had large interactive screens in their conference rooms (a la Microsoft’s Surface Hub, which the company just announced was being well received in the market) and another 31% are hoping to see something like that installed at their workplaces sometime soon.

Interestingly, the videoconferencing aspect of these and other devices also drew some distinct, age-based responses. About 25% of total respondents said they used video the vast majority of the time when making an audioconference call, but that jumped to nearly 40% for younger workers (under 44) at medium-sized companies. The group that found video more effective during meetings was actually the 35-44 group, in both medium- and large-sized companies. In each case, the Gen X and Gen Y’ers in that group found it more useful than both younger and older employees.

Finally, one insight from the study highlights an IoT opportunity in today’s workplace. A technology that was widely requested was an app or service that would allow workers to individually adjust their personal work area’s temperature and airflow. While that could be challenging to achieve, there’s clearly an interest for companies willing to tackle it.

Today’s workspaces are in an interesting state of flux, with a lot of attention being placed on attracting and retaining younger workers. While data from this study clearly supports some of those efforts, the results also show that many of the more traditional methods of communication and collaboration still play a dominant role—even with younger workers. As companies move to evolve their workplaces and vendors adjust to create products and services for these new environments, it’s important to keep these basics in mind.

Podcast: Microsoft WinHEC, Amazon Go, Apple Movies

In this week’s Tech.pinions podcast Ben Bajarin and Bob O’Donnell chat about the announcements from Microsoft’s WinHEC conference, the launch of Amazon’s innovative Go retail store, and Apple’s interest in getting movies on iTunes shortly after their theatrical releases.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Multipurpose, Multifunction Tech Devices to Drive Future Growth

Do more with less. It’s a great personal mantra, a worthwhile business and societal goal, and increasingly, likely to be a necessity for companies who are designing and building tech products.

The challenge is that we’re being overrun by a huge number of more specialized, limited function devices. Growing interest in “smart devices” and the Internet of Things (IoT) is driving an explosion of creativity that, in turn, has brought us to our present state. While it’s great to have access to this amazing array of gadgets (with a lot more coming, based on the pre-CES PR hype that’s currently in full throttle mode), it’s also becoming increasingly clear that we’re hitting gadget overload.

As the cost of critical tech components comes down and we race towards an all-digital world, the possibilities for creating new options to add to our already large collection of computing, digital entertainment, and other types of devices are nearly limitless. There really are some amazing things that can be built with the tools we have (and many more that have yet to be conceived).

But we’re quickly starting to run into several major challenges. First, we have the obvious—though not often widely acknowledged—problem of time; as in, we’re already busy with what we have, so how can we possibly add more? In addition, there’s the challenge of complexity. The more devices we have, the more challenging it becomes to incorporate them into our daily lives.

At a functional level, the biggest problem is that many new devices are simply adding to or improving upon a single capability instead of consolidating multiple capabilities into a single device.

For example, in the smart home arena, we’ve seen an array of different networked products designed for use around our homes. We can use Sonos or something similar for distributed audio, new WiFi mesh products like Eero to improve the quality of our home networks, and smart speakers like Amazon’s Echo or Google’s Home to perform a variety of different personal assistant or home automation tasks.

Each of these products was designed to offer a best-in-class experience for a set of specific functions, and all of them are being well received. However, they each work on their own.

Moving forward, what’s going to make a lot more sense is an intelligent system that could combine these capabilities into a single product. Even though WiFi and other wireless technologies have removed the hassles of network connections, all of these various devices still need power, still take up physical space, still must be configured separately, and ultimately, end up creating a very complex environment that’s challenging even for tech enthusiasts to manage.

A single combined product that offers multiple purposes and/or functions would end up being a much better choice for most consumers who are only just starting to investigate these kinds of smart home products.

It’s not only a problem for smart home products, either. With mobile devices, where people often purchase Bluetooth speakers, extra batteries, chargers and stands, it also makes sense to start merging the capabilities of several individual products into a single add-on to make people’s lives just a bit easier.

Of course, making good multifunction products is a challenge. In fact, one of the main reasons many companies have avoided trying them is that they’ve been concerned that if one element of a combined solution is considered subpar, the entire product suffers. That’s certainly still true, but as our lives become increasingly cluttered with digital gadgets, there are going to be strong arguments for taking the plunge—it’s going to become a practical necessity.

Another issue is that it can be difficult for smaller, more innovative companies to succeed in a multipurpose device world. It typically takes the combined skill sets and intellectual property (IP) of larger companies to successfully merge multiple technologies into a single product. Even for larger companies that can be a challenge. The real trick is to look past a single idea—no matter how clever or great it is—and to build a more comprehensive system that integrates several technologies into one.

Certainly, not every scenario nor every product is well suited for multifunction or multipurpose applications. However, if we don’t start seeing more products with combined features and capabilities then I’m afraid that, instead of doing more with less, we’ll end up doing less with more.

The Magic Inside Your Devices

Sometimes, it’s what’s inside that counts more than what we can see on the outside. That’s certainly the case with people, and increasingly, I think, it’s going to be the case with tech devices.

Many of the most impressive breakthroughs in our favorite gadgets are driven almost completely by critical new advances in component technologies: chips and other semiconductors, displays, sensors, and much more. Just this week, in fact, there were reports that Apple might offer a curved display on next year’s iPhone, and that HP Enterprise had debuted the first working prototype of a dramatically different type of computing device that it dubs The Machine.

In both cases, it’s critical component technologies that are enabling these potentially breakthrough end products. In the iPhone’s case, it would be because of bendable OLED displays being produced by companies such as LG Display and Samsung Electronics’ display division. For The Machine, HP’s own new memory and optical interconnect chips are the key enablers for computing performance that’s touted to be as much as 8,000 times faster than today’s offerings.

Long-time tech industry observers know that the real trick to figuring out where product trends are going is to identify the most important component technologies being developed, then learn about them and their timelines for introduction. That isn’t always as easy as it sounds, however, because semiconductor and other component technologies can get very complicated, very quickly.

Still, there’s no better way to find out the future of tech products and industry trends than to dive into the component market headfirst. Fortunately, many major tech component vendors are starting to make this easier for non-engineers, because they’ve recognized the importance of telling their stories and explaining the unique value of their products and key technologies.

From companies like SanDisk describing the performance and lifetime benefits of solid state drives (SSDs) inside PCs, to chipmakers like nVidia explaining the artificial intelligence (AI) work that GPUs can perform, we’re starting to see a lot more public efforts to educate dedicated consumers, as well as investors and other interested observers, about the benefits of critical component technologies.

Given the increasing maturity and stabilization of many popular tech product categories, I believe we’re going to start seeing an increased emphasis on changes to the “insides” of popular devices. Sure, we’ll eventually see radical outward-facing form factor changes such as smartphones with screens you fold and unfold, but those will only happen once we know that the necessary bendable components can be mass produced.

Of course, the ideas behind what I’m describing aren’t new. Starting in the early 1990s and running for many years, chip maker Intel ran an advertising campaign built around the phrase “Intel Inside” to build brand recognition and value for its CPUs, or central processing units–the hidden “brains” inside many of our popular devices.

The idea was to create what is now commonly called an ingredient brand—a critical component, but not a complete, standalone product. The message Intel was able to deliver (and that still resonates today) is that critical components—even though you typically never see them—can have a big influence on the end device’s quality, just as ingredients in a dish can have a large influence on how it ultimately tastes.

Since then, many other semiconductor chip, component and technology licensing companies (think Dolby for audio or ARM for low-power processors, for example) have done their own variations on this theme to build improved perceptions both of their products and the products that use them. Chip companies like AMD, Qualcomm, and many others, are also working to build stronger and more widely recognized brands that are associated with important, but understandable technology benefits.

Most consumers will never buy products directly from these and other major component companies. However, as tech product cycles lengthen and industry maturity leads to slower changes in basic device shapes and sizes, consumers will start to base more of their final product purchase decisions on the ingredients from which those products are made.

Podcast: Holiday Tech Shopping Predictions

In this week’s Tech.pinions podcast Carolina Milanesi and Bob O’Donnell chat about the upcoming holiday shopping season and the potential opportunities and challenges for a wide variety of different tech devices.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Virtual Experiences Will Drive VR Devices to Mainstream

Sure, the gaming side is cool.

Battling space aliens or shooting bad guys with virtual reality products like Google’s new Daydream View, Samsung’s Gear VR, Sony’s PlayStation VR, or PC-driven systems like the HTC Vive and Oculus Rift is a blast. But despite all the focus, I don’t think gaming will bring VR into mainstream acceptance.

In fact, having recently had the opportunity to spend some quality time with an HTC Vive headset powered by one of the first notebooks certified to support high-quality VR—the Alienware 15 from Dell, powered by nVidia’s GTX 1070 GPU—I am even more convinced.

The challenge is that many of the VR-based games are designed for and targeted at hard-core gamers, who make up only a fraction of even those who play computer or smartphone-based games—let alone the general population.

Instead, to reach a wider audience, VR experiences and applications like virtual travel need to take center stage. There have already been some very interesting case studies done with bringing these types of non-gaming VR experiences to the elderly. I’m certain people of all ages will quickly become attracted to VR once they get a chance to try these devices with the right kind of applications.

A great recent example is the newly released Google Earth app for Vive, an awe-inspiring example of how powerful and transformational VR can be. As with other iterations of Google Earth, you can explore any location on the planet, leveraging the impressive collection of satellite imagery Google has collected, or you can view some pre-designed “tours” of famous locations around the world. With the VR version, however, instead of just looking at these locations, you start to get a sense that you’re actually in them. In fact, once you’ve tried the VR version, you realize the whole Google Earth concept was really made for virtual reality—it just won’t be the same anymore on other platforms.

As impressive as it is, however, Google Earth VR also highlights some of the challenges of current VR devices and experiences, particularly around the display resolution of current headsets. Some of the 3D buildings in cities, for example, look a bit “cartoon-like” because none of these systems have the graphical resolution (nor the data resolution behind them) to create a life-like viewing experience. Don’t get me wrong—it’s still great, but you can tell that we’re still in the early stages of VR technology development for some of the really demanding applications.

The challenges of the hardware setup also highlight that PC-based VR is not quite ready for the mainstream yet, either. Though the Alienware notebook is certainly a lot easier to move around than the big desktop rigs that have been necessary for VR until very recently (and the smaller new 13” Alienware VR-ready notebook is lighter still), all the wires that the HTC Vive headset and its various accessories require make mobility an unlikely option. HTC did just announce a new wireless accessory for the Vive in China, but while it removes the direct wired connections from the Vive headset to the dongle box that plugs into the PC, there are still a lot of pieces that need to be powered and connected.

Despite some of these hassles, the result is worth it: the all-encompassing 360° view that the Vive/Alienware combo provides can be quite impressive, particularly on content specifically designed for VR.

A surprisingly compelling example of this comes from Jaguar’s new introductory experience for their upcoming electric car. Though it’s essentially a VR product brochure for the newly announced vehicle, the application does a remarkable job of utilizing current generation VR technology to let you truly get inside and experience the car. Not only do you get to see via 3D models how different elements of the car function and come together, you can also view and explore the car’s interior design and layout. Jaguar used this VR experience at the car’s recent launch event in LA, but it’s just as compelling now for anyone who wasn’t there. More importantly, it highlights how companies will be able to leverage high-quality VR for creating some very persuasive marketing materials.

The educational opportunities for VR are also enormous. From explorations of human anatomy, to science lessons on how atoms work, to virtual field trips, it’s easy to imagine the kinds of applications that high-quality VR devices will start to enable. In fact, I wouldn’t be surprised to start seeing these kinds of applications show up on Sony’s PlayStation VR—despite its gaming heritage.

Once more people have the opportunity to see and experience the kinds of new possibilities that non-gaming VR can bring them, I’m certain we’ll start to see the market grow well beyond its currently modest size.

Ready or Not, We’re Entering an AI World

The tech landscape as we know it is about to be obliterated.

No, I’m not talking about the impact of a Trump presidency, but something bigger—much bigger.

The impact of Artificial Intelligence (AI) and related technologies, including machine learning, deep learning and neural networks, is now starting to be felt almost everywhere we look. (See my previous column “Learning About Deep Learning” for more.) From everyday devices like PCs and smartphones, to media companies like Facebook, to smart home security services, to connected cars and even to medical diagnostics, the influence of AI is growing rapidly.

Not only is it changing the products and services we use, it’s dramatically reshaping the technical infrastructure behind cloud-based services ranging from web searching to communications bots to personalized advertising (unfortunately!) and much more.

Driving improvements in AI performance is also becoming a key motivating factor for product development and strategic partnerships. In fact, it seems like virtually all the biggest announcements from tech vendors now have some kind of AI-related angle.

On the semiconductor side, for example, nVidia, AMD, Intel, Qualcomm and many others are focused on creating chips and software that can drive massive improvements in deep learning in large data centers. At this week’s SC16 supercomputing conference, for example, nVidia talked about its efforts on the Cancer Moonshot, a project it’s working on with the National Cancer Institute and the US Department of Energy to deliver a decade of advances in cancer research in just five years thanks to GPU-driven AI applications.

AMD unveiled a new partnership with Google for using its latest GPUs in Google’s Compute Engine and Cloud Machine Learning services, as well as enhancements to its open-source AI-focused Radeon Open Compute Platform (ROCm).

In the world of media, Facebook just announced that it’s planning to leverage AI to help combat its problem with fake news stories. In the devices world, Apple has talked about its AI efforts to help make Siri smarter about your needs and interests, while still maintaining privacy. Google, meanwhile, is leveraging AI to build up a whole range of services that are both more contextually aware of the environments you might find yourself in and more personally aware of the specific ways you like to use them.

In short, we’re being surrounded by the first significant fruits from the long-growing but previously near-barren AI tree. For many, these fruits may not offer much taste yet, but it’s clear from early nibbles that we’re due for an explosion of flavor.

The question then is, are we ready for the onslaught of AI? Based on numerous signs, it’s clear that tech-related companies certainly are. But for individuals, the answers are a bit more opaque. Sure, it’s easy to get excited about the possibilities that AI and related technologies offer, but there can be downsides as well, particularly regarding privacy.

The real trick behind many AI-based products and services is to recognize patterns and then react to those patterns based on previous knowledge about an individual’s preferences. In a positive case, a digital assistant might be able to use that knowledge to help you make better decisions.
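
Here’s a toy sketch of that recognize-and-react loop: the assistant simply tallies a user’s past choices in a given context and proposes the most frequent one the next time that context comes up. Real systems use far richer models, but the basic shape is the same.

```python
# Learn a preference from repeated behavior, then act on it.
from collections import Counter, defaultdict

history = defaultdict(Counter)


def record(context: str, choice: str) -> None:
    history[context][choice] += 1


def suggest(context: str):
    past = history[context]
    return past.most_common(1)[0][0] if past else None


record("monday_8am", "traffic report")
record("monday_8am", "traffic report")
record("monday_8am", "news briefing")
print(suggest("monday_8am"))  # -> "traffic report"
```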

As many have started to recognize, however, technologies with good intentions can often be used in unexpected and negative ways. The same set of data about your habits and preferences could be leveraged by criminals to figure out how and when to digitally burglarize you, for example.

Unfortunately, understanding not just the intentions but the decidedly human biases that creep into (or even form the foundation of) the algorithms that drive AI-based products can be very challenging. Nevertheless, as the AI era dawns around us, it’s best to be prepared for a wide range of potential outcomes, with the knowledge that there are bound to be a few unpleasant bumps along the way.

Podcast: The Trump Effect: The Election’s Impact on Tech

In this week’s Tech.pinions podcast Ben Bajarin and Bob O’Donnell analyze the recent US election results and discuss what the potential impacts could be on the US and worldwide tech market.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

The Best Automotive Tech Opportunity? Make Existing Cars Smarter

Everyone, it seems, is excited about the opportunity offered by smart and connected cars. Auto companies, tech companies, component makers, Wall Street, the tech press, and enthusiasts of all types get frothy at the mouth whenever the subject comes up.

The problem is, most are only really excited about a small percentage of the overall automobile market: new cars. In fact, most of the attention is being placed on an arguably even smaller and unquestionably less certain portion of the market: future car purchases from model year 2020 and beyond.

Don’t get me wrong; I’m excited about the capabilities that future cars will have as well. However, there seems to be a much larger opportunity to bring smarter technology to the hundreds of millions of existing cars.

Thankfully, I’m not the only one who feels this way. In fact, quite a few companies have announced products and services designed to make our existing cars a bit smarter and technically better equipped. Google, T-Mobile, and several lesser-known startups are beginning to offer products and services designed to bring more intelligence to today’s car owners.

While there hasn’t been as much focus on this add-on area, I believe it’s poised for some real growth, particularly because of actual consumer demand. Based on recent research completed by TECHnalysis Research and others, several of the capabilities that consumers want in their cars are relatively straightforward. Better infotainment systems and in-car WiFi, for example, are two of the most desired auto features, and they can be provided relatively easily via add-on products.

On the other hand, while fully autonomous driving may be sexy for some, the truth is, most consumers don’t want that yet. As a result, there isn’t going to be a huge demand for what would undoubtedly be difficult to do in an add-on fashion (though that isn’t stopping some high-profile startups from trying to create them anyway…but that’s a story for a different day).

In the case of Google, the company’s new Android Auto app puts any Android phone running Lollipop (Android 5.0) or later into an auto-friendly mode that replicates the new in-car Android Auto interface. The screen becomes simplified, type and logos get bigger, options become more limited (though more focused), and end users start to get a feel for what an integrated Android Auto experience would be like—but in their current car.

The quality of the real-world experience will take some time to fully evaluate, but the idea is so simple and so clever that you have to wonder when Apple will offer their own variation for CarPlay (and maybe why they didn’t do it first…).

T-Mobile partnered with Chinese hardware maker ZTE and auto tech software company Mojio to provide an in-car WiFi experience called SyncUP DRIVE that leverages an OBD-II dongle you plug into your car (the OBD-II port is found on most cars built since 1996). While several other carriers offer OBD-II dongles at no cost (you do have to pay for a data plan in all cases), the new T-Mo offering combines the WiFi hotspot feature with automotive diagnostics in a single device, thanks to the Mojio-developed app.
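
For developers, the diagnostics half of such a dongle is surprisingly approachable. Here’s a rough sketch using the third-party python-obd library, assuming an ELM327-style adapter is plugged into the OBD-II port; which commands a given car actually supports varies by model.

```python
# Query a few live diagnostics over OBD-II.
# Requires the third-party obd package (pip install obd) and an adapter.
import obd

connection = obd.OBD()  # auto-detects a connected ELM327-style adapter

for command in (obd.commands.SPEED, obd.commands.RPM,
                obd.commands.COOLANT_TEMP):
    response = connection.query(command)
    if not response.is_null():
        print(command.name, response.value)
```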

Several startups I’ve come across also have other types of in-car tech add-ons in the works, many of which are focused on safety applications. I’m expecting to see many compute-enabled camera, radar, and perhaps even lidar-equipped advanced driver assistance system (ADAS) add-ons at next year’s CES show, some of which will likely bring basic levels of autonomy to existing cars. The challenge is, the more advanced versions of these solutions need to be built for specific car models, which will obviously limit their potential market impact.

Car tech is clearly an exciting field, and it’s no surprise to anyone that it’s becoming an increasingly important purchase factor, particularly for new cars. However, it may surprise some to know that in-car tech experiences still lag the primary car purchase motivators of price, car type, looks, performance, etc. In that light, giving consumers the ability to add on these capabilities without having to purchase a whole new car seems to make a lot of sense—especially given the roughly decade-long lifetime of the average car.

Obviously, add-ons can’t possibly provide the same level of capabilities that a ground-up design can bring, but many consumers would be very happy to bring some of the key capabilities that new cars offer into existing models. It’s going to be an exciting field to watch.

It’s Time for an IoT Security Standard

The writing has been on the wall for some time. Worse, the recent DNS attack that brought down portions of the Internet strongly suggests that previously predicted concerns have become unpleasant realities.

The problem? Security, or the lack thereof, for the billions of things getting connected to the Internet. Unfortunately, enormous percentages of smart home security cameras, connected DVRs, industrial equipment controllers, wearables, medical equipment, cars, and many more devices are being put online with little to no security protection.

As a result, many of these devices are subject to hacking, in some cases with potentially life-threatening results. And to make things worse, many are also vulnerable to being unwittingly taken over and silently re-used in other types of cyber-attacks, like the DNS attack that rendered many popular web sites unreachable a little over a week ago.

This nearly complete lack of security has been talked about by some tech industry observers for years. But despite all the talk, little real action is being taken on an industry-wide basis.

Given the seriousness of the problem and its potential impact not only on our daily lives, but also on the security of critical infrastructure and even national security, it’s surprising and somewhat shocking how much inaction there has been. After all, devices that plug into the wall for power require safety approval before retailers will sell them in the US, so why shouldn’t any device that gets “plugged” into the Internet require an approval process as well?

Many of the early electrical safety certification tests created by UL (previously Underwriters Laboratories) were designed for the safety of consumers, but the impact on electrical power utilities was likely considered as well. In exactly the same way, IoT security standards need to be developed both for the safety of an individual using a device and for the potential impact on the newest utility in our lives: the Internet.

To be fair, not all IoT security issues involve the possibility of immediate physical harm that electrically powered devices have, but some do. Plus, the potential societal disruption and associated physical threats that an IoT-driven security problem can cause could be much more widespread than any individual device could create.

Of course, the challenge of creating any kind of security standard is determining what exactly would be included and how it would be measured. Security is a significantly more complicated and nuanced topic than the spread of an electrical charge, but that doesn’t mean the effort shouldn’t be undertaken. It’s just going to take a lot more effort from more people (and companies).

Thankfully, there are several efforts being driven by individual companies to help address some of these security concerns. Chip IP company ARM, for example, whose technology is at the heart of an enormous number of IoT devices, recently added new levels of hardware security to its line of Cortex-M microcontrollers. In addition, concepts like a hardware root of trust, trusted execution environments, biometric authentication and more are all being actively deployed by a variety of component and device vendors that feed into the IoT supply chain. While they won’t solve all security issues, leveraging these technologies as a starting point would seem to be a pragmatic approach.
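
To illustrate the general shape of a root-of-trust check, here’s a deliberately simplified sketch. Real secure boot schemes use asymmetric signatures verified against a key fused into the silicon; the HMAC below is only a stand-in to show the verify-before-execute pattern.

```python
# Refuse to run firmware that doesn't verify against a device-held key.
import hashlib
import hmac

DEVICE_ROOT_KEY = b"key-provisioned-in-hardware"  # illustrative only


def firmware_is_trusted(image: bytes, signature: bytes) -> bool:
    expected = hmac.new(DEVICE_ROOT_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)


image, signature = b"...firmware bytes...", b"...vendor signature..."
if firmware_is_trusted(image, signature):
    print("boot: firmware verified")
else:
    print("halt: firmware failed verification")
```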

In addition to setting those requirements, the question of who administers the testing would have to be resolved. Logically, companies like UL and other members of the Nationally Recognized Testing Laboratories (NRTL) Program would be good choices. A strongly related development would also have to come from the companies that sell and/or install these types of devices. Technically, UL approval is not required to sell a device in the US, for example, but practically speaking, retailers and others who sell these devices are unwilling to accept them without some kind of approval, for fear of potential insurance risks. An IoT security standard would require a similar level of support (and initial willpower) to be effective.

It’s certainly naïve to think that a single type of security standard could possibly stave off all the potential security threats that IoT devices are now raising. But it’s equally naïve to believe that nothing can or should be done about the problem. The task won’t be easy and early iterations may not be great, but it’s clear that the time has come to do something. Let’s hope some industry associations and other parts of the tech ecosystem have the guts to get an IoT security standard started and the will to stick it out.

Podcast: Microsoft Surface and Apple MacBook Events

In this week’s Tech.pinions podcast Ben Bajarin, Carolina Milanesi and Bob O’Donnell analyze the big events held by Microsoft and Apple this week, discussing the companies’ latest PC offerings and getting a glimpse into their future strategies.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

The Indefatigable PC

By all rights, it should be dead by now. I mean, really. A market based on a tech product that first came to market over 35 years ago?

And yet, here we stand in the waning days of October 2016, and the biggest news expected to come out of the tech industry this week is a pair of PC announcements from two of the largest companies in the world: Apple and Microsoft. It’s like we’re in some kind of weird time warp. (Of course, the Cubs are poised to win their first World Series in over 100 years, so who knows?)

The development must be particularly surprising to those who bought into the whole “PC is dead” school of thought. According to the proselytizers of this movement, tablets should have clearly taken over the world by now. But that sure didn’t happen. While PC shipments have certainly taken their lumps, tablets never reached anything close to PCs from a shipments perspective. In fact, tablet shipments have now been declining for over 3 years.

After tablets, smartwatches were supposed to be the next generation personal computing device. Recent shipment data from IDC, however, suggests that smartwatches are in for an even worse fate than tablets. A little more than a year-and-a-half after being widely introduced to the market, smartwatch shipments are tanking. Not exactly a good sign for what was supposed to be the “next big thing.”

Of course, PCs continue to face their challenges as well, particularly consumer PCs. Worldwide PC shipments peaked in Q4 of 2011 and have been on a slow, steady decline ever since. Interestingly, however, US PC shipments have actually turned around recently and returned to modest growth.

The reason for this is that PCs have continued to prove their usefulness and value to a wide range of people, especially in business environments. PCs are certainly not the only computing device that people are using anymore, but for many, PCs remain the go-to productivity device and for others, they still play an important role.

To put it simply, there’s just something to be said for the large-screen computing experience that only PCs can truly provide. More importantly, it’s not clear to me that there’s anything poised to truly replace that experience in the near term.

Another big reason for the PC’s longevity is that it has been on a path of constant, relatively consistent evolution since its earliest days. While the semiconductor manufacturing advances enabled by Moore’s Law deserve part of the credit, a great deal also goes to the chip designers at Intel, AMD and Nvidia, among others, who have created incredibly powerful devices. Similarly, OS and application software advances by Apple, Microsoft and many others have created environments that over a billion people use to work, play and communicate on a daily basis.[pullquote]PCs have actually never been stronger or more attractive tech devices—it’s more like a personal computer renaissance than a personal computer extinction.[/pullquote]

There have also been impressive improvements in the physical designs of PCs. After a few false starts at delivering thin-and-light notebooks, for example, the super-slim ultrabook offerings from the likes of Dell (XPS 13), HP (Spectre x360) and Lenovo (ThinkPad X1) have caught up to and arguably even surpassed Apple’s still-impressive MacBook Air. At the same time, to the surprise of many, Microsoft’s Surface has successfully spawned a whole new array of 2-in-1 and convertible PC designs that have brought new life to the PC market as well. It’s easy to take for granted now, but you can finally get the combination of performance, weight, size and battery life that many have always wanted in a PC.

Frankly, PCs have actually never been stronger or more attractive tech devices—it’s more like a personal computer renaissance than a personal computer extinction. The fact that we’ll likely be talking about the latest additions to this market later this week says a great deal about the role that PCs still have to play.

Podcast: Dell-EMC World, LeEco Launch, Apple Car

In this week’s Tech.pinions podcast Jan Dawson, Carolina Milanesi and Bob O’Donnell discuss the recent Dell/EMC World event, debate the potential impact of new Chinese content/device maker LeEco, and analyze recent news around Apple’s rumored car efforts.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Can IT Survive?

If you’ve ever worked at a business with at least 20 employees, you’ve undoubtedly run into “them”—the oft-dreaded, generally misunderstood, secretly sneered at (though sometimes revered) IT department. The goal of Information Technology (IT) professionals, of course, is to provide companies and their employees with the technical tools they need to not only get their jobs done, but to do so in an increasingly fast, flexible manner.

Frankly, it’s a tough and oftentimes thankless job. If your computer stops working, the network goes down, or some aspect of the company web site stops functioning, IT gets the brunt of the frustration that inevitably occurs. Beyond these day-to-day issues, however, IT is also tasked with driving changes to the infrastructure that underlies today’s businesses.

For that reason, IT has long been considered a strategic asset to most organizations. In fact, this central role has also turned the CIO—who typically runs IT—into a critical member of many business organizational structures.

But the situation for IT (and CIOs) appears to be changing—ironically because of some of the very same factors that led to its rise: most notably, the need for increased agility and flexibility.

The problem is, after several years (or more) of IT-driven technological initiatives designed to improve reliability, increase efficiency, and reduce costs for key business processes, a large percentage of companies have come to realize that the best solution is to have someone outside the company take over. From more traditional business process outsourcing, through the evolution of nearly everything “as a service,” to the growth of public cloud computing resources, we’re witnessing the trickle of projects leaving the four walls of an organization grow into a fast-moving stream. As a result, IT departments are often doing less of the technical work and more of the management. In the process, though, they’re moving from being a strategic asset to being a growing cost center.

The implications of this change are profound, not only for IT departments, but also for the entire industry of companies that have built businesses designed to cater to IT. All of a sudden, equipment suppliers have to think about very different types of customers, and IT departments have to start thinking about very different types of partners. Arguably, it’s also driving the kinds of consolidations and new partnerships between these suppliers that seem to be on the rise.[pullquote]All of a sudden, equipment suppliers have to think about very different types of customers, and IT departments have to start thinking about very different types of partners.[/pullquote]

The causes for these kinds of changes are many. Fundamentally, the revolution in the technology side of the business computing world has been even more extensive over the last few years than many first realized. To put it another way, though we’ve been hearing about the impact of the cloud seemingly forever, it’s only now that we’re really starting to feel it in the business computing world.

Another cause is an interesting bifurcation in the challenges and complexities of the products and services that have traditionally sat under the watchful eye of the IT department. On the one hand, many previously complex technologies and systems that required specialized IT expertise have become easy enough for non-IT line-of-business leaders to purchase and successfully deploy. Converged and hyperconverged appliances, for example, have brought datacenter-grade compute, networking and storage capabilities into a single box that even moderately technical people can easily manage through a simple interface.

In addition, managed service providers, hosted data exchanges, public cloud providers and a host of other companies that didn’t even exist just a few years back are offering utility-like computing services that, again, make it increasingly easy for business departments and other non-technical divisions of a company to quickly and economically put solutions into production. More importantly, they’re doing it at a significantly faster pace than many overburdened, highly process-driven IT organizations can possibly achieve.

Some IT professionals are dubious (and highly concerned) about these types of rogue “shadow IT” initiatives, but the initiatives don’t appear to be slowing down. In fact, in the case of a hot new area like Enterprise IoT, research has shown that it’s often a branch of a company’s Operations department (sometimes even called OT, or Operations Technology) that’s driving the deployment of devices like smart gateways and other critical new IoT technologies—not the IT department.

At the other technological extreme, many companies are also finding that the move to more cost-effective and more agile cloud-based solutions is proving to be much more technically complex and challenging than first thought. In fact, there has recently been talk of a slowdown in some companies’ efforts to move more of their compute, software and services to the cloud because of a lack of internal IT skill sets to handle these new kinds of tasks. In addition, much of the most advanced computing work, in areas such as machine learning and AI, often requires access to specialized hardware and software that many companies don’t currently have.[pullquote]Many IT departments are finding themselves in an awkward position in the middle, where the now-easier tasks no longer require their help and the tougher tasks demand skill sets or resources they don’t currently have.[/pullquote]

The result is that many IT departments are finding themselves in an awkward position in the middle, where the now-easier tasks no longer require their help and the tougher tasks demand skill sets or resources they don’t currently have. Ironically, the very technology that created new opportunities for IT professionals (and which many feared would take away more traditional jobs) is now poised to start taking jobs back from IT as well. Needless to say, it’s a tough spot to be in.

Despite these concerns, however, there is still clearly an important role for IT in businesses today—it’s just becoming much different from what it used to be. For CIOs and IT to succeed, it’s going to take a different way of thinking. For example, instead of evaluating products, it’s increasingly going to require evaluating and managing partners and services. Instead of sticking with slow, burdensome, “we’ll build it here” types of internal processes, it’s going to require a willingness to explore more external options.

The importance of technology in business will only continue to increase over time. As technological solutions become more ubiquitous, however, the concept of distributed responsibility for these solutions will likely become the new reality.

Podcast: PC Shipments, PlayStation VR

In this week’s Tech.pinions podcast Tim Bajarin, Jan Dawson and Bob O’Donnell discuss the recent PC shipment and forecast numbers from IDC and Gartner, and analyze the impact of Sony’s PlayStation VR on the overall virtual reality device market.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Galaxy Note 7: The Death of a Smartphone

It’s hard to imagine a much worse scenario.

The world’s leading smartphone company debuts a new device that is initially touted as one of the best smartphones ever made. Glowing reviews quickly follow, and the company’s prospects for a strong fall and holiday season, as well as the opportunity to regain some lost market share, seem nearly assured.

But then a small number of the phones start to overheat and catch fire. The company tries to react quickly and decisively to the concern and issues a recall of several million already shipped devices. It’s a somewhat risky and certainly expensive move, but the company initially receives praise for trying to tackle a challenging problem in a positive way.

Customers are reassured that the problem seems to lie not in the phone itself, but in a battery provided by one of the company’s third-party battery suppliers (ironically, most believe the culprit to be Samsung SDI—a sister company of Samsung Electronics).

And then, the unthinkable. Replacement phones start to show the same problems, and the company is forced to stop production and sale of the device, encourage its telco and retail partners to stop selling it, and tell all its existing customers to stop using it. Just to add insult to injury, the US Consumer Product Safety Commission (CPSC) sends out notices encouraging consumers to stop using the device, while the Federal Aviation Administration (FAA) and major airlines around the world reinforce the message they’ve been delivering for the last several weeks on virtually every flight in the world: don’t use, charge or even turn on your Samsung Galaxy Note 7.

It’s probably the most negative publicity a tech product has ever seen. The long-term impact on the Samsung brand is still to be determined, but anyone who’s looked at the situation at all knows it can’t be good. At this point, it appears that the Note 7 will likely end up being removed from the market, costing Samsung billions of dollars, and there’s even been some concern expressed about Samsung’s ability to save/sustain the Note sub-brand.

Part of the issue isn’t just the product itself—although that’s certainly bad enough—but the manner in which the company is now handling it. Reaction has quickly moved from praise for Samsung’s initial quick efforts to address the issue to disbelief that the company could let a second round of faulty products this dangerous get out the door.

On top of that, there are many unanswered questions that need to be addressed. From a practical perspective, what is the cause of the problems if it isn’t the battery cell (the charging circuits?) and what other phones might face the same dangerous issues? Why did Samsung rush out the replacement units without actually figuring out what the real cause was? What kind of testing did they do (or not) to be sure the replacements were safe?

Beyond these short-term issues, there are also likely to be some bigger questions that could have a longer-term impact on the tech market. First, what types of procedures are in place to prevent this? What governmental or industry associations, if any, can take responsibility for this (besides Samsung)? Will products need to go through longer/more thorough testing procedures before they’re allowed on the market? Will product reviewers need to start doing safety tests before they can really make pronouncements on the quality/value of a product? How can vendors and their suppliers work to avoid these issues and what mechanisms do they have in place should it happen again to another product?

Some might argue that these questions are an over-reaction to a single product fault from a single vendor. And, to be fair to Samsung, there have certainly been reported cases of other fire and safety-related issues with electronics products from other vendors, including Apple, over the last few years.[pullquote]Our collective dependence on battery-driven devices is only growing, so it may be time to take a harder, more detailed look at safety-related testing and requirements.[/pullquote]

But when people’s lives and health are at stake—as they clearly have been with some of the reported Galaxy Note 7-related problems—it’s not unreasonable to question whether existing policies and procedures are sufficient. Our collective dependence on battery-driven devices is only growing, so it may be time to take a harder, more detailed look at safety-related testing and requirements.

Given the breakneck pace and highly competitive environment for battery-powered devices, there will likely be industry pushback against prolonged or more expensive testing. As the Galaxy Note 7 situation clearly illustrates, however, speed doesn’t always work when it comes to safety.

Finally, the tech industry needs to take a serious look at these issues itself and figure out potential methods of self-policing. If it doesn’t, and we start hearing a lot more stories about other devices exploding, catching fire or causing bodily harm, you can be assured that some politician or governmental agency will use the collective news to start imposing much more challenging requirements.

As the old saying goes, better safe than sorry.

Podcast: Google Hardware Event

In this week’s Tech.pinions podcast Carolina Milanesi, Ben Bajarin and Bob O’Donnell analyze the announcements coming out of Google’s recent hardware event, including their Pixel smartphones, Daydream VR Headset and Google Home smart speaker.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Service Providers Still Act Like Utilities

If you ever want to enliven a cocktail party filled with executives from the telecommunications or cable industry, just start talking about dumb pipes. As in, “your service doesn’t offer anything more than a simple connection from my devices to the internet content I want—it’s a dumb pipe.”

Of course, most of you will never have to worry about going through such an awkward social encounter, but if you do—that zinger is bound to get things going.

All kidding aside, the notion that telco carriers and other service providers have provided little more than basic connectivity has been an industry hot button for quite some time. Even now, despite a number of efforts to spice things up, most telcos and cable service providers are seen as companies that provide a very indistinct connectivity service that people only reluctantly pay for.

The primary differentiators for competitive players in this space are price, price and, oh yeah, price, with maybe a bit of coverage or service quality thrown in for good measure. It’s little wonder that many consumers hold these companies in such low esteem—they just don’t see the value in the services beyond basic connectivity. It’s also not surprising that so many people are looking at cord-cutting, cord replacement, or other options that attempt to cut these service providers out of the picture.

But it doesn’t have to be this way.

The amount of data that telco and cable service providers have access to should allow them to generate some very interesting, useful and valuable services that consumers would be happy to pay for. Now, admittedly, there are some serious privacy and regulatory concerns that have to be taken into consideration, but with appropriate anonymizing techniques, there are some very intriguing possibilities.

For example, by leveraging new machine learning or artificial intelligence algorithms, service providers should be able to aggregate data usage patterns to help determine everything from traffic patterns to breaking news detection, program recommendation engines, and much more.
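As a rough sketch of what that kind of aggregate, anonymized analysis could look like (all names and numbers below are invented, and a salted one-way hash merely stands in for the far more rigorous anonymization, such as k-anonymity or differential privacy, that a real carrier would need), subscriber identities can be stripped from usage records before they’re rolled up by hour and cell site:

```python
import hashlib
from collections import defaultdict

# Illustrative only: a salted one-way hash stands in for production-grade
# anonymization; a real deployment would need much stronger guarantees.
SALT = b"rotate-this-secret-regularly"

def anonymize(subscriber_id: str) -> str:
    """Replace a subscriber ID with an irreversible pseudonym."""
    return hashlib.sha256(SALT + subscriber_id.encode()).hexdigest()[:12]

# Fabricated (user, hour, cell site, megabytes) usage records.
records = [
    ("alice", 8, "site-12", 120.0),
    ("bob",   8, "site-12", 340.0),
    ("alice", 9, "site-07",  80.0),
]

# Aggregate by hour and site; individual identities never leave this step.
traffic = defaultdict(float)
for user, hour, site, mb in records:
    pseudonym = anonymize(user)  # kept only if per-user trends are needed
    traffic[(hour, site)] += mb

for (hour, site), mb in sorted(traffic.items()):
    print(f"{hour:02d}:00 {site}: {mb:.0f} MB")
```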

At a more basic level, who better to manage things like my contacts, or offer an intelligent, unified communications service that lets me see and manage all my various forms of communication, than the companies over whose network those messages travel?

Ironically, for those who are particularly privacy-sensitive, the notion of paying for a highly secure, completely anonymized, truly “dumb pipe” could also be an attractive option. While certain levels of privacy and security should be expected (indeed, demanded) from service providers, the notion of paying for extra security is something I believe most consumers will come to really appreciate.

More critically, there is a crying need for some kind of smart hub inside our homes so that we can easily see, connect and manage all the potential connected devices and services in our homes: from smartphones, PCs and tablets, to TVs, lights, HVAC controls and even smart cars. But instead of offering an intuitive, friendly device similar to something I wrote about a few weeks back (“Rethinking Smart Home Gateways”), service providers continue to offer nondescript black boxes whose very designs betray their archaic, impenetrable means of operation.[pullquote]The fundamental problem is that service providers act more like utilities than companies that offer services people are happy to pay for, such as Netflix.[/pullquote]

The fundamental problem is that service providers act more like utilities than companies that offer services people are happy to pay for, such as Netflix. There’s little sense of personalization or differentiation from service providers, and the aforementioned router/gateway boxes they currently force into consumers’ homes are a classic example of that utility style of thinking. Honestly, if your power company were to put a box in your home, do you think it would look much different?

In order to break this cycle, and avoid the risk of being cut out of people’s lives through various types of cord-cutting/replacement mechanisms, service providers need to start thinking very differently about the types of services they offer. They need to create, discover and deliver services that people actually value, and do so in a more personal, less utility-like way.

To their credit, a number of the major US telco and cable providers are making efforts to reach these goals, but they still primarily reflect a utility mindset. To break that means of thinking, they would be wise to look at how providers of services on the Internet—whether that be someone like Spotify, Uber, or Amazon—build and sell the kinds of services that consumers are more than happy to pay for. Only with that kind of out-of-the-box thinking can they truly move past their utility-driven focus and stop being little more than “dumb pipes.”

Podcast: BlackBerry, Microsoft AI

In this week’s Tech.pinions podcast Carolina Milanesi, Jan Dawson and Bob O’Donnell discuss BlackBerry’s announcement that it is exiting the phone business, and analyze Microsoft’s recent artificial intelligence initiatives from its Ignite conference.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

The Andromeda Strain

Veteran fans of thriller author Michael Crichton may recall that his career kicked into high gear with the 1969 release of a novel titled “The Andromeda Strain.” The book described the impact of a deadly microbe delivered to Earth from space via a military satellite.

Next week in San Francisco, Google is widely rumored to be announcing a new strain of operating system codenamed “Andromeda.” The new OS is expected to combine elements of Chrome OS with Android. Unlike current efforts to bring support for Android apps into Chrome OS, however, Andromeda is expected to bring some of the desktop-like capabilities of Chrome OS into Android to form a super OS that could work across smartphones, tablets, and notebook-style form factors.

Though details remain sketchy, the new OS is expected to offer true multi-window operation, as well as a file system and other typical accoutrements of a desktop-style operating system. In essence, this means that Google’s next OS—expected to be released late this year or sometime next year—will be able to compete directly with Microsoft Windows and macOS.

On many levels, the development of a single Google OS is an obvious move. In fact, I (and many others) thought it was something the company needed to do a long time ago. Despite that, its impact is bound to be profound, causing a fair amount of stress and, yes, strain for users, device makers and developers alike.

For consumers and other end users, Andromeda will first appear as yet another OS option, because Google isn’t likely to immediately drop standalone versions of Android or Chrome OS after the announcement or release of Andromeda. Over time, once the transition to Andromeda is complete, those concerns should fade away, and consumers, in theory at least, should get a consistent experience across devices of all shapes and sizes. That would be a clear benefit for users, because they’d have access to a single set of applications, consistent access to all their data, and all the other obvious benefits of combining two choices into one.

At the same time, however, the transition could end up taking several years, which is bound to cause confusion and concern for end users. Trying to choose which devices and operating systems to use, particularly as device lifetimes lengthen, could prove frustrating. Plus, if Google does move away from Chrome OS, as some have suggested, existing Chromebooks become relatively useless.

For device makers, Andromeda could represent an exciting new opportunity to sell new form factors, such as clamshell, convertible, or detachable notebooks running the new OS. They may also be able to create true “pocket computers” that come in a smartphone form factor, but offer support for desktop monitors and other peripherals, similar to Microsoft’s Continuum feature for Windows 10 Mobile.[pullquote]The launch of a new OS from a major industry player is always fraught with potential concerns, but the merger of two existing options (including the most widely used OS in the world) into a single new one heightens those concerns exponentially. [/pullquote]

Initially, however, Andromeda is going to be more of a challenge for device makers, because they’ll have to deal with product categories, like Chromebooks, that could potentially go away. Plus, like Microsoft, Google seems to be moving aggressively toward building its own branded hardware, which takes away potential market opportunities for some of its partners. At the same time, the launch of a new OS with new capabilities and new hardware requirements seems like the perfect time for Google to make a serious play with its own branded devices.

For developers, Andromeda will undoubtedly prove to be a strain for a longer period of time, because they will likely need to rewrite, or at least rework, their applications to take full advantage of the new features and capabilities that inevitably come with a new OS. Plus, any confusion consumers face about which of the different Google OSes to use will negatively impact future app sales and, potentially, development.

The launch of a new OS from a major industry player is always fraught with potential concerns, but the merger of two existing options (including the most widely used OS in the world) into a single new one heightens those concerns exponentially. As with Mr. Crichton’s book, the initial drama and tension are bound to be high, but eventually, I think we’ll see a positive ending.

Apple’s Missed Audio Opportunity

Apple has a long, rich history in the fields of music and audio, and its complex and highly influential relationship with those fields was on display once again at the company’s recent iPhone 7 launch event.

The biggest audio-related news of the event was, of course, the removal of the traditional 3.5 mm headphone jack from the iPhone 7. The impact of that one decision will be rippling through the audio industry for years to come. Why? Because of the level of influence Apple and the iPhone have, both with other smartphone makers and with audio accessory and component makers.

The problem is that the move’s implications for audio quality are not likely to be good for most people. For all their convenience, wireless audio connections are generally lower quality than wired ones because the audio has to be compressed to fit the available wireless bandwidth. Given that most people are starting with highly compressed MP3 or AAC-encoded music files to begin with, that essentially means you’re degrading an already degraded signal. Not good.

Now, admittedly, there is debate about how much of a difference many people can hear across different audio encoding algorithms and wireless transmission compression methods, but common sense tells you that mixing the two can’t be good. (And to be clear, yes, I think most everyone would be able to hear the difference between an uncompressed file played over a wired connection and a compressed file played over a wireless one.)

Plus, you don’t see anyone saying, “Oh well, HD video is good enough because that’s the maximum resolution of the iPhone’s screen, so why bother with 4K video?” So why should audio be treated differently? The ability to deliver the highest possible raw media quality—regardless of the device upon which it is played back—should be the goal of any media playback device, but particularly one that’s so incredibly influential.

Of course, some of this harkens back to Apple’s largest impact on music: the creation of the iPod/iTunes combination that completely rewrote the rules on music distribution. The iPod created an amazing level of convenience, flexibility and portability for music that is hard to imagine not having on all our devices today.

However, the iPod also sacrificed audio quality for convenience, and the implications of that focus on highly compressed music extend to today. The big problem in the early days of digital music was that sound files were very large and took up too much storage capacity in uncompressed, CD-quality form. Audio encoding techniques like MP3 and AAC offered 10x reductions in file size, while leveraging a variety of psychoacoustic techniques to keep the music sounding reasonably good. It was just too tempting a tradeoff to pass up.

Today, however, storage costs are significantly lower and network bandwidth speeds are significantly higher, so there’s no longer a really viable technical reason to stick with compressed audio. Yet, compressed audio still dominates the landscape, primarily because of Apple’s initial and ongoing influence.

With the company’s efforts and investments in growing the Apple Music service—which, Apple mentioned at the beginning of the iPhone 7 launch event, has now reached 17 million subscribers—there is a clear opportunity to once again set a new standard for audio file formats. By choosing to offer uncompressed CD-quality (16-bit, 44.1 kHz) digital audio files—or better yet, high-resolution 24-bit, 96 or 192 kHz files—as standard, Apple could single-handedly and dramatically improve the state of digital audio quality around the world. Now, that would take courage.
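The arithmetic behind these formats is simple enough to sketch: a raw PCM stream’s bitrate is just sample rate times bit depth times channel count, which is where both the roughly 10x compression figure above and the 1.41 Mbps number cited a bit later come from. A quick illustration, assuming stereo audio throughout:

```python
# Raw PCM bitrate = sample rate x bit depth x channels (stereo assumed).
def pcm_kbps(sample_rate_hz: int, bit_depth: int, channels: int = 2) -> float:
    return sample_rate_hz * bit_depth * channels / 1000

cd      = pcm_kbps(44_100, 16)   # ~1,411 kbps -- the 1.41 Mbps figure cited below
hires96 = pcm_kbps(96_000, 24)   # ~4,608 kbps
mp3     = 128                    # typical compressed bitrate, in kbps

print(f"CD quality:    {cd:,.0f} kbps (~{cd / 8 * 60 / 1024:.1f} MB per minute)")
print(f"24-bit/96 kHz: {hires96:,.0f} kbps")
print(f"128 kbps MP3 is ~{cd / mp3:.0f}x smaller than CD-quality PCM")
```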

Of course, there’s also the possibility that the rumors of Apple purchasing Tidal—a music streaming service that offers uncompressed and high-res audio streaming—could come to pass and Apple would “inherit” the capability.

In addition to improving the quality of the audio files, Apple could have used the announcement of the headphone jack’s removal to highlight the second part of the audio quality equation: the quality of the connection to headphones and speakers. Though few know it, Apple’s proprietary Lightning connector can transmit uncompressed and even high-resolution audio in digital format to external devices. Essentially, it provides raw access to the files before they’re converted from digital into audible analog form. In addition, Lightning can provide power for features like noise cancellation without requiring a battery in the connected headphones, as well as access to additional controls, such as triggering Siri. Frankly, it’s a powerful though underutilized interface. Part of the problem is that using Lightning requires paying a royalty to Apple, whereas using the 3.5mm audio jack never did.

While Apple kept an audio DAC (digital-to-analog converter) in the iPhone 7 for the built-in speakers, with Lightning-based headphones, that digital-to-analog conversion needs to be done by the headphones or other speakers connected directly to the Lightning jack.

While that does add cost to these devices, the good news is that it allows peripheral companies like Sony, Philips, JBL, Audeze and others to build headphones that leverage high-quality DACs and produce really great sound—depending, of course, on the original resolution of the file being converted (hence my earlier comments). Though details remain unclear, the new Lightning-based earbuds included with the iPhone 7 have none of these extended features and likely use the same generic-quality DAC that Apple uses in the iPhone.

What’s odd, and perhaps telling, about Apple’s commitment to higher-quality audio is that the company now owns one of the best-selling headphone makers in the world in Beats, and yet it doesn’t currently offer a single set of Beats headphones with a Lightning connector and external DAC. Even if Apple wanted to somehow keep the removal of the headphone jack a secret from Beats staffers, there’s no reason it couldn’t have encouraged the development of a set of high-quality, Lightning-based Beats headphones. Yet none exist, nor did Apple even announce one.

Instead, the company focused its efforts on announcing wireless Beats headphones based on Bluetooth and some proprietary extensions enabled by its new W1 chip. While there’s obviously nothing wrong with that, the new Solo3 and other Beats headphones once again seem to be focused on convenience over audio quality. In theory, later versions of Bluetooth could support wireless transmission of uncompressed audio, which takes 1.41 Mbits per second, but most Bluetooth audio relies on compressed audio at 128-256 kbits per second. Apple also chose not to support Qualcomm’s aptX technology (originally developed by Bluetooth silicon maker CSR, which Qualcomm acquired), which offers support for higher-quality audio streamed over Bluetooth.[pullquote]For a company that talks a lot about how much it loves music, Apple sure doesn’t seem to care that much about audio quality, and that’s frustrating.[/pullquote]

The new Apple AirPods offer similar capabilities, limitations and, likely, audio quality (though much shorter battery life). Again, the focus is on convenience over quality. If Apple had developed some new higher-quality, lossless method of transmitting audio to these W1-equipped devices, it clearly would have touted it, yet it didn’t. Instead, much of the focus and concern around the AirPods has been on the possibility of losing them. For the record, I believe that’s a real issue, though much less of one when you’re wearing them than when you’re not. Just ask anyone who’s ever misplaced a Bluetooth headset. It happens all the time.

Given how much time Apple spent justifying the removal of the headphone jack at its event, the company is clearly cognizant of what a momentous impact the decision represents and how poorly some might perceive the move. Yet, instead of turning that negative into a positive—as it clearly could have done—Apple added insult to injury by calling the move courageous. Frankly, it was a missed opportunity of potentially enormous proportions.

The bottom line is, for a company that talks a lot about how much it loves music, Apple sure doesn’t seem to care that much about audio quality, and that’s frustrating.

Podcast: Apple iPhone 7, Watch Series 2 Launch Event

In this week’s Tech.pinions podcast Carolina Milanesi, Ben Bajarin, Jan Dawson and Bob O’Donnell discuss Apple’s recent product launch event with detailed analysis of the iPhone 7, Apple Watch Series 2, and AirPods.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast