Podcast: Microsoft Surface and Consumer Reports, NVIDIA Earnings, Google Diversity Memo

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell chatting about Consumer Reports’ decision to no longer recommend Microsoft Surface devices, analyzing NVIDIA’s earnings, and discussing Google’s controversial diversity memo and the issues it has raised for Silicon Valley.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

IoT Connections Made Easy

For long-time tech industry observers, many of the primary concepts behind business-focused Internet of Things (IoT) feel kind of old. After all, people have been connecting PCs and other computing devices to industrial, manufacturing, and process equipment for decades.

But there are two key developments that give IoT a critically important new role: real-time analysis of sensor-based data, sometimes called “edge” computing, and the communication and transfer of that data up the computing value chain.

In fact, enterprise IoT (and even some consumer-focused applications) is bringing new relevance and vigor to the concept of distributed computing, where several types of workloads are spread throughout a connected chain of computing devices, from the endpoint, to the edge, to the data center, and, most typically, to the cloud. Some people have started referring to this type of effort as “fog computing.”
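
To make that distributed chain a little more concrete, here is a minimal, purely illustrative sketch (my own, not drawn from any particular product) of how a single sensor reading might be handled across the tiers: time-critical decisions stay at the edge, while heavier analysis is deferred to the cloud. All names and thresholds are hypothetical.

```python
import statistics
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    value: float  # e.g., vibration amplitude from a pump

# Hypothetical threshold: anything above this needs an immediate, local response.
ALERT_THRESHOLD = 9.0

def handle_at_edge(reading: Reading, cloud_batch: list) -> None:
    """Runs on the edge gateway: react instantly, forward the rest upstream."""
    if reading.value > ALERT_THRESHOLD:
        trigger_local_shutdown(reading.sensor_id)  # low-latency action, no cloud round trip
    else:
        cloud_batch.append(reading)  # defer to the cloud for trend analysis

def analyze_in_cloud(batch: list) -> float:
    """Runs in the data center/cloud: heavier, non-time-critical analytics."""
    return statistics.mean(r.value for r in batch)

def trigger_local_shutdown(sensor_id: str) -> None:
    print(f"EDGE: shutting down equipment near {sensor_id}")

# Example flow from endpoint to edge to cloud
batch = []
for r in [Reading("pump-7", 3.2), Reading("pump-7", 9.4), Reading("pump-7", 4.1)]:
    handle_at_edge(r, batch)
print("CLOUD: rolling average =", analyze_in_cloud(batch))
```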

Critical to that entire process are the communications links between the various elements. Early on, and even now, many of those connections are still based on good old wired Ethernet, but an increasing number are moving to wireless. Within organizations, WiFi has grown to play a key role, but because many IoT applications are geographically dispersed, the most important link is proving to be wide-area wireless, such as cellular.

A few proprietary standards, such as Sigfox and LoRa, which leverage unlicensed radio spectrum (that is, unmanaged frequencies that any commercial or non-commercial entity can use without requiring a license), have arisen to address some specific needs and IoT applications. However, it turns out that traditional cellular and LTE networks are well-suited to many IoT applications for several reasons, many of which are not well-known or understood.

First, in the often slower-moving world of industrial computing, there are still many live implementations of, along with relatively large usage of, 2G networks. Yes, 2G. The reason is that many IoT applications generate tiny amounts of data and aren’t particularly time-sensitive, so the older, slower, cheaper networks still work.

Many telcos, however, are in the midst of upgrading their networks to faster versions of 4G LTE and preparing for 5G. As part of that process, many are shutting down their 2G networks so they can reclaim the radio frequencies previously used for 2G and reuse them in their faster 4G and 5G networks. Being able to transition from those 2G networks to later cellular technologies, however, is a practical, real-world requirement.

Second, there’s been a great deal of focus by larger operators and technology providers, such as Ericsson and Qualcomm, on creating low-cost and, most importantly, low-power wide-area networks that can address the connectivity and data requirements of IoT applications, such as smart metering, connected wearables, asset tracking, and industrial sensors, but within a modern network environment.

The two most well-known efforts are LTE Cat M1 (sometimes also called eMTC) and LTE Cat NB1 (sometimes also called NB-IoT or Narrowband IoT), both of which were codified by telecom industry association 3GPP (3rd Generation Partnership Project) as part of what they call their Release 13 set of specifications. Cat M1 and NB1 are collectively referred to as LTE IoT.

Essentially, LTE IoT is part of the well-known and widely deployed LTE network standard (part of the 4G spec—if you’re keeping track) and provides two options, with different speeds and power requirements, for different types of IoT applications. Cat M1 demands more power, but also supports basic voice calls and data transfer rates up to 1 Mbps, versus no voice and 250 kbps for NB1. On the power side, despite the different requirements, both Cat M1 and NB1 devices can run on a single battery for up to 10 years—a critical capability for IoT applications that leverage sensors in remote locations.
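
As a rough illustration of that trade-off, here is a small sketch of how a device designer might choose between the two categories. The figures simply restate the ones above, and the selection logic is my own simplification, not anything from the 3GPP specifications.

```python
# Approximate characteristics restated from the paragraph above; real deployments vary.
LTE_IOT_CATEGORIES = {
    "Cat M1":  {"max_downlink_kbps": 1000, "voice": True,  "relative_power": "higher"},
    "Cat NB1": {"max_downlink_kbps": 250,  "voice": False, "relative_power": "lower"},
}

def pick_category(needs_voice: bool, required_kbps: int) -> str:
    """Naive selection sketch: prefer NB1 (lower power) unless the application
    needs voice or more throughput than NB1 provides."""
    nb1 = LTE_IOT_CATEGORIES["Cat NB1"]
    if not needs_voice and required_kbps <= nb1["max_downlink_kbps"]:
        return "Cat NB1"
    return "Cat M1"

print(pick_category(needs_voice=False, required_kbps=50))   # smart meter -> Cat NB1
print(pick_category(needs_voice=True,  required_kbps=64))   # wearable with voice -> Cat M1
print(pick_category(needs_voice=False, required_kbps=500))  # asset tracker with firmware pulls -> Cat M1
```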

Even better, these two can be deployed alongside existing 4G networks with some software-based upgrades of existing cellular infrastructure. This is critically important for carriers, because it significantly reduces the cost of adding these technologies to their networks, making it much more likely they will do so. In the U.S., both AT&T and Verizon already offer nationwide LTE Cat M1 coverage, while T-Mobile recently completed NB1 tests on a live commercial network. Worldwide, the list is growing quickly with over 20 operators committed to LTE IoT.

In fact, it turns out both M1 and NB1 variants of LTE IoT can be run at the same time on existing cellular networks. In addition, if carriers choose to, they can start by deploying just one of the technologies and then either add or transition to the other. This point hasn’t been very clear to many in the industry because several major telcos have publicly spoken about deploying one technology or the other for IoT applications, implying that they chose one over the other. The truth is, the two network types are complementary and many operators can and will use both.

Of course, to take advantage of that flexibility, organizations also require devices that can connect to these various networks and, in some cases, be upgraded to move from one type of network connection to another. Though not widely known, Qualcomm recently introduced a global multimode modem specifically for IoT devices called the MDM9206 that not only supports both Cat M1 and Cat NB1, but even eGPRS connections for 2G networks. Plus, it includes the ability to be remotely upgraded or switched as IoT applications and network infrastructures evolve.
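
Conceptually, that flexibility means a fleet-management service could move deployed devices from one network technology to another over the air as carrier networks evolve. The sketch below is purely illustrative; the device class, message flow, and configuration interface are hypothetical and are not Qualcomm’s actual MDM9206 interface.

```python
from enum import Enum

class RadioMode(Enum):
    EGPRS = "2G eGPRS"
    CAT_M1 = "LTE Cat M1"
    CAT_NB1 = "LTE Cat NB1"

class IoTDevice:
    """Hypothetical stand-in for a device built around a multimode modem."""
    def __init__(self, device_id: str, mode: RadioMode):
        self.device_id = device_id
        self.mode = mode

    def apply_config(self, new_mode: RadioMode) -> None:
        # In a real deployment this would be an over-the-air configuration update.
        print(f"{self.device_id}: switching {self.mode.value} -> {new_mode.value}")
        self.mode = new_mode

def migrate_fleet(devices, from_mode: RadioMode, to_mode: RadioMode) -> None:
    """E.g., move meters off a 2G network that the carrier is sunsetting."""
    for d in devices:
        if d.mode == from_mode:
            d.apply_config(to_mode)

fleet = [IoTDevice("meter-001", RadioMode.EGPRS), IoTDevice("meter-002", RadioMode.CAT_M1)]
migrate_fleet(fleet, RadioMode.EGPRS, RadioMode.CAT_NB1)
```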

Like many core technologies, the world of communications between the billions of devices that are eventually expected to be part of the Internet of Things can be extremely complicated. Nevertheless, it’s important to clear up potential confusions over what kind of networks we can expect to see used across our range of connected devices. It turns out, those connections may be a bit easier than we thought.

Podcast: SIGGRAPH, AMD and NVIDIA, Apple and Tesla Earnings

This week’s Tech.pinions podcast features Ben Bajarin, Jan Dawson and Bob O’Donnell discussing graphics and AI-related announcements from AMD and NVIDIA made at the SIGGRAPH convention, and the earnings reports from Apple and Tesla.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Smarter Computing

Work smarter, not harder. That’s the phrase that people like to use when talking about how being more efficient in one’s efforts can often have a greater reward.

It’s also starting to become particularly appropriate for some of the latest advances in semiconductor chip design and artificial intelligence-based software efforts. For many years, much of the effort in silicon computing advancements was focused on cramming more transistors running at faster speeds into the same basic architectures. So, CPUs, for example, became bigger and faster, but they were still fundamentally CPUs. Many of the software advancements, in turn, were accomplished by running some of the same basic algorithms and program elements faster.

Several recent announcements from AMD and NVIDIA, as well as ongoing work by Qualcomm, Intel and others, however, highlight how those rules have radically changed. From new types of chip designs, to different combinations of chip elements, to clever new software tools and methodologies that better exploit these architectures, we’re on the cusp of a whole new range of radically smarter silicon that will start enabling the science fiction-like applications we’ve so far seen only in glimpses.

From photorealistic augmented and virtual reality experiences, to truly intelligent assistants and robots, these new hardware chip designs and software efforts are closer to making the impossible seem a lot more possible.

Part of the reason for this is basic physics. While we can argue about the validity of being able to continue the Moore’s Law inspired performance improvements that have given the semiconductor industry a staggering degree of advancements over the last 50 years, there is no denying that things like the clock speeds for CPUs, GPUs and other key types of chips stalled out several years ago. As a result, semiconductor professionals have started to tackle the problem of moving performance forward in very different ways.

In addition, we’ve started to see a much wider array of tasks, or workloads, that today’s semiconductors are being asked to perform. Image recognition, ray tracing, 4K and 8K video editing, highly demanding games, and artificial intelligence-based work are all making it clear that these new kinds of chip design efforts are going to be essential to meet the smarter computing needs of the future.

Specifically, we’ve seen a tremendous rise in interest, awareness, and development of new chip architectures. GPUs have led the charge here, but we’re seeing things like FPGAs (field programmable gate arrays)—such as those from the Altera division of Intel—and dedicated AI chips from the likes of Intel’s new Nervana division, as well as chip newcomers Google and Microsoft, start to make their presence felt.

We’re also seeing interesting new designs within more traditional chip architectures. AMD’s new high-end Threadripper desktop CPU leverages the company’s Epyc server design and combines multiple independent CPU dies connected together over a high-speed Infinity Fabric connection to drive new levels of performance. This is a radically different take than the traditional concept of just making individual CPU dies bigger and faster. In the future, we could also see different types of semiconductor components (even from companies other than AMD) integrated into a single package all connected over this Infinity Fabric.

This notion of multiple computing parts working together as a heterogeneous whole is seeing many types of iterations. Qualcomm’s work on its Snapdragon SoCs over the last several years, for example, has been to combine CPUs, GPUs, DSPs (digital signal processors) and other unique hardware “chunks” into a coherent whole. Just last week, the company added a new AI software development kit (SDK) that intelligently assigns different types of AI workloads to different components of a Snapdragon—all in an effort to deliver the best possible performance.
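
The general pattern (inspect the workload, then route it to the compute block best suited for it) can be sketched in a few lines. This is my own simplified illustration of the idea, not Qualcomm’s actual SDK, and the routing heuristics are made up for the example.

```python
from enum import Enum

class ComputeUnit(Enum):
    CPU = "CPU"
    GPU = "GPU"
    DSP = "DSP"

def choose_unit(workload: dict) -> ComputeUnit:
    """Hypothetical heuristic: low-power, always-on audio tasks go to the DSP,
    highly parallel vision models to the GPU, everything else to the CPU."""
    if workload["type"] == "audio" and workload["always_on"]:
        return ComputeUnit.DSP
    if workload["type"] == "vision" and workload["parallelism"] == "high":
        return ComputeUnit.GPU
    return ComputeUnit.CPU

jobs = [
    {"name": "keyword spotting",      "type": "audio",  "always_on": True,  "parallelism": "low"},
    {"name": "photo scene detection", "type": "vision", "always_on": False, "parallelism": "high"},
    {"name": "text autocorrect",      "type": "nlp",    "always_on": False, "parallelism": "low"},
]
for job in jobs:
    print(f"{job['name']} -> {choose_unit(job).value}")
```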

Yet another variation can come from attaching high-end, power-demanding external GPUs (or other components) to notebooks via the Thunderbolt 3 standard. Apple showed this with an AMD-based external graphics card at their last event, and this week at the SIGGRAPH computer graphics conference, NVIDIA introduced two entries of its own to the eGPU market.

The developments also go beyond hardware. While many people are (justifiably) getting tired of hearing about how seemingly everything is being enhanced with AI, NVIDIA showed a compelling demo at their SIGGRAPH press conference in which the highly compute-intensive task of ray-tracing a complex image was sped up tremendously by leveraging an AI-created improvement in rendering. Essentially, NVIDIA used GPUs to “train” a neural network how to ray-trace certain types of images, then converted that “knowledge” into algorithms that different GPUs can use to redraw and move around very complex images, very quickly. It was a classic demonstration of how the brute-force advancements we’ve traditionally seen in GPUs (or CPUs) can be surpassed with smarter ways of using those tools.
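
The underlying idea is to train a network, offline, on pairs of cheap, noisy renders and fully converged ray-traced frames, and then run only the network’s fast forward pass at render time. Below is a toy sketch of that approach using random tensors as stand-ins for real image pairs; it is not NVIDIA’s actual denoiser, just an illustration of the technique.

```python
import torch
import torch.nn as nn

# Toy denoiser: maps a noisy, low-sample-count render toward an approximation
# of the fully converged ray-traced frame.
denoiser = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in data: in practice these would be (noisy render, converged render) pairs.
noisy = torch.rand(8, 3, 64, 64)
ground_truth = torch.rand(8, 3, 64, 64)

for step in range(100):  # "training" happens offline, on big GPUs
    optimizer.zero_grad()
    loss = loss_fn(denoiser(noisy), ground_truth)
    loss.backward()
    optimizer.step()

# At render time, a different GPU runs only the cheap forward pass on a fast render.
with torch.no_grad():
    cleaned = denoiser(torch.rand(1, 3, 64, 64))
print(cleaned.shape)  # torch.Size([1, 3, 64, 64])
```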

After seeming to stall for a while, the performance requirements for newer applications are becoming clear—and the amount of work that’s still needed to get there is becoming clearer still. The only way we can start to achieve these new performance levels is with the types of heterogeneous chip architecture designs and radically different software approaches that are starting to appear.

Though some of these advances have been discussed in theory for a while, it’s only now that they’ve begun to appear. Not only are we seeing important steps forward, but we are also beginning to see the fog lift as to the future of these technologies and where the tech industry is headed. The image ahead is starting to look pretty good.

Podcast: AMD Earnings, Microsoft AI Silicon, Samsung, Apple Plants

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell discussing AMD’s quarterly earnings, Microsoft’s announcement of a custom AI-enabled chip for the next HoloLens, Samsung’s earnings, and rumors of Apple building three manufacturing plants in the US.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

The Value of Limits

No one likes to think about limits, especially in the tech industry, where the idea of putting constraints on almost anything is perceived as anathema.

In fact, arguably, the entire tech industry is built on the concept of bursting through limitations and enabling things that weren’t possible before. New technology developments have clearly created incredible new capabilities and opportunities, and have generally helped improve the world around us.

But there does come a point—and I think we’ve arrived there—where it’s worth stepping back to both think about and talk about the potential value of, yes, technology limits…on several different levels.

On a technical level, we’ve reached a point where advances in computing applications like AI, or medical applications like gene splicing, are raising even more ethical questions than practical ones on issues such as how they work and for what applications they might be used. Not surprisingly, there aren’t any clear or easy answers to these questions, and it’s going to take a lot more time and thought to create frameworks or guidelines for both the appropriate and inappropriate uses of these potentially life-changing technologies.

Does this mean these kinds of technological advances should be stopped? Of course not. But having more discourse on the types of technologies that get created and released certainly needs to happen.

Even on a practical level, the need for limiting people’s expectations about what a technology can or cannot do is becoming increasingly important. With science-fiction-like advances becoming daily occurrences, it’s easy to fall into the trap that there are no limits to what a given technology can do. As a result, people are increasingly willing to believe and accept almost any kind of statements or predictions about the future of many increasingly well-known technologies, from autonomous driving to VR to AI and machine learning. I hate to say it, but it’s the fake news of tech.

Just as we’ve seen the fallout from fake news on all sides of the political perspective, so too are we starting to see that unbridled and unlimited expectations for certain new technologies are starting to have negative implications of their own. Essentially, we’re starting to build unrealistic expectations for a tech-driven nirvana that doesn’t clearly jibe with the realities of the modern world, particularly in the timeframes that are often discussed.

In fact, I’d argue that a lot of the current perspectives on where the technology industry is and where it’s headed are based on a variety of false pretenses, some positively biased and some negatively biased. On the positive side, there’s a sense that technologies like AI or autonomous driving are going to solve enormous societal issues in a matter of a few years. On the negative side, there are some who see the tech industry as being in a stagnant period, still hunting for the next big thing beyond the smartphone.

Neither perspective is accurate, but ironically, both stem from the same myth of limitlessness that seems to pervade much of the thinking in the tech industry. For those with the positive spin, I think it’s critical to be willing to admit to a technology’s limitations, in addition to touting its capabilities.

So, for example, it’s OK to talk about the benefits that something like autonomous driving can bring to certain people in certain environments, but it’s equally important to acknowledge that it isn’t going to be a great fit for everyone, everywhere. Realistically and practically speaking, we are still a very long way from having a physical, legal, economic and political environment for autonomous cars to dramatically impact the transportation needs of most consumers. On the other hand, the ability for these autonomous transportation technologies to start having a dramatic impact on public transportation systems or shipping fleets over the next several years seems much more realistic (even if it is a lot less sexy).

For those with a more negative bias, it’s important to recognize that not all technologies have to be universally applicable to make them useful or successful. The newly relaunched Google Glass, for example, is no longer trying to be the next-generation computing device and industry disruptor that it was initially thought to be. Instead, it’s being focused on (or limited to) work-based applications where it’s a great fit. As a result, it won’t see the kind of sales figures that something like an iPhone will, but that’s OK, because it’s actually doing what it is best designed to do.

Accepting and publicly acknowledging that certain technologies can’t do some things isn’t a form of weakness—it’s a form of strength. In fact, it creates a more realistic scenario for them to succeed. Similarly, recognizing that while some technologies are great, they may not be great for everything, doesn’t mean they’re a failure. Some technologies and products can be great for certain sub-segments of the market and still be both a technical and financial success.

If, however, we keep thinking that every new technology or tech industry concept can be endlessly extended without limits—everything in my life as a service, really?—we’re bound to be greatly disappointed on many different levels. Instead, if we view them within a more limited and, in some cases more specialized, scope, then we’re much more likely to accurately judge what they can (or cannot) do and set expectations accordingly. That’s not a limit, it’s a value.

Podcast: Microsoft Earnings, Samsung Bixby, Google Glass

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing the recent earnings report from Microsoft, the release of Samsung’s Bixby Voice-based UI, the re-introduction of Google Glass, and more.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Tech in the Heartland

Having just spent a few weeks vacationing in a part of the US where an abundance of corn and limestone-filtered water, along with a predilection for distilled beverages, led to the creation of our country’s most famous native spirit—bourbon—I’ve regained a sense of life’s priorities: family, food and fun. (And for the record, the Kentucky Bourbon Trail is a great way to spend a few days exploring that part of the world—especially if you’re a fan of the tantalizing golden-brown elixir.)

Of course, while I was there, I also couldn’t help noticing what sort of technology was being used (or not) and how people think about and use tech products in that part of the country.

Within the many distilleries I visited, the tech influence was relatively modest. Sure, there were several temperature monitors on the mash and fermentation vats, a few industrial automation systems, and I did see one control room with multiple displays and a single Dell server rack that monitored the process flow of one of the largest distilleries, but all in all, the process of making bourbon is still decidedly old school. And for the record, that just seems right.

As with many traditional industries, the distilled spirits business has begun to integrate some of the basic elements of IoT technologies. I have no doubt that it’s modestly improving their efficiency and providing them with more raw data upon which they can do some basic analysis. But it also seems clear that there are limits to how much improvement those new technologies can make. With few exceptions, the tools in place appeared to be more focused on codifying and regulating certain processes than really driving major increases in production.

Ensuring consistent product quality and maximizing output are obviously key goals across many different industries, but the investments necessary to reach those outcomes, and the return on those investments, aren’t necessarily obvious for any but the largest companies in these various industries. And that’s a challenge that companies offering IoT solutions are going to face for some time.

What became apparent as I observed and thought about what I saw was that the technology implementations were all very practical ones. If there was a clear and obvious benefit, along with a comfort factor that made using it a natural part of the distillation, production or bottling process, then the companies running the distilleries seemed willing to deploy it. And if not, well, that’s why there are still a lot of traditional manufacturing processes in place.

That sense of practicality extended to the people I observed as well. People I saw there were using products like smartphones and other devices as much as people on the coasts—heck, my 93-year-old mother-in-law has an Amazon Echo to play her favorite big band music, uses an iPad every day to play games, and maintains a Gmail account to stay in touch with her children, grandchildren, and great-grandchildren—but the emphasis is very much on the practical, tool-like nature of the devices.

I also noticed a wider range of lower-cost Android phones and fewer iPhones being used. Of course, much of that is due to income discrepancies. The median household income in the commonwealth of Kentucky is $43,740, which is 19% lower than the US median of $53,889 according to the latest US Census Bureau data, and barely more than half the San Francisco county median income of $81,294. Given those realities, people in many regions of the US simply don’t have the luxury of getting all the latest tech gadgets whenever they come out. Again, they view tech products as more practical tools and expect them to last.

There’s also a lot more skepticism and less interest in many of the more advanced technologies the tech industry is focused on. Given the limited public transportation options, cars, trucks and other forms of personal transportation are extremely important in this (and many other) part(s) of the country—I’m convinced I saw more car dealership ads on local TV and in local newspapers than I can recall seeing anywhere—but there’s absolutely zero discussion of any kind of semi-autonomous or autonomous driving features. People simply want good-looking, moderately priced vehicles that can get them from point A to point B.

In the case of AI and potential loss of jobs, perhaps there should be more concern, but from a practical perspective, the bigger worries are about factory automation, robotics and other types of technologies that can replace traditional manufacturing jobs, which are more common in many parts of middle America.

Also, the idea that somehow nearly everything will become a service seems extraordinarily far-fetched in places similar to where I visited. That isn’t to say that we won’t see service-like business models take hold in major metropolitan areas. However, it’s much too easy to forget that most of the country, let alone the world, is not ready to accept the idea that they won’t really own anything and will simply make ongoing monthly payments to untold numbers of companies providing them with everything they need via an equally large number of individual services.

As Facebook’s Mark Zuckerberg has started to explore, an occasional view from outside the rose-colored lens of Silicon Valley can really help shape your perspective on the real role that technology is playing (or might soon play) in the modern world.

Business Realities vs. Tech Dreams

Never underestimate politics.

No, not the governmental type, but the kind that silently (or not so silently) impacts the day-to-day decisions made in businesses of all sizes, and personal relationships of all stripes.

Even in the seemingly distinct world of technology purchasing, there is often a surprisingly large, though not always obvious, set of key influences that come from decidedly non-technical sources and perspectives.

In fact, one of the more interesting things you realize, the more time you spend in the tech industry, is that good technology is far from a guarantee for product or market success. Conversely, while there are certainly exceptions, a large percentage of product or even complete business failures comes from factors that have little to do with the technologies involved.

Business realities, organizational politics, industry preferences, existing (or legacy) hardware, software, and even people, as well as many other factors that you might not think would have an influence on buying decisions, are often far more important than the technology itself. Unfortunately, there seem to be quite a few people in tech who don’t recognize this, and a lot of them only learn it the hard way.

From great startup ideas to innovative product incarnations from existing players, the number of new products that are thrown out into the world on the assumption that the technology is good enough to stand out on its own is still surprisingly high. While I can certainly appreciate this nearly slavish devotion to the disruptive potential that a great technology can have, it’s still kind of shocking how many ideas get funded or supported with little practical chance of success.

In part, this speaks to the staggering amount of money being lavished on tech-focused entrepreneurs these days thanks to the influence that technology companies are having even on very traditional industries. From the influence of IoT in manufacturing or process industries, to the rewriting of the rules for something as basic as retail groceries, the reach of the tech industry and people involved with it has grown surprisingly wide. As a result, there’s an enormous amount of money being tossed towards tech initiatives, but some of it appears to be done without much thought. Put another way, there sure seems to be a lot of stupid money in tech.

Of course, another reason is that accurately predicting major tech trends has proven to be a challenging exercise for most everyone. For every app store concept blazing a trail of new business opportunities, there’s a lot of 3D TV-like concepts strewn across the side of the road. Given that reality, it certainly makes sense to hedge your bets across a wide range of product and technology concepts to make sure you don’t miss a big new opportunity.

At the same time, companies (and investors) need to spend more time thinking through the tangential, historical, political, social, and yes, personal impacts of a new product or technology before they bring it to market. Arguably, there should be even more time spent on these non-technical aspects than the technical ones, but few companies are willing to make the effort or do the necessary research to really understand these potentially crippling issues.

With enterprise IT-focused products, for example, if a new offering has the potential to improve efficiencies for a given process or department but does so in a way that potentially eliminates the jobs of people in that department, it often doesn’t matter how conceptually cool the technology is because it’s going to hit resistance from existing IT personnel. In fact, some of the biggest challenges in trying to deploy ground-breaking new technologies in businesses are people problems (i.e., political), not technology ones.

In the case of a hot new technology like IoT, it’s not uncommon to find different groups with a particular vested interest within an organization getting into “turf wars” when a new product or technology consolidates previously distinct business segments or departments. Gone are the days when the only part of a business that buys tech-related products is the IT person or department—the lower cost and ease of use of many new tech products and services have democratized their reach—so the potential for these kinds of technological land grabs grows every day.

In the consumer world, the influence of “legacy” products, tech “fashion”, and other non-technical factors can be much bigger than many realize when it comes to consumer purchase choices. Whether it’s the desire or need to work with products that people already own, or a predilection (or disdain) for particular brands, these other non-tech issues are even more important to consumers than they are to business technology buyers.

The bottom line is that the tech purchase process for both businesses and consumers is far from the ivory tower, purely rational set of comparisons that many in the tech industry presume it to be. And, as tech further extends its influence across a wider range of our personal and professional lives, that separation from a simple rational analysis is likely to grow.

Great technology will always be important, but seeing that technology and its potential impact in the right context, and understanding how it may, or may not, fit into existing environments will be an increasingly important factor in determining the ultimate success or failure of many new ventures.

The Power of Hidden Tech

The tech world is dominated by some of the most powerful brands in the world. Companies like Apple, Amazon, Google, Facebook, Netflix, Intel, Samsung and others are featured in the mainstream and business media as much, if not more, than the industrial giants of old. In fact, they’ve become common household names.

They’ve earned their solid reputations through successful products, hard work, and their ability to deliver the kinds of financial results that have made them the darlings of the investment community too.

As impressive and powerful as this group may be, however, they certainly aren’t the only companies in tech doing important work. Though it’s easy to forget, there’s an enormous number of lesser-known tech players that are helping to enable the amazing tech-driven advances that we all enjoy.

At the core, there is an entire range of companies creating the semiconductor chips that sit at the heart not only of our connected devices, but of the servers and other infrastructure that enable the cloud-based services to which we’ve all become accustomed. Companies that offer the designs and intellectual property used in those chips, most notably UK-based ARM, but also Synopsys and Imagination Technologies, play an extremely important, but often overlooked, role in driving the modern architectures behind everything from IoT to VR and AI.

Another often-ignored step in the chain is test and measurement technology. Lesser-known companies like National Instruments are helping drive the components, core technologies, and final products for everything from 5G radios to autonomous cars to industrial IoT and much more.

In semiconductor chips and other components, you have big names like Qualcomm and Nvidia, but there is an enormous range of lesser-known companies building key parts for all kinds of devices. From Texas Instruments (TI) and Renesas in automotive, to Silicon Labs for home networking, to South Korea-based LG Philips Display and Taiwan-based AUO for displays, to Synaptics for fingerprint readers, there’s a huge ecosystem of critical component suppliers.

Even some of the bigger names in semiconductors are branching off into new areas for which they aren’t commonly known. Later today, for example, AMD will be formally unveiling the details of its Epyc server CPU, the first credible threat to Intel’s dominance in the data center in about 10 years. Not to be outdone, Intel is making significant new investments in AI silicon with Nervana and Mobileye for connected cars. Qualcomm’s audio division—part of their little-known acquisition of Cambridge Silicon Radio (CSR) a few years back—just unveiled a complete suite of components and reference designs for smart speakers, like Amazon’s Echo.

In addition to hardware, there is, of course, a huge number of lesser-known software players. Companies like VMware and Citrix continue to drive cloud-based computing and more efficient use of local data centers through server and application virtualization and other critical technologies. Application development and delivery in the enterprise and in the cloud is being enabled by Docker, a company whose technology lets applications be split into multiple pieces called containers, which can be deployed, replicated, and much more.
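
For readers who have never touched containers, here is roughly what that replication looks like in practice: a minimal sketch using Docker’s Python SDK, assuming a local Docker daemon is running and the SDK has been installed (pip install docker).

```python
import docker

client = docker.from_env()  # talks to the local Docker daemon

# Launch three identical replicas of a small web service from a single image.
replicas = [
    client.containers.run("nginx:alpine", detach=True, name=f"web-{i}")
    for i in range(3)
]

for c in replicas:
    print(c.name, c.status)

# Tear the replicas back down.
for c in replicas:
    c.stop()
    c.remove()
```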

Vendors like Canonical, with its Ubuntu distribution, are not only enabling user-friendly Linux-based desktops for developers and other enthusiasts, they are also offering powerful alternatives to Microsoft’s server OS. In the case of software-defined storage and hyperconverged infrastructure (HCI) server appliances, companies like Nutanix, Pivot3, and others are enabling entirely new software-defined data centers that promise to revolutionize how computing power is created and delivered from public, private, and hybrid clouds.

Though they will likely never get the kind of recognition that the big brand tech players do, the products, technologies, and contributions of these and thousands more lesser-known tech companies play an incredibly critical role in the tech world. By driving many of the key behind-the-scenes developments, these types of companies provide the efficient, safe, and effective tech products and services that have enabled the bigger brands to become such an essential part of our daily lives.

Podcast: Microsoft Surface Laptop, Windows 10S, iPad Pro, Amazon and Whole Foods

This week’s Tech.pinions podcast features Carolina Milanesi, Tom Mainelli and Bob O’Donnell discussing Microsoft’s new Surface Laptop and Windows 10S, the Apple iPad Pro and some of Tim Cook’s comments in his Bloomberg interview, and Amazon’s purchase of Whole Foods.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Computing Evolves from Outside In to Inside Out

Sometimes, the most radical changes come from simply adjusting your perspective.

In the case of computing and the devices we spend so much of our time on, that perspective has almost always been from the outside, where we look into the digital world that smartphones, PCs, and other devices essentially create for our viewing pleasure.

But, we’re on the cusp of one of the most profound changes in how people interact with computers in some time. Why, you ask? Because now, those devices are incorporating data from the real world around us, and enabling us to see an enhanced version of the outside world from the inside out. In a sense, we’re going from digital data inside to digitally-enhanced reality on the outside.

The most obvious example of this phenomenon is augmented reality (AR), which can overlay internally created digital images onto our devices’ camera inputs from the real world and create a mixed reality combination. In truth, the computer vision technology at the heart of AR has applications in many other fields as well—notably for autonomous driving—and all of them involve integrating real-world data into the digital domain, processing that data, and then generating real-world outcomes that we can physically see, or otherwise experience. However, this phenomenon of inside-out computing goes way beyond that.

All the sensor data that devices are collecting from the simultaneously profound and meaningless concept of the Internet of Things (IoT) is giving us a whole new perspective on the world, our devices, and even the people around us. From accelerometers and gyroscopes in our smartphones, to microphones in our smart speakers, to vibration sensors on machines, there’s a staggering amount of data that’s being collected, analyzed, and then used to generate information and, in many cases, actions on our behalf.

The process basically involves measuring various aspects of the physical world, converting those measurements into data, computing results from that data, incorporating that data into algorithms or other programs designed to react to them, and then generating the appropriate result or action.
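
Stripped to its bones, that loop looks something like the sketch below; the sensor, thresholds, and actions are hypothetical stand-ins, but the measure-compute-act structure is the point.

```python
import random

def read_sensor() -> float:
    """Measure the physical world (stand-in: a temperature probe)."""
    return 18.0 + random.random() * 10.0

def compute(reading: float, history: list) -> dict:
    """Turn the raw measurement into data, then into a decision."""
    history.append(reading)
    avg = sum(history) / len(history)
    return {"reading": reading, "rolling_avg": avg, "too_warm": avg > 24.0}

def act(result: dict) -> None:
    """Generate a real-world outcome from the computed result."""
    if result["too_warm"]:
        print(f"avg {result['rolling_avg']:.1f}C -> turning cooling on")
    else:
        print(f"avg {result['rolling_avg']:.1f}C -> no action")

history = []
for _ in range(5):  # in a real deployment, this loop runs continuously
    act(compute(read_sensor(), history))
```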

This is where several other key new concepts come together in this new inside-out view of computing. Specifically, machine learning (ML) and artificial intelligence (AI) are at the heart of many of these new data processing algorithms. Though there are many types of ML and AI, in many cases they are focused on finding patterns and other types of logical connections in the data.

In the real world, this means that these algorithms can do things like examine real-world images, our calendar, our documents, the music we listen to, etc., and convert that “input” into more meaningful and contextual information about the world around us. It helps determine, for example, where we should go, what we should eat, who we should meet—the permutations are staggering.

Most importantly, the real-world data that our devices can now collect or get access to can then be used to “train” these algorithms to learn about what we do, where we are, what we like, etc. At its heart, this is what the concept of ambient computing—which is essentially another way to talk about this inside-out computing model—is all about.
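
In practice, “training on what we do” can be as mundane as fitting a model to logged context-and-action pairs and then predicting the next helpful action. The sketch below uses scikit-learn with entirely made-up features and labels, purely to show the shape of the idea.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical logged context: [hour_of_day, is_weekday, at_home]
X = [
    [7, 1, 1], [8, 1, 0], [12, 1, 0], [18, 1, 0], [22, 1, 1],
    [9, 0, 1], [13, 0, 1], [19, 0, 0],
]
# What the user actually did in each of those contexts.
y = [
    "play news", "show commute", "suggest lunch spot", "show commute", "dim lights",
    "play music", "suggest lunch spot", "suggest restaurant",
]

model = DecisionTreeClassifier().fit(X, y)

# A weekday evening away from home: the assistant likely proposes the commute view.
print(model.predict([[18, 1, 0]])[0])
```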

As different and distinct as the many technologies I’ve discussed may first appear, they all share this outward projection of computing into the real world. This is a profoundly different, profoundly more personal, and profoundly more valuable type of computing than we’ve ever had before. It’s what makes the future of computing and AI and IoT and AR and all of these components of “contextual computing” so exciting—and so scary.

Never before have we really seen or experienced this extension of the digital world into our analog lives as intensely as we are now starting to see. Sure, there have been a few aspects of it here or there in the past, but we’re clearly entering into a very different type of computing future that’s bound to give all of us a very different perspective.

Podcast: Apple WWDC 2017

This week’s Tech.pinions podcast features Ben Bajarin, Carolina Milanesi and Bob O’Donnell discussing the many announcements from Apple’s Worldwide Developer Conference including new iPad and Mac hardware, ARKit, Siri, HomePod, iOS 11 and more.

Thanks again to our sponsor, Small.Chat.

You already chat with your team on Slack. Now with Smallchat, you can live chat with customers on your website all from a single Slack channel.

Smallchat harnesses the power of Slack’s customizable notifications and mobile apps to keep you connecting with your customers from any device.

Plus, Smallchat can be easily customized to match your unique brand.

Connect more directly, respond faster, and provide a better customer experience. Get Smallchat Pro for 2 months free by visiting small.chat/techpinions.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

The Overlooked Surprises of Apple’s WWDC Keynote

For some, Apple’s WWDC keynote event went like they hoped, with the company introducing some exciting new products or technologies that hit all the sweet spots in today’s dramatically reshaped tech environment. Augmented reality (AR), artificial intelligence, smart speakers, digital assistants, convolutional neural networks, machine learning and computer vision were all mentioned in some way, shape or form during the address.

For others, the event went like they expected, with Apple delivering on virtually all the big rumors they were “supposed” to meet: updated Macs and iPads, a platform for building AR apps on iOS devices, and a Siri-driven smart speaker.

For me, the event was a satisfying affirmation that the company has not fallen behind its many competitors, and is working on products and platforms that take advantage of the most interesting and potentially exciting new technologies across hardware, software and services that we’ve seen for some time. In addition, they laid the groundwork for ongoing advancements in overall contextual intelligence, which will likely be a critical distinction across digital assistants for some time to come.

Part of the reason for my viewpoint is that there were several interesting, though perhaps a bit subtle, surprises sprinkled throughout the event. Some of the biggest were around Siri, which a few people pointed out didn’t really get much direct attention and focus in the presentation.

However, Apple described several enhancements to Siri that are intended to make it more aware of where you are, what you’re doing, and what things you care about. Most importantly, a lot of this AI- or machine learning-based work is going to happen directly on iOS devices. Just last year, Apple caught grief for talking about differential privacy and the ability to do machine learning on an iPhone, because the general thinking then was that you could only do that kind of work by collecting massive amounts of data and performing the analysis in large data centers.

Now, a year later, the thinking around device-based AI has done a 180 and there’s increasing talk about being able to do both inferencing and learning—two key aspects of machine learning—on client devices. Apple didn’t mention differential privacy this year, but they did highlight that by doing a lot of this AI/machine learning work on the device, they can keep people’s information local and not have to send it up to large cloud-based datacenters. Not everyone will grasp this subtlety, but for those who do care a lot about privacy, it’s a big advantage for Apple.

On a completely different front, some of Apple’s hardware updates, particularly around the Mac, highlight how serious they’ve once again become about computing. Not only did they successfully catch up to many of their PC brethren, they were also demoing new kinds of computing architectures—such as Thunderbolt-attached external graphics for notebooks—that very few PC companies have explored. In addition, bringing 10-bit color displays to mainstream iMacs is a subtle, but critical, distinction for driving higher-quality computing experiences.

On the less positive front, there are some key questions about the details of the HomePod’s audio processing. To be fair, I did not get to hear an audio demo, but conceptually, the idea of applying fairly major processing, on a mono speaker, to audio that was already significantly processed during its creation to sound a certain way on stereo speakers strikes me as a bit challenging. Yes, some songs may sound pleasing, but for true audiophiles who actually want to hear what the artist and producer intended, Apple’s positioning of the HomePod as a super high-quality speaker is going to be a very tough sell.

Of course, the real question with HomePod will be how good of a Siri experience it can deliver. Though it’s several months from shipping, I was a bit surprised there weren’t more demos of interactions with Siri on the HomePod. If that doesn’t work well, the extra audio enhancements won’t be enough to keep the product competitive in what is bound to be a rapidly evolving smart speaker market.

The real challenge for Apple and other major tech companies moving forward is that many of the enhancements and capabilities they’re going to introduce over the next several years are likely to be a lot subtler refinements of existing products or services. In fact, I’ve seen and heard some say that’s what they felt about this year’s WWDC keynote. Things like making smart assistants smarter and digital speakers more accurate require a lot of difficult engineering work that few people can really appreciate. Similarly, while AI and machine learning sound like exotic, exciting technological breakthroughs, their real-world benefits should actually be subtle, but practical extensions to things like contextual intelligence, which is a difficult message to deliver.

If Apple can successfully do so, that will be yet another surprise outcome of this year’s WWDC.

Podcast: AR and VR, Essential Phone, Apple WWDC Preview

This week’s Tech.pinions podcast features Ben Bajarin, Jan Dawson and Bob O’Donnell discussing developments in augmented reality and virtual reality from the AWE Expo, analyzing the announcements from Andy Rubin’s Essential, and offering a preview of next week’s Apple Worldwide Developer Conference.

Thanks again to our sponsor, Small.Chat.

You already chat with your team on Slack. Now with Smallchat, you can live chat with customers on your website all from a single Slack channel.

Smallchat harnesses the power of Slack’s customizable notifications and mobile apps to keep you connecting with your customers from any device.

Plus, Smallchat can be easily customized to match your unique brand.

Connect more directly, respond faster, and provide a better customer experience. Get Smallchat Pro for 2 months free by visiting small.chat/techpinions.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Are AR and VR Only for Special Occasions?

As exciting and fast moving as the topics of augmented reality (AR) and virtual reality (VR) may be, there’s a critical question that needs to be asked and thoroughly analyzed when it comes to these technologies.

Are they well-suited for regular use or just special occasions?

While simple on the surface, the answer to that question carries with it key implications not just about the potential size of the market opportunity, but about the kinds of products that should be created, the manner in which they’re marketed and sold, and even when different types of products should come to market.

As an early enthusiast of both AR and VR—particularly after having tried several devices, such as Microsoft’s HoloLens and HTC’s Vive—I found it was (and still is) easy to get caught up in the excitement and potential of the technology. Indeed, the first time you get a demonstration of a good AR or VR headset (and not all of them give a great experience, by the way), you can’t help but think this is the future of computing.

The way VR engulfs your visual senses, or AR provides new ways of looking at the world around you, is pretty compelling when you first try them. That’s why so many people and companies, from product makers to component suppliers to software makers to retailers, are so eager to offer an AR or VR experience to as wide a range of consumers as possible. The thinking is (or has been) that once people try it, they’ll be hooked.

While that’s certainly a valid and worthwhile effort, as time has passed, it’s not entirely clear that merely exposing people to AR and VR is all that’s necessary to achieve the kind of market success that many presumed would occur. In fact, a number of recent consumer studies have highlighted what general market trend observations will also confirm—AR and VR products are indeed growing, but at a slower pace than many (including me) expected.

So, the obvious question is why? Why aren’t more people getting into AR and VR and purchasing more of the products and software that provide the experience?

While there isn’t likely one answer to that question, one can’t help but think about the underlying assumptions that are buried in the title and first question of this column. Is it realistic to think that AR and VR are ready for general use, and if they’re not, is it fair to assume that people are willing to spend good money on something they may only use occasionally?

At its essence, that’s the fundamental question that needs to be answered if we are to understand how the AR and VR markets are likely to evolve.

To be fair, some of the technological limitations facing current products certainly have an impact on the market. Large, clunky, wired headsets are not exactly the stuff of mass market dreams, after all.

But even presuming the technology can be reduced to a manageable or even essentially “invisible” form inside regular-sized glasses—and it will still be a long time before things really get that small—is the very fact that it’s in a form that has to be put on our face going to keep it from ever really succeeding?

As we’ve seen with smartwatches, just because technology can be reduced down to a reasonable size and into a well-known form, doesn’t mean people will necessarily adopt it. Even cool capabilities haven’t been able to convince people who’ve never adapted to or cared for wearing a regular watch to don a smartwatch. They just don’t want it.

In the case of glasses, it turns out that over 60% of people do wear some kind of eyeglasses (and another 11% or so wear contacts), but the results vary dramatically by age. For the highly targeted segment under age 40, eyewear usage is less than half of that, meaning nearly ¾ of consumers under age 40 don’t wear corrective eyewear. Trying to convince that group to put something on their face other than for occasional special purposes seems like a daunting task, regardless of how amazing the technology inside it may be.

Even if we get past the form factor issues, there are still potential issues with the supply of engaging content and experiences once the initial excitement over the technology wears off—which it does for most people. A great deal of effort from companies of all shapes and sizes is happening in VR and AR content, so I do expect things to improve, but right now there are a lot more one-time demos than applications with long-term lasting value.

Ironically, I think it could be some of the easiest and simplest types of applications that end up giving AR, in particular, more lasting power and market influence. Simple ways to augment our knowledge or understanding of real world objects or processes will likely seep slowly into general usage and eventually reach the point where we’ll have a hard time imagining life without them. We’re not there yet, though, so for now, I think AR and VR are best suited for special occasions—with appropriate adjustments in market expectations as a result.

Podcast: Microsoft Surface, LeEco, Lenovo, Huawei

This week’s Tech.pinions podcast features Carolina Milanesi, Jan Dawson and Bob O’Donnell discussing numerous events and companies related to China, including Microsoft’s Surface Pro launch in China, LeEco’s US restructuring, earnings and smartphone shipments from Lenovo, and new PC announcements from Huawei.

Thanks again to our sponsor, Small.Chat.

You already chat with your team on Slack. Now with Smallchat, you can live chat with customers on your website all from a single Slack channel.

Smallchat harnesses the power of Slack’s customizable notifications and mobile apps to keep you connecting with your customers from any device.

Plus, Smallchat can be easily customized to match your unique brand.

Connect more directly, respond faster, and provide a better customer experience. Get Smallchat Pro for 2 months free by visiting small.chat/techpinions.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

The Digital Car

The evolution of the modern automobile is arguably one of the most exciting and most important developments in the tech world today. In fact, it’s probably one of the most important business and societal stories we’ve seen in some time.

The leadership at no less venerable a player than Ford Motor Co. obviously felt the same way and just replaced their CEO, despite his long-term tenure with the company, and the record-setting profits he helped drive during his 3-year leadership there. The reason? Not enough progress on advancing the company’s cars forward in the technology domain, particularly with regard to electric vehicles, autonomous driving, and new types of transportation service-focused business models.

As has been noted by many, these three capabilities—electrification, autonomy, and cars as a service—are considered the key trends driving the auto market today and into the future, at least as far as Wall Street is concerned. In reality, the picture isn’t nearly that simple, but it is clear that tech industry-driven initiatives are driving the agenda for today’s carmakers. And it’s pushing many of them into uncomfortable positions.

It turns out, however, that in spite of the importance of this critical evolution of automobiles, this is one of those issues that’s a lot harder to overcome than it first appears.

Part of the problem is that as cars have advanced, and various technologies have been integrated into them, they’ve evolved into enormously complex machines. Today’s automobiles have as many as 150 programmable computing elements (often called ECUs, or Electronic Control Units), surprisingly large (and heavy) amounts of wiring, numerous different types of electronic signaling and interconnect buses, and up to 100 million lines of software, in addition to the thousands of mechanical parts required to run a car. Frankly, it’s somewhat of a miracle that modern cars run as well as they do, although reports of technical glitches and other problems in newer cars do seem to be on the rise.

In addition to the mechanical and computer architecture complexity of the cars themselves, the organizational and business model complexity of today’s car companies and the entire auto supply chain also contributes to the problem. Having evolved over the 100+ year history of the automotive industry, the system of multiple Tier 1 suppliers, such as Harman, Delphi, Bosch and others, buying components from Tier 2 and Tier 3 suppliers further down the chain, and car brand OEMs (such as Ford) piecing together multiple sub-systems from different combinations of Tier 1s to build their cars, is notoriously complex.

But toss in the fact that there are often groups within the car maker that are specifically responsible for a given ECU (such as, say, heating, a/c and other “comfort” controls), and whose jobs may be at risk if someone suggests that the company change to a simpler architecture that combines the functionality of multiple ECUs into a smaller, more manageable number, and, well, you get the picture.

If ever there was an industry ripe for disruption, and if ever there was an industry in need of a tech overhaul, the automotive industry is it. That’s why many traditional carmakers are concerned, and why many tech companies are salivating at a chance to get a piece of the multi-trillion (yes, with a “t”) dollar global automotive industry.

It’s also why companies like Tesla have made such a splash. Despite their very modest sales, they’re seen as a credible attempt to drive the kind of technological and organizational disruption that many people believe is necessary to transform the automotive industry. In truth, however, because of the inherent and ingrained nature of the auto supply chain, even Tesla has to follow many of the conventions of multiple Tier 1 suppliers, etc., that its rivals use. The problem is that deeply embedded.

But even as those issues get addressed, they are really just a prelude to yet more innovations and opportunities for disruption. Like many modern computing devices—and, to be clear, that’s what today’s cars have become—the technological and business model for autos is slowly but surely moving towards a software and services-focused approach. In other words, we’re moving towards the software-defined “digital car.”

In order for that to happen, several key challenges need to be addressed. Most importantly, major enhancements in automotive security—both through architectural changes and software-driven advances—have to occur. The potential for life-threatening problems if either standard or autonomous cars get hacked should make this point painfully obvious.

Connectivity options, speed, and reliability also have to be improved, and that’s where industry-wide efforts like 5G, and specific products from vendors like Qualcomm and Intel, can make a difference.

Finally, car companies and critical suppliers need to figure out the kinds of services that consumers will be willing to pay for, and then deliver the platforms and architectures that can enable them. Like many other types of hardware devices, profit margins on cars are not very large, and with the increasing amount of technology cars are going to require, those margins could even start to shrink. As a result, car companies need to think through different ways of generating income.

Thankfully, a number of both tech startups and established vendors, such as Harman, are working on creating cloud-based platform delivery systems for automotive services that are expected to start bringing these capabilities to life over the next several years.

As with any major transition, the move to a digital car model won’t be easy, fast, or bump-free, but it’s bound to be an interesting ride.

Podcast: Google I/O, IoT World

This week’s Tech.pinions podcast features Ben Bajarin, Carolina Milanesi and Bob O’Donnell discussing Google’s IO event, including details on Google Assistant, Google Home, Android, and AR and VR platforms, along with some brief comments on the recent IoT World conference.

Thanks again to our sponsor, Small.Chat.

You already chat with your team on Slack. Now with Smallchat, you can live chat with customers on your website all from a single Slack channel.

Smallchat harnesses the power of Slack’s customizable notifications and mobile apps to keep you connected with your customers from any device.

Plus, Smallchat can be easily customized to match your unique brand.

Connect more directly, respond faster, and provide a better customer experience. Get Smallchat Pro for 2 months free by visiting small.chat/techpinions.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Digital Assistants Drive New Meta-Platform Battle

In case you hadn’t noticed, the OS platform battle is over.

Oh, and nobody really won; or rather, all the big players did, depending on your perspective. Google has the largest number of people using Android, Apple generates the most income via iOS, and Windows still commands the workplace for Microsoft.

But the stakes are getting much higher for the next looming battle in the tech world. This one will be based around digital assistants, such as Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana and Google’s Assistant, among others.

While much of the initial focus is, rightfully, around the voice-based computing capabilities of these assistants, I believe we’re going to see them expand into text-driven chatbots, AI-driven autonomous software helpers, and, most importantly, de facto digital gateways that end up tying together a wide range of smart and connected devices.

From smart homes to smart cars, as well as smartphones, PCs and wearables that span both our personal and professional lives, these digital assistants will (ideally) provide the consistent glue that brings together computing, services and much more across many disparate OS platforms. In short, they should be able to make our lives better organized, and our devices and services much easier to use. That’s why these assistants are so strategically important, and why so many other companies—from Facebook to Samsung—are working on their own variations.

Another fascinating aspect of these digital assistants is that they have the potential to completely devalue the underlying platforms on which they run. To put it succinctly, if I can use, say, Alexa across an iPhone, a Windows PC, my smart home components and a future connected car, where does the unique value of iOS or Windows 10 go? Out the door….

This overarching importance, and this distancing from the underlying platforms, is why I refer to these assistants as the pre-eminent example of a “meta-platform”: something that offers the potential for expansion, via both APIs for new software development and the connectivity of a regular platform, but at a layer “above” a traditional OS.
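
To make the “meta-platform” idea a bit more concrete, here is a minimal, purely hypothetical sketch in Python. Every class and method name is invented for illustration and does not reflect any vendor’s actual SDK; the idea it shows is simply that the assistant layer owns the skills and the user context, while each underlying OS or device is reduced to a thin, swappable adapter.

from abc import ABC, abstractmethod

class DeviceAdapter(ABC):
    """Thin, platform-specific shim: the only part that knows about the underlying OS."""
    @abstractmethod
    def speak(self, text):
        ...

    @abstractmethod
    def capture_utterance(self):
        ...

class PhoneAdapter(DeviceAdapter):
    def speak(self, text):
        print(f"[phone TTS] {text}")
    def capture_utterance(self):
        return "what's on my calendar today"

class SmartSpeakerAdapter(DeviceAdapter):
    def speak(self, text):
        print(f"[speaker TTS] {text}")
    def capture_utterance(self):
        return "play some jazz"

class Assistant:
    """The 'meta-platform': skills register here once and run on any adapter."""
    def __init__(self):
        self.skills = {}  # keyword -> handler function

    def register_skill(self, keyword, handler):
        self.skills[keyword] = handler

    def handle(self, device):
        utterance = device.capture_utterance()
        for keyword, handler in self.skills.items():
            if keyword in utterance:
                device.speak(handler(utterance))
                return
        device.speak("Sorry, I can't help with that yet.")

assistant = Assistant()
assistant.register_skill("calendar", lambda u: "You have two meetings today.")
assistant.register_skill("play", lambda u: "Playing jazz for you.")

assistant.handle(PhoneAdapter())         # the phone and the speaker run the same skills
assistant.handle(SmartSpeakerAdapter())

In a structure like this, the value (skills, preferences, context) accumulates in the assistant layer, while the OS underneath becomes just another adapter—which is exactly the devaluation dynamic described above.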

With that thought in mind, it’s interesting to look at recent data TECHnalysis Research collected as part of a nearly 1,000-person survey of US consumers on their usage of digital assistants on smartphones, PCs, and the hottest new entrant, smart speakers such as Amazon’s Echo and Google Home.

As mentioned earlier, in their present incarnations, these digital assistants are primarily focused on voice-based computing and the kinds of applications that are best-suited for simple voice-driven queries. So, to get a better sense of how these assistants are used, respondents were asked in separate questions how often (or even if) they used digital assistants on smart speakers (such as Amazon Echo), smartphones and PCs. The results were combined into the chart below.

What’s fascinating is that, even though the smart speaker category is relatively new (the Echo is less than 2 years old) and Siri, the first smartphone-based digital assistant, arrived back in 2011, it’s clear that people with access to a smart speaker like the Echo (around 14% of US households, according to the survey results) are using digital assistants significantly more than people use them on smartphones.

While it’s tempting to suggest that this may be due to the perceived accuracy of the different assistants, in a separate question about accuracy, the rankings for Alexa, Siri, and Google’s Assistant were nearly identical, meaning there was no one clear favorite. Instead, these results suggest that a dedicated-function device placed in a central location within the home simply invites more usage. Translation: if you want to be relevant in these early stages of the digital assistant battle, you need to have a dedicated smart speaker offering.

Of course, the other challenge is that most people are now increasingly exposed to and use multiple digital assistants from multiple players. In fact, 56% of the respondents acknowledged that they at least occasionally (and some frequently) used multiple assistants, with differing degrees of comfort in making the switch between them. The largest single group, 26%, said they were loyal to and consistently used one assistant and ignored the others, but as competition in this area heats up, those loyalties are likely to be tested.

Digital assistant technology has a long way to go, and current usage patterns only provide some degree of insight into what these assistants’ long-term capabilities will be. Nevertheless, it’s clear that the meta-platform battle for digital assistants is going to have a significantly broader and longer-lasting impact than the OS platform battles of yore. That, by itself, will make them essential to watch and understand.

(If you’re interested in learning more about the complete study, please feel free to contact me at bob@technalysisresearch.com.)

Podcast: Microsoft Build 2017

This week’s Tech.pinions podcast features Carolina Milanesi, Ben Bajarin and Bob O’Donnell discussing Microsoft’s Build 2017 event and the implications it has for platforms and key technologies like AR and VR.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Getting Smart About Smart Speakers

Timing, as they say, is everything. Particularly if you’ve got something to add to an already hot topic that’s reaching a fever pitch this week.

I’m talking, of course, about smart speakers, such as Amazon’s expanding Echo line of products, Google’s Home, the unusual C by GE Sol smart lamp, and the new Microsoft-driven Invoke coming from Harman Kardon, which is now a division of Samsung.

Having just fielded, a little more than a week ago, a brand new TECHnalysis Research study to 1,000 US consumers who own at least some smart home devices, I have some very fresh data to inject into the conversation.

To set the stage, it’s interesting to note that about one-quarter of US households now have at least one piece of smart home gear in their possession, according to the study. From smart light bulbs and connected door locks, to home security cameras and beyond, it appears that the smart home phenomenon is finally moving into the mainstream.

Much of that reach, it turns out, is due to recent purchases of smart speakers. In fact, the category is by far the most popular smart home device now in use, with 56% of those smart households reporting that they own and use a smart speaker, and 60% of those purchases occurring in the last six months. (Smart thermostats were the second most common device at 44%, with smart light bulbs third at 30%.)

And use them they do. One-half of the smart speaker-owning respondent base said they use it at least daily (just under one quarter said they use it multiple times per day), and another 39% said they engage with it several times a week. As for what they ask their smart speaker, there are some fascinating differences between user ages, but the top five requests across the entire respondent base are (in order) to play music, for the weather, for news, for basic facts or trivia, and for calendar or scheduling information.

Interestingly, despite the increased usage, the reactions to these devices are decidedly mixed. Smart speakers managed to garner the top spot in both the list of favorite smart home products that respondents own, as well as the list of least favorite smart home products they own. Go figure.

Actually, when you dig into the reasons why they felt that way, it’s clear that most consumers see smart speakers as an exciting and intriguing new product category, but one that still needs improvement. The top reasons it was their favorite include most useful, most practical, and easiest to use. The top reasons it was their least favorite are least practical, least useful, and hardest to use. Obviously, there’s potential there, but also a lot of work that needs to be done to improve many consumers’ experiences with these devices.

As for market share, the results from the TECHnalysis Research study were nearly identical to the recently reported eMarketer numbers, with Amazon capturing just under 71% of current users, Google Home at roughly 26%, and others at 3%. How those numbers shake out through the end of the year, however, remains to be seen.

One of the key expected developments in smart speakers is the addition of a screen, such as in the new Amazon Echo Show, potentially for video calls, but also for other applications. When asked about their potential interest in these other applications, respondents came back with some surprising results. Instead of a full-blown web browser, the top applications they wanted to see were clocks or timers, personal calendar information, weather or news headlines, and media information, such as album art. All of these preferences suggest interest in a visual reinforcement of the voice-based information they already receive from a smart speaker, rather than in another display-focused device.

The smart speaker category is still in its earliest stages. There are bound to be many more companies, many more devices, many more enhancements, and lots of interesting developments yet to come. It’s clear from this latest research, though, that the category has sparked tremendous consumer interest and will be an incredibly important one to watch for years to come.

(If you’re interested in learning more about the complete study, please feel free to contact me at bob@technalysisresearch.com.)

Podcast: Apple Earnings, Microsoft Windows 10S, Surface Laptop

In this week’s Tech.pinions podcast Tim Bajarin, Jan Dawson and Bob O’Donnell discuss the latest Apple quarterly earnings and analyze the announcements from Microsoft’s education-focused event that included the unveiling of a new version of Windows as well as some new laptop PCs.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

The Hidden Value of Analog

Sometimes, it seems, digital isn’t better. Sure, there are enormous benefits to working with media, files, and devices in the digital domain, but we are, after all, still living in an analog world. As human beings we still touch things with our hands, hear things with our ears, and see things with our eyes—all of which are decidedly (and beautifully) analog reception devices.

In fact, though an increasingly large percentage of our everyday experiences may start out or somehow exist in digital form, none of our interactions with these experiences actually occur in the digital domain. Instead—though it’s very easy to forget—every one of these experiences happens in an extraordinarily high-resolution analog domain (otherwise known as the real world).

While it may seem odd, and maybe even a bit silly, to point this out, as our world becomes increasingly digitized, it’s worth taking a step back to actually notice. It’s also worthwhile to recognize that not all technology-driven pendulums of change point towards digital. In fact, as technology advances, it should logically start to become more analog-like.

Indeed, if you look at the history of many innovations in everything from computing to media and beyond, the evolution has started out with analog efforts to create or recreate certain types of content or other information. Many of these early analog efforts had severe limitations, though, so for everything from computer files to audio and beyond, technologies were developed to create, edit, and manipulate this kind of data in digital form.

For the last few decades, we’ve seen the evolution of digital files and the enormous benefits in organization, analysis, and creation that going digital has provided. Now, however, we’re starting to see the limits of what even digital technologies can offer for areas such as entertainment content and certain types of information. It’s hard to see how adding extra digital bits to audio, photos, and video can provide much in the way of real-world benefits, for example.

Along this path of technological development, many people have also noticed, or more precisely missed, the kind of physical interaction that human beings innately crave as part of their basic existence. The end result has been the rediscovery and/or rebirth of older analog technologies that provide some kind of tactile physical experience that a purely digital world had started to remove.

The best example is probably the case of vinyl records and turntables, which have seen a resurgence of interest even among Gen Z teens and millennials over the last several years. As someone old enough to have an original collection of vinyl, I should be able to remember and appreciate the potential of an analog audio experience. With decades of digital onslaught, though, it’s easy to forget how good the audio quality on a decent turntable and sound system can be. It took a recent experience of someone spinning vinyl at an event I attended to remind me how good it could still sound.

There’s also been a turnaround in, of all things, printed books. Following years of prognostications about the death of print, just this week there was also news that ebook readers and ebook sales were on the decline, while printed books were actually starting to see increases again. Admittedly, an enormous amount of ground was lost here, but it’s fascinating to see that more and more people want to enjoy the analog physical experience that reading a paper book provides them.

Even beyond these examples, there’s still an enormous amount of value that people place on the touch, feel, and experience of using digital devices. The way a device feels in your hand, and how the keyboard on a laptop feels as you type, still matters. Looking forward, advancements in both virtual reality (VR) and augmented reality (AR) are going to become highly dependent on some type of tactile, touch-based feedback in order to improve the “reality” of the experience they offer. Recently, we’ve also seen a surge of popularity for older, “analog-style” vintage game consoles.

Musicians have always obsessed over the feel and touch of particular instruments, and as our digital devices become the common instruments of our age, there’s something to be said for the quality of the tactile experience they can provide. Plus, in the case of musical instruments, one of the biggest trends over the last several years has been the tremendous resurgence in popularity of knob-based, physically controlled analog synthesizers.

Of course, above and beyond devices, there’s the whole debate of returning more of our personal interactions back to analog form. After overdosing on purely digital interactions, there’s growing interest and enthusiasm for cutting back on our digital time and focusing more on person-to-person analog interactions among people of all ages.

Obviously, we’re not going to be re-entering an era of analog technology, as fun and nostalgic as that might be. But as digital technology evolves, it makes sense for technology-based products and experiences to try to recapture some of the uniquely tactile characteristics, feel, and value that only come from analog.

Podcast: Tech Earnings From Alphabet, Microsoft, Intel, Amazon and Samsung

In this week’s Tech.pinions podcast Carolina Milanesi, Jan Dawson and Bob O’Donnell discuss this week’s earnings announcements from a number of the tech industry’s biggest players and analyze what they mean for the future of several key tech products and trends.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast