The Future of Computing is Invisible

I’ve started to see where the future of computing is headed and, paradoxically, it’s invisible.

I say this because I’ve attended and read about several interesting events this past week that have forced me to put some serious thought into where computing is headed. From nVidia’s GPU Conference in San Jose, to the HSA (Heterogeneous System Architecture) Foundation’s release of its 1.0 spec, and even Microsoft’s unveiling of its Windows Hello biometric authentication support for Windows 10, this has been a fascinating week for thinking about where more advanced computing applications are going.

The theme that’s tying all these elements together is what I’m calling “invisible computing”, because the end result isn’t something that you can see or even directly engage with. If you think about most types of computing efforts—running an application, playing a game, watching a video, looking up data—there’s usually some kind of visual component that you’re presented with, either as the end result, or as a key element of the process.

The kinds of invisible computing that I’ve been hearing about this week, however, are interesting applications that work behind the scenes and only indirectly provide any kind of visual feedback. For example, nVidia’s CEO Jen-Hsun Huang spent a great deal of his keynote speech at the company’s GPU Conference on “deep learning.” The idea is that for applications like computer vision or autonomous driving, a great deal of behind-the-scenes computing, based on sensors and other real-world sources, is going on. These systems “learn” by crunching through that data input, and then they are better equipped to provide more accurate readings as new data comes along.
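
To make that “learning” loop concrete, here’s a minimal sketch in Python of how a system can get more accurate by crunching through data. The one-weight model, the simulated sensor readings, and all the numbers are purely illustrative assumptions, not anything nVidia actually ships:

```python
# Toy illustration of the "learn by crunching data" loop described above.
# Everything here is hypothetical: a one-weight model fit by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)             # simulated sensor readings
y = 3.0 * x + rng.normal(0, 0.1, 200)   # the real-world pattern to be learned

w = 0.0     # the model starts out knowing nothing
lr = 0.1    # learning rate

for step in range(100):
    pred = w * x
    grad = 2 * np.mean((pred - y) * x)  # gradient of the mean squared error
    w -= lr * grad                      # each pass over the data improves the fit

print(f"learned weight: {w:.2f}")       # approaches 3.0 as the data gets crunched
```

The same pattern, scaled up by many orders of magnitude, is what GPUs are being used to accelerate in real deep-learning systems.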

In the case of computer vision applications, the magic is in the ability to identify elements in an image without any kind of human interaction. In fact, within the past month or so, computer vision systems have finally surpassed a 95% accuracy rate, as compared to specially trained humans. The way the system works is that a multi-stage neural network (that is, a series of mathematical algorithms designed to analyze images) runs through a very compute-intensive set of equations that allows a computer to determine that, yes, the image on the screen is a rabbit (or a dog, or a Porsche sports car, or whatever a human brain clearly recognizes it to be). Despite the fact that this is clearly a visual application, the “response” is simply text that’s automatically inserted under an existing image.
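
As a rough illustration of that multi-stage idea, the toy Python below stacks a couple of matrix-multiply-plus-nonlinearity stages and ends with a text label. The random weights and the class names are assumptions for demonstration only; a real classifier would use trained weights:

```python
# Hypothetical sketch of a multi-stage network: each stage is a matrix multiply
# plus a nonlinearity, and the final step turns scores into a text label.
# The weights are random, so the "prediction" is meaningless -- the point is
# the shape of the computation, not a working classifier.
import numpy as np

LABELS = ["rabbit", "dog", "Porsche"]   # illustrative class names from the text

rng = np.random.default_rng(1)
image = rng.random(64)                  # stand-in for flattened pixel data

def layer(v, out_dim):
    W = rng.normal(0, 0.1, (out_dim, v.size))
    return np.maximum(W @ v, 0)         # linear transform + ReLU nonlinearity

h = layer(layer(image, 32), 16)         # two hidden stages
scores = rng.normal(0, 0.1, (3, 16)) @ h
probs = np.exp(scores) / np.exp(scores).sum()   # softmax over class scores

print(LABELS[int(np.argmax(probs))])    # the "response": a text label for the image
```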

Computer vision can be applied to an even more interesting and more invisible application: autonomous driving. Huang and Tesla CEO Elon Musk discussed the “inevitable” future of self-driving cars and the fact that the computing will be done within the car (the “client device” in this case), not in the cloud. The concept behind Advanced Driver Assistance Systems (ADAS) is that a car’s control systems can respond automatically to the signals an embedded computer vision system sends them, keeping cars out of accidents by taking over parts of the driving automatically.
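
Here’s a hedged sketch of what that invisible loop might look like, with detect_obstacle standing in for a real perception pipeline and every number invented for illustration:

```python
# Hypothetical ADAS-style loop: the vision system emits signals, and the car's
# control code reacts without any display for the driver to look at.
# "detect_obstacle" is a placeholder for an on-board computer vision model.

def detect_obstacle(frame):
    """Placeholder for a real perception pipeline."""
    return frame.get("obstacle_distance_m", float("inf"))

def control_step(frame, speed_mps):
    distance = detect_obstacle(frame)
    stopping_distance = speed_mps ** 2 / (2 * 6.0)  # assume ~6 m/s^2 braking
    if distance < stopping_distance * 1.5:          # 50% safety margin
        return "brake"
    return "maintain"

# One simulated tick: a car doing 20 m/s sees something 30 m ahead.
print(control_step({"obstacle_distance_m": 30.0}, 20.0))  # -> "brake"
```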

Arguably, this kind of behind-the-scenes effort is what’s driving big data and analytics trends in servers and big corporate data centers. What’s different now is that this kind of invisible computing is moving more towards client devices. We’re seeing efforts to bring the kind of analytics and intensive data-crunching currently running on servers onto machines (or cars) that we all actually use.

The HSA Foundation is an industry organization devoted to developing standards for the general programming of CPUs and GPUs for a variety of applications. Originally founded by AMD, the organization has been working on specifications for several years, and its timing happened to nicely coincide with nVidia’s GPU news. As with nVidia’s efforts, many of the more interesting applications the HSA Foundation hopes to enable involve advanced data analysis, which, in certain cases, will be done invisibly to the user.

In the case of Microsoft’s biometric login support for Windows 10 devices, it’s about taking away the need to type in an easily forgotten (or stolen) password. Instead, you securely log in not only to your machine, but also to the applications and services you use on it, merely by using your physical presence and perhaps a gesture like swiping your finger. This is a potentially huge improvement in usability (as I wrote about in one of my 2015 predictions columns) and is a great example of how “invisible computing” can make our experience of using devices much better.
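
For a sense of what replaces the typed password under the hood, here’s a generic, assumption-heavy sketch of biometric template matching in Python; it is emphatically not the actual Windows Hello implementation:

```python
# Generic sketch of the password-less idea: compare a fresh biometric sample
# against an enrolled template and accept if it is close enough. This is an
# assumption-laden toy, not the Windows Hello API.
import numpy as np

def enroll(samples):
    """Average several scans into a stored template."""
    return np.mean(samples, axis=0)

def authenticate(template, sample, threshold=0.95):
    """Cosine similarity against the template replaces a typed password."""
    score = sample @ template / (np.linalg.norm(sample) * np.linalg.norm(template))
    return score >= threshold

rng = np.random.default_rng(2)
enrolled = enroll(rng.random((5, 128)))       # five enrollment scans
fresh = enrolled + rng.normal(0, 0.01, 128)   # a new scan of the same person
print(authenticate(enrolled, fresh))          # -> True
```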

Traditional visual computing will obviously continue to be the primary modus operandi of all the computing devices we use every day, particularly for the GPUs at the heart of nVidia’s, AMD’s, and the HSA Foundation’s advancements. After decades of enormous progress, however, it increasingly appears that the more interesting compute applications will start to turn invisible and simply improve our experience (and possibly our safety) in using our growing range of connected devices.

Published by Bob O'Donnell

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.
