Given all the focus recently on mobile computing and mobile devices, it’s easy to forget that many computing problems and many important tasks are still best suited for a larger computing canvas. Not just a large display—although that is certainly a key element—but a larger, more powerful compute engine as well.
Computer workstations have traditionally been the devices that offer this more powerful computing experience. Workstations get the highest performing CPUs and graphics engines, the fastest storage and memory subsystems, and they’re linked with the largest, highest-resolution displays. As a result, workstations continue to play an important role in the overall computing ecosystem, and though the absolute numbers remain small, they are still a growing category.
But look back a decade or so, and the kinds of things that were once reserved for workstations are now being done on personal computers. High-resolution image editing, 3D modeling, in-depth data analysis and more are at the heart of many relatively mainstream PC applications, including photo editing of HDR (High Dynamic Range) and other high-quality images coming from high-resolution cameras (which, ironically, are sometimes found on mobile devices). In addition, rendering the high-quality, real-time graphics necessary for today’s PC games is another task once reserved for workstations alone.
Two other key areas that are just coming into their own are data analytics and truly lifelike communication and collaboration tools. Given the enormous number of “big data” sources becoming available to people in all walks of life—from website traffic trends to sensor-driven data sources—the need to visualize, analyze and otherwise work with this data is going to drive new demands for a bigger computing experience.
Similarly, there’s growing interest in better tools to communicate and collaborate with co-workers or partners around the company or even the world. There have been some noticeable improvements in traditional videoconferencing tools over the last few years, but we still don’t have great real-time collaboration tools that combine high-resolution audio and video with real-time file editing, “whiteboarding” and other types of teamwork-focused capabilities.[pullquote]Teracomputing is about turning your entire desk or workspace (your personal “terra”) into an interactive display that you can manipulate with both hands simultaneously.[/pullquote]
Both of these types of applications, as well as things like design, architecture, or any other kind of brainstorming or creative effort, are headed towards a more intensive type of computing experience that I’m calling teracomputing. At the core of the teracomputing concept is a very large, high-resolution touch-screen display (think “Minority Report”) and more intuitive ways to interact with the content (or people) shown on that display. In the case of data visualization or design, it’s about the ability to work at a pace and a scale that even current large-screen monitors don’t allow. This is about turning your entire desk or workspace (your personal “terra”) into an interactive display that you can manipulate with both hands simultaneously. The computer industry has talked about the metaphor of a digital canvas in the past, but in a teracomputing world, you would literally have a digital canvas on which to create and work.
In the case of collaboration, the model would be different, as the device at the heart of the experience would likely sit in a meeting room and be viewed simultaneously by multiple people. But conceptually, the idea of breaking through traditional barriers by working at a visual (and resolution) scale well beyond what we have today makes this another type of teracomputing. The goal here, of course, would be to re-envision how people hold collaborative meetings: participants could see and hear one another as clearly as if they were in the same room, and work communally on digital whiteboards, shared files or whatever the focus of the meeting happened to be.
Of course, to achieve any type of teracomputing model, it’s going to take improvements in both hardware and software beyond what’s commonly available today. Software, in particular, needs additional efforts to leverage not just multi-finger but multi-hand touch and gesture support. We’ll also likely need some refinements to existing user interface design. Once some of these efforts are made, however, it’s not difficult to imagine people starting to get reinvigorated about computing on a grander scale.
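For readers curious what multi-hand support means at the software level, here is a minimal sketch, not any particular product’s implementation, of the bookkeeping an application would need: tracking many simultaneous touch contacts by ID and guessing whether they come from two hands. The `PointerTracker` class, its method names and the 300-pixel threshold are all hypothetical choices for illustration; in a real application the coordinates would come from the platform’s touch or pointer events.

```typescript
type Point = { x: number; y: number };

// Hypothetical tracker for multiple simultaneous touch contacts.
class PointerTracker {
  private active = new Map<number, Point>();

  // A finger (or pen) touches down; each contact has a unique id.
  down(id: number, x: number, y: number): void {
    this.active.set(id, { x, y });
  }

  // An existing contact moves.
  move(id: number, x: number, y: number): void {
    if (this.active.has(id)) this.active.set(id, { x, y });
  }

  // A contact lifts off the surface.
  up(id: number): void {
    this.active.delete(id);
  }

  // Number of fingers currently on the surface.
  count(): number {
    return this.active.size;
  }

  // Crude two-hand heuristic: at least two contacts are far apart.
  // The 300px threshold is an arbitrary illustrative value.
  looksLikeTwoHands(): boolean {
    const pts = [...this.active.values()];
    let maxDist = 0;
    for (let i = 0; i < pts.length; i++) {
      for (let j = i + 1; j < pts.length; j++) {
        const dx = pts[i].x - pts[j].x;
        const dy = pts[i].y - pts[j].y;
        maxDist = Math.max(maxDist, Math.hypot(dx, dy));
      }
    }
    return maxDist > 300;
  }
}
```

The interesting design question this toy example surfaces is exactly the one the column raises: distinguishing “five fingers of one hand” from “two hands working independently” requires interpretation beyond raw touch points, which is where today’s multi-finger-oriented software falls short.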