Second Gen HoloLens Provides Insights into Edge Computing Models
After shocking the tech world a bit over four years ago with the debut of the first HoloLens, Microsoft announced its second generation device at a special press event in Barcelona, just before the start of MWC.
The HoloLens 2 is a Qualcomm Snapdragon 850-powered augmented reality (AR) headset running a version of Windows on Arm that offers a number of significant improvements over the first-generation product. Expected to be available later in the year for a price of $3,500, HoloLens 2 features a more than 2x increase in the field of view that you can see through the display, and what the company claims is a 3x increase in comfort. While the latter point is certainly hard to quantify, I can report through my own experience with it that the device is lighter, easier to set up and adjust, and offers a number of ergonomic improvements that should make it easier and more comfortable to wear for longer periods of time.
The display improvements in HoloLens 2 are the most notable versus the v1 headset. The company was careful to maintain the 47 pixels per degree resolution from the original, allowing you to easily read text inside the display—something that isn’t possible on many other AR and VR headsets. The overall size of the image you can see in the second-generation device, however, is significantly larger than it is in the first edition. According to Microsoft, the change is equivalent to moving from a 720p image per eye to a 2K image per eye. In real-world usage, it’s significantly more compelling and makes it possible to use the device for a wider range of applications.
Another critical hardware enhancement in HoloLens 2 is better hand and object tracking, which makes interactions with the holograms that the display generates much easier. Touching, moving, and manipulating holograms is more intuitive with the new headset, and the interplay between real-world objects and digitally generated elements provides a more compelling overall experience.
In addition to the hardware enhancements, Microsoft made a number of important software and standards-related announcements to go along with HoloLens 2. As expected, Microsoft is working with a wider range of partners on business-focused applications for the HoloLens 2—which is where the device is still targeted—in industries like manufacturing, medical, field services, and much more. Microsoft also announced several of its own software tools for the device. Dynamics 365 Guides offers companies the ability to easily create training materials that can run on HoloLens 2, while Dynamics 365 Remote Assist lets you view content originally designed for HoloLens on ARCore-equipped Android mobile devices and ARKit-equipped iOS devices. This work to bridge across standards and platforms is going to be very important over time, so it’s great to see Microsoft working towards breaking down barriers across different AR platforms.
On top of these applications, Microsoft also showed a set of Azure-based cloud services that work along with HoloLens. One of the most compelling is a remote rendering service. On its own, the hardware built into the headset can only generate 3D images up to a certain resolution, but the service lets you leverage cloud-based computing resources equipped with higher-power GPUs to render a much more detailed version of a model and then send it down to the display. For applications where fast, real-time interactions with the model aren’t required, this essentially gives you extremely high-resolution holograms on the HoloLens 2. While there will need to be some software work done to develop these kinds of distributed, heterogeneous computing applications, the end result is very compelling.
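The trade-off described above—render on the headset when real-time interaction matters, offload to cloud GPUs when visual fidelity matters—can be sketched as a simple decision rule. This is purely illustrative: the function name, the polygon budget, and the logic are hypothetical assumptions for the sake of the sketch, not the actual Azure Remote Rendering API.

```python
def choose_renderer(polygon_count: int,
                    needs_realtime_interaction: bool,
                    local_budget: int = 500_000) -> str:
    """Decide where a 3D model should be rendered (illustrative only).

    Models that demand low-latency, real-time manipulation stay on the
    headset's onboard GPU, as do models small enough to fit within its
    rendering budget. Large, mostly static models are offloaded to a
    higher-power cloud GPU, which renders a far more detailed version
    and streams the result back down to the display.
    """
    if needs_realtime_interaction or polygon_count <= local_budget:
        return "local"   # render with the headset's built-in hardware
    return "cloud"       # offload to cloud GPUs for higher fidelity


print(choose_renderer(100_000, False))    # small model stays local
print(choose_renderer(50_000_000, False)) # huge static model goes to the cloud
print(choose_renderer(50_000_000, True))  # interactive model must stay local
```

The interesting design point is the middle case: a highly detailed but static model is exactly where cloud rendering pays off, since latency matters less than fidelity.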
All told, the collective capabilities of these new Microsoft software tools and services point to a more comprehensive perspective that ties together AR, cloud computing, edge computing and distributed computing in a very interesting way. While the industry has talked about all these different phenomena and the potential connections between them, most of the discussions have been theoretical in nature. These Azure HoloLens services take those theories and make them real, providing a fascinating perspective both on how edge computing applications can leverage different combinations of computing resources and treat them as a single entity, as well as offering some insight into where heterogeneous computing applications are headed.
In fact, while some were questioning why Microsoft would choose to launch HoloLens 2 at a mobile industry show, the connectivity story that underlies the HoloLens 2 link to Azure and other cloud services is exactly why the setting was appropriate in my mind. Moving forward, we’re going to continue to see the use of distributed computing resources of various types leveraging mobile and other wireless networks in order to create compelling and meaningful applications. The example that Microsoft provided with HoloLens 2 and Azure offers a small glimpse into that future. (On a side note, it also arguably highlights that Microsoft should have included an option for an integrated cellular modem for HoloLens 2 to be able to offer connections in environments where WiFi isn’t readily available—but their new hardware partner program for HoloLens 2 may alleviate that concern.)
The market for AR, VR and mixed reality devices in business continues to show promise, but real growth has still been modest. The new capabilities in HoloLens 2 can’t single-handedly fix these issues, but it’s clear that Microsoft is taking an important step in the right direction.