Bringing Vision to the Edge
In case you haven’t been paying attention, one of the hottest topics in the tech industry is the concept of “the edge.” Virtually every major tech company has recently been talking about its strategic approach to this new method of computing, and the announcements are bound to keep coming.
Standalone PCs are an example of edge devices, but when most companies talk about the edge, they really mean things like drones, smart home gadgets, sensor-based industrial devices, autonomous cars, and so on. While these devices can—and often do—connect to the Internet, they can also compute independently, courtesy of built-in x86 processors or Arm-based microcontrollers, and embedded software.
At Microsoft’s Build developer conference in Seattle, the company had a particularly strong focus on what they term the “intelligent edge” and touted it as one of the next major revolutions in computing. The “intelligent” moniker stems from the use of AI, machine learning and other advanced computing concepts in these edge devices, bringing a whole new level of capability—and attention—to them.
As appealing as the concept of intelligent edge computing may be, however, there have been some real challenges in realizing the potential of these new devices. The biggest issue is creating software to run on the often novel architectures used inside them. To that end, Microsoft has been making a great deal of effort on both the platform side and the application development side.
Because there is a wide range of different devices with varying levels of sophistication and diverse amounts of hardware, Microsoft now offers several platform choices, including Windows 10 IoT Core, Windows 10 IoT Enterprise, and the Azure IoT Edge Runtime, which the company is now open-sourcing. The Azure Runtime offering, in particular, is ideally suited to the enormous array of smart products appearing everywhere from our homes to hospitals, factories, farms and more.
Technically, the Runtime is a collection of programs that runs on top of the various Linux, Windows or other embedded operating systems found in these smart devices, hence the name. It creates a common platform on which applications can be built. Without the consistent capabilities enabled by the Azure IoT Edge Runtime, developers would have to create new or different versions of their applications for each potential hardware/software combination—an impossible task.
In addition to providing this common base, Microsoft’s Azure IoT Edge Runtime leverages the same cloud-based computing platform as “regular” Azure. This is very important for developers because it means they can use the same development tools and methodologies that they use for cloud-based programming models, including containers, to create software for these intelligent edge devices. That, in turn, makes it much easier for companies to build edge applications without needing people with entirely new types of programming skills.
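To make the idea concrete, here is a minimal, purely illustrative sketch of what a “common platform” buys a developer. None of these class or function names come from the actual Azure IoT Edge APIs; they are hypothetical stand-ins showing how a single module definition can run unchanged on any device whose runtime exposes the same interface.

```python
# Hypothetical sketch: a common runtime layer means one module
# definition works across different hardware/OS combinations.

class EdgeModule:
    """A unit of edge logic, deployable wherever the runtime runs."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler  # takes a message, returns a message (or None to drop it)

class EdgeRuntime:
    """Stand-in for the runtime layer: routes messages through deployed modules."""
    def __init__(self):
        self.modules = []

    def deploy(self, module):
        self.modules.append(module)

    def process(self, message):
        for module in self.modules:
            message = module.handler(message)
            if message is None:  # a module filtered the message out
                return None
        return message

# The same module code runs on any host where this runtime is installed.
runtime = EdgeRuntime()
runtime.deploy(EdgeModule("filter", lambda m: m if m["temp"] > 30 else None))
runtime.deploy(EdgeModule("tag", lambda m: {**m, "alert": True}))

print(runtime.process({"temp": 35}))  # {'temp': 35, 'alert': True}
print(runtime.process({"temp": 20}))  # None (dropped by the filter module)
```

The point of the sketch is the separation of concerns: the application author writes only the handlers, while the runtime handles deployment and message routing on each device.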
Microsoft used their Azure platform approach to build and release a set of AI-based computer vision processing tools called Custom Vision that can leverage cameras and Qualcomm-based image processing silicon on edge devices. Part of what Microsoft calls Azure Cognitive Services, Custom Vision essentially brings “eyes” to the edge, letting companies build applications that react to visual information that these smart edge devices see—without needing a connection to the cloud.
A great example of this showed off Microsoft’s new partnership with DJI, the world’s largest drone maker. Using a DJI drone running Custom Vision on the Azure IoT Edge Runtime, the two companies showed an app capable of seeing and visually annotating, in real time, anomalies or other faults on a pipe being inspected by a drone. Microsoft is also working with DJI on a Software Development Kit (SDK) for Windows 10 devices that allows for the creation of flight control and real-time vision or sensor-based applications for DJI drones.
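The pattern behind the pipe-inspection demo can be sketched in a few lines. This is a hypothetical illustration, not DJI or Microsoft code: `classify_frame` stands in for an exported vision model running locally, so the device can react without any cloud round trip.

```python
# Hypothetical sketch of the on-device vision pattern: frames are
# classified locally, and anomalies trigger an immediate reaction.

def classify_frame(frame):
    # Stand-in for a locally running vision model; a real deployment
    # would run exported Custom Vision inference on the device here.
    return "defect" if frame.get("crack_width_mm", 0) > 1.0 else "ok"

def inspect(frames, on_anomaly):
    """Process frames on-device, invoking a callback for each anomaly."""
    labels = []
    for frame in frames:
        label = classify_frame(frame)
        labels.append(label)
        if label != "ok":
            on_anomaly(frame)  # react in real time, no network required
    return labels

alerts = []
labels = inspect(
    [{"crack_width_mm": 0.2}, {"crack_width_mm": 2.5}],
    on_anomaly=alerts.append,
)
print(labels)  # ['ok', 'defect']
```

The key design point is that inference and reaction both happen on the drone itself, which is what makes real-time annotation possible even with a spotty connection.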
With Qualcomm, Microsoft announced a vision AI developer kit that uses Azure Machine Learning services to create vision-aware applications for edge devices, accelerated by Qualcomm silicon along with the company’s Qualcomm Vision Intelligence Platform and Qualcomm AI Engine software tools. In practical terms, this means developers could create applications, such as smart visual doorbells or home security cameras equipped with certain Qualcomm chips, that recognize specific objects or individuals and react appropriately: providing a security notification only if a person isn’t “known” to the household, for example.
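The doorbell behavior described above reduces to a small piece of decision logic. The sketch below is hypothetical: `recognize` stands in for the accelerated on-chip face recognition, and the household list is invented for illustration.

```python
# Hypothetical sketch of the smart-doorbell logic: notify only
# when the person at the door isn't known to the household.

KNOWN_HOUSEHOLD = {"alice", "bob"}  # invented example identities

def recognize(image):
    # Stand-in for hardware-accelerated face recognition on the device.
    return image.get("face_id")  # None when no face is matched

def handle_visitor(image, notify):
    person = recognize(image)
    if person in KNOWN_HOUSEHOLD:
        return f"welcome {person}"   # no alert for household members
    notify("unknown visitor at the door")
    return "alert sent"

notifications = []
print(handle_visitor({"face_id": "alice"}, notifications.append))  # welcome alice
print(handle_visitor({"face_id": None}, notifications.append))     # alert sent
```

Because the recognition step runs on local silicon, the decision to notify (or stay quiet) happens on the device, which is exactly the privacy and latency benefit the edge pitch rests on.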
What’s particularly intriguing about this example is that it’s one of the first of what will likely be many AI edge applications that take advantage of the unique characteristics of certain types of silicon. Look for many more accelerated AI edge applications to come.
The potential opportunities with intelligent edge computing are enormous, and that’s why there is now tremendous excitement within the tech industry about how these technologies can be applied. There are still several hurdles ahead and a fair amount of work involved, but real progress is being made. As Microsoft demonstrated, we’re starting to see critical strategic steps toward simplifying what can be a very complex topic. As a result, it shouldn’t be long before a much broader range of developers can start leveraging capabilities such as vision on the edge. When they do, the kinds of applications and services we’ll enjoy are going to be amazing.