NVIDIA Uses AI to Bolster Professional Graphics

NVIDIA continues to bolster its position in the market with an emphasis on machine learning and artificial intelligence, in addition to its leadership in graphics for the mobile, consumer, and professional segments. At SIGGRAPH this week in Los Angeles, NVIDIA announced several new projects that apply AI to graphics-specific tasks and workloads, demonstrating the value of AI across a wide spectrum of workflows as well as the company's leadership position in the field.

The most exciting AI announcement came in the form of an update to the OptiX SDK that adds an AI-accelerated denoising capability to its ray tracing engine. Ray tracing can create highly realistic imagery but comes at a high computational cost, forcing renders of complex scenes to take minutes or even hours to complete in their entirety. In a partially computed state, these images look like noisy photographs, with speckled artifacts similar to those in photos taken in extremely low light.
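
To make the idea concrete, here is a minimal sketch of where such a denoising pass slots into a progressive renderer. The functions render_sample and ai_denoise are illustrative stand-ins (a random image and a box blur), not the actual OptiX API:

```python
import numpy as np

def render_sample(h=64, w=64):
    """Stand-in for one noisy path-tracing pass (pure speckle here)."""
    return np.random.rand(h, w, 3)

def ai_denoise(img):
    """Stand-in for the trained denoiser; a 3x3 box blur as a placeholder."""
    h, w, _ = img.shape
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    return sum(padded[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0

def progressive_render(preview_after=8, max_samples=64):
    accum = np.zeros((64, 64, 3))
    for spp in range(1, max_samples + 1):
        accum += render_sample()
        if spp == preview_after:
            # Instead of waiting for full convergence, hand the partial,
            # speckled accumulation to the denoiser for a clean preview.
            yield ai_denoise(accum / spp)
    yield accum / max_samples  # fully converged result

frames = list(progressive_render())
print(len(frames), frames[0].shape)  # 2 (64, 64, 3)
```

The point of the structure is that the artist sees a usable image after a handful of samples, while the renderer keeps converging in the background.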

NVIDIA and university researchers use deep learning and GPU computing to predict the final output image from those partially finished results in a fraction of the time. The AI model is trained on many "known good" images, which takes time up front but then lets creators and artists move around a scene, changing view angles and framing the shot, in roughly one tenth of the time. The result is a near real-time interactive capability with a high-quality ray-traced image, accelerating the artist's work and vision.
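
NVIDIA has not published the full model details here, but the training idea can be sketched as a simple image-to-image regression. The architecture below is an assumption for illustration only, with random tensors standing in for a real dataset of paired noisy/converged renders:

```python
import torch
import torch.nn as nn

# Assumed architecture: a small convolutional network that learns to map
# partially converged (noisy) renders to their fully converged,
# "known good" counterparts.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    clean = torch.rand(8, 3, 64, 64)               # converged targets
    noisy = clean + 0.3 * torch.randn_like(clean)  # simulated speckle
    loss = nn.functional.mse_loss(model(noisy), clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once trained, a single forward pass through a network like this is far cheaper than tracing the remaining samples, which is where the interactive speedup comes from.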

Facial animation is one of the most difficult areas of graphics production. NVIDIA has found a way to use deep learning neural networks to improve the efficiency and quality of facial animation while saving creators hours of time. Instead of manually touching up live-action actors' footage in a labor-intensive process, researchers were able to train the network for facial animation using only the actors' footage, in as little as five minutes.
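
As a loose illustration of the idea only, assuming the footage has been reduced to per-frame face features (tracked landmarks here) and the output is a set of animation-rig controls (blendshape weights), such a mapping might look like the sketch below. All sizes and data are placeholders, not NVIDIA's actual representation:

```python
import torch
import torch.nn as nn

N_LANDMARKS = 68     # assumed 2D landmark count per frame
N_BLENDSHAPES = 50   # assumed number of rig controls

# Assumed model: per-frame landmark positions in, blendshape weights out.
model = nn.Sequential(
    nn.Linear(N_LANDMARKS * 2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_BLENDSHAPES), nn.Sigmoid(),  # weights in [0, 1]
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    landmarks = torch.rand(32, N_LANDMARKS * 2)  # stand-in tracked frames
    target = torch.rand(32, N_BLENDSHAPES)       # stand-in reference poses
    loss = nn.functional.mse_loss(model(landmarks), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```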

NVIDIA also implemented the ability to generate realistic facial animation from audio alone, building on that same data. This tool will allow game creators to give more characters and NPCs realistic animated faces in multiple languages. Remedy Entertainment, maker of the game Quantum Break, helped NVIDIA with the implementation and claims it can cut as much as 80% of the work required for large-scale projects.
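
One plausible shape for an audio-driven model, again purely an assumed sketch rather than Remedy's or NVIDIA's actual design: a 1D convolutional network reads a short window of audio features around each frame and predicts that frame's rig weights.

```python
import torch
import torch.nn as nn

N_FEATS, WINDOW, N_BLENDSHAPES = 40, 16, 50  # placeholder sizes

# Assumed model: a window of audio features in, one frame's weights out.
model = nn.Sequential(
    nn.Conv1d(N_FEATS, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, N_BLENDSHAPES), nn.Sigmoid(),
)

audio_window = torch.rand(1, N_FEATS, WINDOW)  # stand-in spectrogram slice
weights = model(audio_window)                  # per-frame rig weights
print(weights.shape)                           # torch.Size([1, 50])
```

Because the input is just audio, the same pipeline can in principle drive a face in any language the network has been trained on, which is where the localization savings come from.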

Anti-aliasing is a very common graphics technique for reducing the jagged edges on polygon models. NVIDIA researchers have also found a way to use a deep neural network to recognize these artifacts and replace them with smooth, color-correct pixels.
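
The published method uses a learned network end to end; the hand-rolled filter below only illustrates the two-step structure of the task, detect the jagged pixels, then replace them with locally smoothed values:

```python
import numpy as np

def sketch_aa(img, threshold=0.25):
    """Conceptual stand-in for a learned anti-aliasing pass."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    edges = np.hypot(gx, gy) > threshold     # step 1: locate hard edges
    out = img.copy()
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    for y, x in zip(*np.nonzero(edges)):     # step 2: smooth only there
        out[y, x] = padded[y:y + 3, x:x + 3].mean(axis=(0, 1))
    return out

img = np.random.rand(32, 32, 3)
print(sketch_aa(img).shape)  # (32, 32, 3)
```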

Finally, NVIDIA applied AI to ray tracing itself, using a reinforcement learning technique to steer ray paths toward those considered "useful." Traces that are more likely to connect lights to the virtual camera (the viewport) are given priority, since they will contribute to the final image. Wasted traces that hit portions of the geometry blocked or unseen by the camera can be culled before the computation is done, lessening the workload and improving performance.
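
A toy sketch of that reinforcement learning idea, with simulate_trace standing in for the actual renderer: keep a running value estimate per outgoing direction bin and sample new rays in proportion to it, so directions that historically reached a light are prioritized.

```python
import numpy as np

rng = np.random.default_rng(0)
N_DIRS, ALPHA = 16, 0.1
value = np.ones(N_DIRS)  # optimistic start: explore every direction

def simulate_trace(d):
    """Stand-in: pretend bins 3..5 face the light, the rest are occluded."""
    return 1.0 if d in (3, 4, 5) else 0.0

for _ in range(2000):
    p = value / value.sum()                  # learned sampling distribution
    d = rng.choice(N_DIRS, p=p)
    reward = simulate_trace(d)               # did this path reach a light?
    value[d] += ALPHA * (reward - value[d])  # TD-style update

print(np.round(value / value.sum(), 3))  # mass concentrates on bins 3..5
```

Over many iterations, the sampling distribution shifts toward productive directions, so fewer rays are wasted on occluded geometry.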

These four examples of AI accelerating graphics workloads show that the same GPUs used to render games on your screen can also be harnessed to accelerate the work of game and film creators. Requiring fewer man-hours and resources in any part of the creation pipeline means developers can spend more time building richer environments and experiences for their audience. These examples are indicative of the impact AI and deep learning will have on any number of markets and workflows, touching much more than typical machine learning scenarios. NVIDIA paved the way for GPU computing with CUDA, and it continues to show why its investment in artificial intelligence will pay off.

Published by

Ryan Shrout

Ryan is the founder and lead analyst at Shrout Research, consulting for and advising leaders in the mobile, graphics, processor, and platform markets. With more than 17 years of experience evaluating and analyzing hardware and technology as the owner of PC Perspective, Ryan has a breadth of knowledge in nearly all fields of hardware, including CPUs, GPUs, SoC design, memory systems, storage, graphics, displays, and their integration into smartphones, laptops, PCs, and VR headsets. Ryan has worked with nearly every major technology giant and their product management teams, including Intel, Qualcomm, AMD, NVIDIA, MediaTek, Dell, Lenovo, Huawei, HTC, Samsung, ASUS, Oculus, Microsoft, and Adobe. With a focus on in-depth and real-world testing and nearly two decades of hands-on experience, he focuses Shrout Research on bringing valuable insight to competitive analysis, consumer product expectations, and real-world experience comparisons.
