Game graphics could soon look as real as this teapot, thanks to Nvidia

With its new Real-Time Neural Appearance Models, Nvidia is using AI to reduce the render work of high-end, movie-quality ray-traced visuals.

PCGamesN

Nvidia has just announced an AI-enhanced computer graphics technology that it claims can perform ultra-realistic rendering in real time, meaning it could bring movie-quality visuals to games. This new Nvidia Real-Time Neural Appearance Models tech can boost rendering performance by between 12x and 24x compared to the standard method.

The new technology will still require some of the more powerful Nvidia options on our best graphics card guide to provide close to real-time rendering, but it’s potentially a big step towards a new level of PC gaming graphics.

The core of the technique is that it takes the traditional means of rendering a highly complex model and replaces it with a neural network. Normally, a model is defined by a set of rendering steps called a shading graph, with features that might include multiple passes of different types of geometry detail, layered surface and sub-surface textures, lighting techniques, and more.
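To picture what that swap looks like in practice, here's a minimal PyTorch sketch (our own illustration, not Nvidia's published architecture) of a small network standing in for a whole shading graph, mapping shading inputs straight to an RGB result:

```python
import torch
import torch.nn as nn

class NeuralMaterial(nn.Module):
    """Hypothetical stand-in for a multi-step shading graph.

    Instead of evaluating albedo, normal, roughness, and lighting
    passes separately, a small MLP maps the shading inputs directly
    to an outgoing RGB value.
    """

    def __init__(self, hidden: int = 64):
        super().__init__()
        # Inputs: 2D UV + 3D view direction + 3D light direction = 8 floats
        self.net = nn.Sequential(
            nn.Linear(8, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB out
        )

    def forward(self, uv, view_dir, light_dir):
        x = torch.cat([uv, view_dir, light_dir], dim=-1)
        return self.net(x)

# One network evaluation per shading point, instead of a chain of
# texture fetches and shader graph nodes.
material = NeuralMaterial()
rgb = material(torch.rand(1024, 2), torch.rand(1024, 3), torch.rand(1024, 3))
```

At render time, the whole material then boils down to one network evaluation per shading point rather than a chain of texture fetches and graph nodes.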

With its new neural materials, Nvidia takes these input textures and rendering procedures and produces a neural network that can approximate the output of those steps far more quickly.
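Nvidia hasn't detailed its training pipeline here, but conceptually that distillation step could look something like the sketch below: sample random shading inputs, evaluate the original shading graph as ground truth (a placeholder function in this example), and regress the network onto its output.

```python
import torch
import torch.nn as nn

def reference_shading_graph(x):
    # Placeholder for the expensive, multi-step shader graph being
    # distilled; a real pipeline would evaluate the full material here.
    return torch.sigmoid(x[:, :3] * 2.0 - x[:, 3:6])

net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(10_000):
    inputs = torch.rand(4096, 8)              # random shading inputs
    target = reference_shading_graph(inputs)  # "ground truth" from the graph
    loss = nn.functional.mse_loss(net(inputs), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```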

The crux of the technique appears to be similar to the generative latent textured objects technique detailed in the video below. In essence, it replaces the multiple fixed steps of the shading graph approach with neural textures: single learned textures that fold together several of the key pieces of information that would normally be spread across separate steps.
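Here's a rough illustration of that neural texture idea (again, our own sketch rather than Nvidia's code): a single trainable feature grid holds, in its channels, what would normally live in separate albedo, normal, and roughness maps, and a tiny decoder turns the sampled features into a final RGB output.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTexture(nn.Module):
    """Hypothetical single learned texture replacing several maps.

    One trainable feature grid stores the information a shading graph
    would spread across separate texture maps; a small decoder turns a
    sampled feature vector (plus view/light directions) into RGB.
    """

    def __init__(self, channels: int = 16, resolution: int = 512):
        super().__init__()
        self.features = nn.Parameter(
            torch.randn(1, channels, resolution, resolution) * 0.1)
        self.decoder = nn.Sequential(
            nn.Linear(channels + 6, 32), nn.ReLU(),
            nn.Linear(32, 3),
        )

    def forward(self, uv, view_dir, light_dir):
        # grid_sample expects coordinates in [-1, 1]
        grid = (uv * 2.0 - 1.0).view(1, -1, 1, 2)
        feats = F.grid_sample(self.features, grid, align_corners=True)
        feats = feats.view(self.features.shape[1], -1).t()  # (N, channels)
        return self.decoder(torch.cat([feats, view_dir, light_dir], dim=-1))

tex = NeuralTexture()
rgb = tex(torch.rand(256, 2), torch.rand(256, 3), torch.rand(256, 3))
```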

Regardless of exactly how it works, Nvidia’s performance claims are certainly impressive. The company claims this new tech results in a minimum performance increase of 12x, with up to 24x faster rendering being possible. That’s a huge leap whichever way you look at it.

What’s more, the results really do look amazing. Nvidia proclaims that the technique “opens up the door for using film-quality visuals in real-time applications such as games and live previews,” and at least in terms of visual quality, we can’t argue.

The neural materials versions of the test render scenes Nvidia provides are all but indistinguishable from the traditionally rendered versions. The model is also scalable, so users can opt for different levels of detail depending on their needs.

There are a few obvious drawbacks, though, particularly when it comes to gaming. The first is that this technology is specifically aimed at making really high-end visuals, and although it’s much faster than traditional rendering, this level of detail is still slightly outside the scope of what’s currently sensible to use for games. It’s all well and good having a stunning-looking single teapot in a scene, but you also have to render the entire rest of the scene.

The second factor is that the focus here is on ray-traced imaging, so the potential gains are only really for games that are already pushing the limits of how fast they can run. It’s also notable that the performance uplift here concerns how fast this particular step of the process runs, not how fast the scene renders as a whole. Ray tracing is still hugely computationally expensive, even with cleverly rendered, high-detail objects.
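To put that caveat in numbers, here's a back-of-the-envelope Amdahl's-law estimate. The 30% share of frame time spent on material evaluation below is an assumption we've made up for illustration, not a figure from Nvidia, but it shows how a 12x uplift on one step translates into a much smaller whole-frame gain.

```python
# Back-of-the-envelope Amdahl's-law estimate. The 30% material-evaluation
# share is an assumption for illustration, not a figure from Nvidia.
material_share = 0.30   # fraction of frame time spent evaluating materials
speedup = 12.0          # Nvidia's minimum claimed uplift for that step

new_frame_time = (1 - material_share) + material_share / speedup
print(f"Whole-frame speedup: {1 / new_frame_time:.2f}x")  # ~1.38x
```

In other words, even the minimum claimed 12x speedup on material evaluation alone would make the whole frame only around 1.38x faster under that assumption.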

The final factor is that, like DLSS, this is a proprietary technique that takes advantage of Nvidia’s Tensor cores, rather than an open system that’s available for use on any neural processing core. It’s as yet unclear how feasible it might be for the likes of AMD to once again engineer an openly available equivalent – as it has done with FSR – but here’s hoping it’s a possibility.

For more on Nvidia’s previous AI-enhanced graphical innovations, check out our Nvidia DLSS guide, or find out how AMD’s competing FSR technologies have taken on Nvidia’s proprietary upscaling and frame generation systems.