DLSS has been a debated feature for many years now. In some scenarios, RTX graphics cards featuring DLSS 1 and DLSS 2, or the ground-breaking DLSS 3, can perform incredibly well and enhance the gaming experience, improving framerates and visuals. In other use cases, though, the benefits might be negligible. So, whilst a large proportion of PC gamers can benefit massively from the technology, some may never see these benefits at all. It is really important for consumers to understand the tech and, more importantly, if, how and when they will use it.
What is DLSS?
Deep Learning Super Sampling (DLSS) is a technology developed by NVIDIA to solve a variety of problems with anti-aliasing, which we'll go into in the following sections.
DLSS addresses the trade-off between image quality and system performance. In most cases, increasing the resolution and quality of visuals in a game (using graphics settings) is resource-intensive, leaning heavily on both the CPU and GPU; this can often lead to a decrease in performance. You might be familiar with terms like frame drops or lag, where the GPU is unable to deliver frames quickly enough because it has run out of headroom. DLSS aims to solve this problem by using artificial intelligence to upscale a lower-resolution image, increasing the apparent resolution without the full rendering cost.
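As a rough illustration of why this matters, the short Python sketch below compares the number of pixels a GPU would have to shade at different resolutions. It assumes shading cost scales roughly with pixel count (which ignores CPU-bound work and other overheads), and the resolution figures are simply the common display presets:

```python
# Rough pixel-budget comparison: why rendering at a lower internal
# resolution and upscaling saves so much GPU work.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

native_w, native_h = resolutions["4K"]
internal_w, internal_h = resolutions["1080p"]

native_pixels = native_w * native_h        # ~8.29 million pixels
internal_pixels = internal_w * internal_h  # ~2.07 million pixels

print(f"Native 4K shades {native_pixels:,} pixels per frame")
print(f"Internal 1080p shades {internal_pixels:,} pixels per frame")
print(f"That is roughly {native_pixels / internal_pixels:.1f}x fewer pixels "
      "for the GPU to shade before the upscaler fills in the rest")
```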
The DL in DLSS is the critical part. Deep Learning requires developers to use NVIDIA's tools, such as the Streamline framework, or the Unreal Engine plugins and native support in Unity, to hook their games into the trained AI models. This offers a relatively easy-to-use, plug-and-play framework for developers, identifying the motion vectors, depth data and other parameters that are required for super-resolution.
Naturally, this training aspect is paramount for the whole system to work, because by the time you purchase a new AAA title, all of the heavy lifting has already been done away from your PC. All that is left is for the Tensor Cores to work with the data the game provides, calculating the super sampling whilst you are playing, based on training carried out long before the game reached you.
This artificial intelligence and machine learning, running on the Tensor Cores at the heart of NVIDIA's RTX graphics cards, is what drives DLSS, enhancing the performance of your gaming PC through upscaling and often delivering an increase in framerate.
DLSS in practice
So, how does this work in practice? Well, if you imagine a first-person shooter game, it is likely that the developer will only want certain aspects of the game running through the DLSS engine. Whilst the scenery and characters are certainly going to need upscaling, less graphically intensive aspects like the HUD, main map, menus and mini-map can be left to the GPU to render with minimal overhead. This is all part of the graphics pipeline: how and when the aspects of the game are rendered on-screen. Performing this work in the development stage means less GPU-intensive work for the gamer.
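To make that ordering concrete, here is a minimal Python/NumPy sketch of a frame loop: render the 3D scene at a reduced internal resolution, upscale it, then draw the UI at native resolution on top. The function names and the nearest-neighbour upscale are purely illustrative placeholders, not a real engine or the actual Streamline/DLSS API:

```python
import numpy as np

def render_scene(internal_res):
    h, w = internal_res
    return np.random.rand(h, w, 3)             # stand-in for a shaded frame

def run_upscaler(colour, native_res):
    # Naive nearest-neighbour upscale standing in for DLSS/FSR/XeSS.
    scale = native_res[0] // colour.shape[0]
    return colour.repeat(scale, axis=0).repeat(scale, axis=1)

def draw_ui(frame):
    frame = frame.copy()
    frame[10:40, 10:200] = 1.0                  # a crisp, native-resolution HUD bar
    return frame

# The expensive 3D work happens at the lower resolution; the HUD, menus
# and mini-map are composited afterwards at full resolution.
internal_res, native_res = (540, 960), (1080, 1920)
final = draw_ui(run_upscaler(render_scene(internal_res), native_res))
print(final.shape)                              # (1080, 1920, 3)
```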
It is important to note that NVIDIA makes this cross-IHV (Independent Hardware Vendor) framework available to competing vendors and technologies such as Intel's XeSS and AMD's FSR, which means any game that uses Streamline for DLSS could be modified quickly and easily by developers for cross-vendor support. There are multiple technology integrations available to developers: DLSS, DLAA, NVIDIA Real-time Denoisers (NRD), NVIDIA Image Scaling, and Reflex (NVIDIA's latency-reducing technology that provides a smoother and more responsive game feel).
"Instead of manually integrating each SDK, developers simply identify which resources are required for the target super-resolution plug-ins and then set where they want the plug-ins to run in their graphics pipeline," Nvidia explains on the Streamline portal. "Making multiple technologies easier for developers to integrate [..] benefits gamers with more technologies in more games."
What does DLSS do?
Super sampling is a form of anti-aliasing which can smooth the noticeable jagged edges that are visible on rendered graphics. Few things disturb the immersion or atmosphere of a game more than jagged edges appearing on an otherwise beautifully rendered scene. Anti-aliasing comes in many forms, and can be confusing for those who aren't sure what each setting means - which often results in a lot of tweaking and testing to get things just right. DLSS aims to rid gamers of this process, allowing you to flick a switch and benefit from fast, accurate anti-aliasing.
Traditional Anti-aliasing
When we talk about anti-aliasing, there are multiple methods of achieving the same desired result. As we play games, animations are delivered to the monitor display in the form of frames. When you are playing at 60 FPS, you are seeing sixty rendered frames every second, and your preferred anti-aliasing method is working on those 60 frames in milliseconds to quickly smooth out jagged lines. Getting your head around this can feel a bit like watching Back to the Future: anti-aliasing works with the past and present, and predicts the future. It smooths frames using pixel colour approximations or edge detection informed by previous frames, and for the most part it succeeds. Unfortunately, this is not without its drawbacks. Anti-aliasing is costly in terms of GPU processing capabilities, as we'll discover.
Below you can see the result of an anti-aliasing method called SMAA (Subpixel Morphological Anti-Aliasing) in Rise of the Tomb Raider. SMAA uses edge detection and can blur pixels where jagged edges are forming, but it needs a lot of GPU resources to take samples along those edges. For example, in a city scene such as you see in GTA V or Watch Dogs, SMAA would be working very hard to smooth out the many harsh edges of buildings, windows and the like. SMAA delivers a higher quality result than FXAA, but it is a slower process; in these types of scenes, TAA is a superior method that doesn't chew up as many resources.
Without anti-aliasing:
With anti-aliasing:
FXAA (Fast Approximate Anti-Aliasing) uses a high contrast filter to find the edges within a rendered frame, then samples and blends the pixels along those edges to produce a smoother appearance.
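As a very rough, FXAA-inspired illustration (not NVIDIA's actual shader), the Python/NumPy sketch below finds high-contrast pixels from luminance differences with their neighbours and blends those pixels towards a local average. The function name and threshold are purely illustrative:

```python
import numpy as np

def toy_fxaa(image, threshold=0.1):
    """Simplified, FXAA-inspired pass: detect high-contrast pixels via
    luminance differences, then blend them with a local average."""
    # Approximate luminance (Rec. 709 weights).
    luma = image @ np.array([0.2126, 0.7152, 0.0722])

    # Contrast against the four direct neighbours.
    up, down = np.roll(luma, 1, axis=0), np.roll(luma, -1, axis=0)
    left, right = np.roll(luma, 1, axis=1), np.roll(luma, -1, axis=1)
    contrast = (np.maximum.reduce([up, down, left, right]) -
                np.minimum.reduce([up, down, left, right]))

    # Blend high-contrast (edge) pixels towards the neighbourhood average.
    blurred = (np.roll(image, 1, 0) + np.roll(image, -1, 0) +
               np.roll(image, 1, 1) + np.roll(image, -1, 1) + image) / 5.0
    edge = (contrast > threshold)[..., None]
    return np.where(edge, blurred, image)

frame = np.random.rand(120, 160, 3)   # stand-in for a rendered frame
smoothed = toy_fxaa(frame)
```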
TAA (Temporal Anti-Aliasing) is a time-based post-processing form of anti-aliasing, taking samples of the pixels in a frame from different points across the viewport and blending those past and present samples. This results in less blurring than FXAA, but can cause ghosting, where a sample from a previous frame lingers in the current one. TAAU (Temporal Anti-Aliasing Upsampling) uses information from previous frames, rendering at a lower resolution (e.g. 720p) and then upsampling to a higher resolution (e.g. 1080p).
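The temporal idea can be sketched in a few lines. This toy Python example simply blends an accumulated history with the current frame; a real TAA implementation would also reproject the history using motion vectors and clamp it against neighbouring colours to limit the ghosting described above. The function name and blend weight are illustrative assumptions:

```python
import numpy as np

def taa_resolve(history, current, alpha=0.1):
    """Toy temporal blend: keep most of the accumulated history and mix in
    a small amount of the current frame."""
    return (1.0 - alpha) * history + alpha * current

# Accumulate over a few noisy frames of the same scene.
history = np.random.rand(120, 160, 3)
for _ in range(8):
    current = np.random.rand(120, 160, 3)
    history = taa_resolve(history, current)
```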
MSAA (Multisample Anti-Aliasing) looks only at the pixels that might cause an issue, ignoring adjacent pixels that have the exact same colour and only sampling actual edges to anti-alias. MSAA is less demanding than super sampling, but it still taxes your GPU noticeably during gameplay.
Super Sampling
SSAA (Super Sampling Anti-Aliasing) is a very simple form of anti-aliasing, in that it renders a higher resolution image and then resizes it down to the native resolution of your monitor. In this process, colours near edges are averaged, and the downsampled image has far fewer jagged or harsh lines. With SSAA 4x, each final pixel is built from four rendered samples, which are averaged back down into a single pixel colour.
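As a simple illustration of that resolve step, the toy Python/NumPy sketch below averages each 2x2 block of a high-resolution render into one output pixel (the function name and array sizes are illustrative, not part of any real renderer):

```python
import numpy as np

def ssaa_downsample(hi_res, factor=2):
    """Toy 4x SSAA resolve (factor=2 per axis): average each factor x factor
    block of the high-resolution render into one output pixel, which is
    what smooths out the jagged edges."""
    h, w, c = hi_res.shape
    blocks = hi_res.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

hi = np.random.rand(216, 384, 3)   # stand-in for a render at 2x the target size
low = ssaa_downsample(hi, factor=2)
print(low.shape)                   # (108, 192, 3)
```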
The biggest issue - even though it delivers the best result - is the high computational cost of rendering and sampling that higher resolution image. SSAA, therefore, is rarely practical on lower specification gaming PCs.
Traditional anti-aliasing effects that are hardcoded into game settings can offer a satisfying result when it comes to visuals, but when you are trying to play pretty games at high framerates (for smoother gameplay) and high resolutions (for crisper visuals), you need a little more power behind the calculations. DLSS is a form of super sampling, but it uses AI to perform these complex calculations. The primary objective of DLSS is to increase performance, placing less demand on the GPU's traditional shader cores by handing the reconstruction workload off to the Tensor Cores.
What all of this boils down to, in practical terms, is that DLSS can make games rendered at 1440p look as if they are rendered at 4K. Similarly, a 1080p game can appear to be rendered at 1440p.
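For a sense of the internal render resolutions involved, the snippet below uses the per-axis render scales commonly reported for the DLSS 2 quality modes; treat these figures as ballpark values for illustration rather than an official specification:

```python
# Approximate per-axis render scales commonly reported for DLSS 2 quality
# modes (ballpark figures, not an official spec).
modes = {
    "Quality":           2 / 3,
    "Balanced":          0.58,
    "Performance":       0.50,
    "Ultra Performance": 1 / 3,
}

output_w, output_h = 3840, 2160   # targeting a 4K output

for name, scale in modes.items():
    w, h = round(output_w * scale), round(output_h * scale)
    print(f"{name:<17} renders internally at about {w} x {h}")
```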
What about DLAA?
DLSS is not to be confused with DLAA, a technology for users who have spare GPU processing power that can be spent on higher quality visuals. DLAA is an AI-based anti-aliasing mode that uses the same technology as DLSS, but applies it to the native-resolution image in order to maximise image quality rather than improve performance. DLAA is for users who prefer image quality over performance, so it will often be used by high-end gamers.
Main features of DLSS 3
Aside from the above anti-aliasing and framerate benefits, DLSS 3 is also augmented with an optical flow frame-generation algorithm that can effectively double the framerate. In practice, this means the AI creates a frame in between the previous and upcoming frames, based on the motion data it compiles by comparing the pixels of those frames. If that sounds complex, it is more easily explained with these three images:
Image credit: Digital Foundry
As you can see, there are some small artefacts in the AI-generated frame, but these are largely imperceptible during gameplay. This process gives the impression of much more fluid frame transitions.
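To make the in-between-frame idea concrete, here is a toy Python/NumPy sketch that warps the previous frame halfway along a supplied optical flow field and blends the result with the next frame. It is only an illustration of the concept; DLSS 3 Frame Generation uses a dedicated optical flow accelerator and an AI model, not this code, and the function name and inputs are illustrative assumptions:

```python
import numpy as np

def generate_intermediate_frame(prev_frame, next_frame, flow):
    """Toy frame generation: push each pixel of the previous frame halfway
    along its motion vector, then blend with the next frame to paper over
    the gaps the warp leaves behind (one source of small artefacts)."""
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Move each pixel half-way along its per-pixel motion (optical flow).
    half_x = np.clip((xs + 0.5 * flow[..., 0]).round().astype(int), 0, w - 1)
    half_y = np.clip((ys + 0.5 * flow[..., 1]).round().astype(int), 0, h - 1)

    warped = np.zeros_like(prev_frame)
    warped[half_y, half_x] = prev_frame[ys, xs]

    return 0.5 * warped + 0.5 * next_frame

h, w = 120, 160
prev_f, next_f = np.random.rand(h, w, 3), np.random.rand(h, w, 3)
flow = np.zeros((h, w, 2))    # zero motion: result is a simple blend
mid_frame = generate_intermediate_frame(prev_f, next_f, flow)
```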
Of course, purists and enthusiasts have lamented that this could spell the end of traditional benchmarking, but one has to remember that this is an optional feature that can be switched off if desired. Practically speaking, it enables developers to increase the framerate and upscale visuals in their games at will - which is an excellent signpost for how gaming will be changing in the next few years.
Which RTX cards feature DLSS?
First and second generation DLSS are available on all RTX cards, from the 20-series onwards, while DLSS 3 is available on the RTX 40-series.
DLSS 1 launched in February 2019 as a spatial image upscaler included with the RTX graphics cards of the time, and was extensively marketed and reviewed using demanding titles like Battlefield V and Metro Exodus. DLSS 2 launched in April 2020 as an AI-accelerated version of TAAU that runs on the Tensor Cores and is trained generically rather than per game. DLSS 3 launched in September 2022 and may be available to RTX 30-series cards in future, but for now it is exclusive to the RTX 40-series.
Which graphics cards feature DLSS 3?
The RTX 40-series is the flagship product line for DLSS 3, and it would not be a surprise if we see even further additions to the technology some time soon. NVIDIA are constantly evolving their deep learning, machine learning and AI toolsets and technologies, working with multiple IHVs and open-source communities.
NVIDIA GeForce RTX 4090
The flagship of the RTX 40-series, the RTX 4090 stands head and shoulders above almost all graphics cards on the market right now, going head to head with AMD's 7000-series and beating the Radeon RX 7900 XTX overall in independent benchmarks.
DLSS 3 is showcased superbly with the RTX 4090, with great results in framerate increase and overall performance.