Nvidia Has Released The RTX Video Super Resolution Web Video Upscaler
In January, Nvidia announced a new feature for its graphics cards: the RTX Video Super Resolution feature. It applies to upscaling and post-processing of video played in Chromium-based web browsers. Nvidia uses AI upscaling with temporal stabilization in games under the DLSS designation, so applying it to web video is a logical extension.
The current GeForce graphics drivers have enabled this feature, and the first comparisons and feedback are coming in. Nvidia released a new driver, version 531.18, on Feb. 28, and one of the headline additions in this update is RTX Video Super Resolution.
According to the original announcement, it was supposed to be available in February, so this was a last-minute release. The feature also requires browser support; for now, it's available in Google Chrome and Microsoft Edge, provided you have a current version of either.
Once your drivers and browser are updated, you can use upscaling via RTX Video Super Resolution on sites like YouTube, Twitch, Hulu, and Netflix. According to Nvidia’s documents, a different (new) neural network is used than what’s in DLSS, and training is done on different data.
RTX Video Super Resolution works purely with image data. Unlike DLSS, it doesn’t have “meta” aids like motion vectors and depth information from the game engine. On the one hand, the filter detects and enhances edges; on the other hand, it also tries to remove compression artefacts.
Nvidia does not indicate that it has a temporal dimension like DLSS 2.x, that is, the ability to combine detail from multiple consecutive frames. It is, therefore, purely spatial, like DLSS 1.
According to the slide describing how it works, the input image is first scaled to the target resolution by a common bicubic algorithm. Then the AI generates changes to improve contrast and edge sharpness on that image. Temporal reconstruction does not feature anywhere.
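The two-stage pipeline described on the slide can be sketched in simplified 1-D form. This is purely illustrative: the first stage below is the standard cubic-convolution ("bicubic") resampler, while the `enhance` stage is a plain unsharp mask standing in for Nvidia's unpublished neural network, which instead predicts per-pixel contrast and edge corrections.

```python
import numpy as np

def cubic_weight(t, a=-0.5):
    """Keys cubic-convolution kernel (the usual 'bicubic' weights)."""
    t = abs(t)
    if t < 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * (t**3 - 5 * t**2 + 8 * t - 4)
    return 0.0

def bicubic_upscale_1d(signal, factor):
    """Stage 1: scale the input to the target size with bicubic weights."""
    n = len(signal)
    out = np.empty(n * factor)
    for i in range(len(out)):
        x = i / factor                       # position in source coordinates
        base = int(np.floor(x))
        acc = 0.0
        for k in range(base - 1, base + 3):  # 4-tap support
            acc += cubic_weight(x - k) * signal[min(max(k, 0), n - 1)]
        out[i] = acc                         # edge samples are clamped
    return out

def enhance(upscaled, strength=0.5):
    """Stage 2 (stand-in): add a sharpening residual where the real
    network would predict contrast/edge corrections per pixel."""
    blurred = np.convolve(upscaled, np.ones(3) / 3, mode="same")
    return upscaled + strength * (upscaled - blurred)

# Full pipeline: bicubic base upscale, then the correction pass on top.
frame_row = np.array([0.0, 0.2, 0.8, 1.0])   # one row of a tiny frame
result = enhance(bicubic_upscale_1d(frame_row, 2))
```

The key structural point matches the slide: the AI does not produce the upscaled image from scratch; it only modifies an image that a conventional resampler has already brought to the target resolution.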
HDR video is not supported
The filter supports input videos with resolutions from 360p to 1440p. HDR videos, YouTube Shorts, and some videos with copy protection (DRM) are not supported, although support for HDR could eventually be added.
It may require adjustments and other tweaks, but it is possible. Upscaling works in a window or within a web page, as well as in full screen, and is activated when the video is played at any resolution higher than its input resolution. When playing at native resolution, RTX Video Super Resolution is inactive, i.e. it cannot be used purely to remove compression artefacts.
Nor does it support downscaling. Upscaling is also disabled when the browser window is in the background or minimized, and when the video is paused. This saves power, of which RTX Video Super Resolution consumes quite a bit.
According to tests, engaging the tensor cores can significantly increase power consumption compared to traditional video playback. ComputerBase measured tens of watts of additional draw on an RTX 3050 card; Tom's Hardware, on the other hand, reports very small increases.
What do you need?
You need to enable this feature in the driver settings in the Nvidia Control Panel. Go to the video playback settings tab (Adjust Video Image Settings), where the new feature appears under “RTX Video Enhancement”.
Here you check "Super Resolution" to activate upscaling. It's possible to adjust the quality, with the default setting of 1 expected to work on all RTX 3000- and RTX 4000-generation cards, according to Nvidia. Higher levels need more processing power.
Nvidia states that with quality 4, playing "most content" should be possible on GeForce RTX 3070/4070-class cards. By "most", they probably mean that the GPU can handle upscaling for certain combinations of input resolution and frame rate; upscaling a 30fps video is less demanding than some 120fps video.
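The frame-rate dependence is simple arithmetic: the workload scales roughly with the number of output pixels the filter must produce per second. This is a crude proxy (the per-pixel cost of each quality level is not published), but it shows why frame rate matters as much as resolution:

```python
def pixels_per_second(width, height, fps):
    # Output pixels the filter must produce each second --
    # a rough proxy for how demanding a stream is.
    return width * height * fps

light = pixels_per_second(1920, 1080, 30)    # 1080p output at 30 fps
heavy = pixels_per_second(3840, 2160, 120)   # 4K output at 120 fps
print(heavy / light)  # 16.0 -- 4x the pixels times 4x the frames
```

So a 4K/120fps stream demands roughly sixteen times the throughput of a 1080p/30fps one, which is why a given card may handle "most content" but not every combination.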
RTX Video Super Resolution uses tensor cores and is only available on GeForce RTX 3000 or RTX 4000 cards. Eventually, support for the GeForce RTX 2000 is supposedly coming as well, but those will reportedly need algorithm adjustments and won’t get the feature until later.
A specific date has yet to be communicated. By the way, Nvidia probably doesn’t support RTX Video on professional graphics yet, only on GeForce models.
On laptops, watch out for the battery
On laptops, you additionally need to set the web browser to high performance in the Windows graphics settings, so that Optimus technology switches on the dedicated graphics when the browser is opened. Unfortunately, once Chrome/Edge is open (even without video playing), power consumption increases and battery life is reduced.
In this case, Nvidia recommends having both Chrome and Edge on the PC, using one of them for video playback (and setting the high-performance dedicated-GPU mode for that one only) and the other for everyday work, so that the laptop's battery isn't drained.
First visual tests
Of course, the most important thing for video playback is the result. Several sites, such as Tom’s Hardware, TechPowerUp and ComputerBase, have looked at the quality achieved by this feature. It will probably be a matter of taste as well.
Still, their impressions suggest the benefit is rather minor or questionable (and at higher input resolutions it's debatable whether it's worth it). AI alone probably can't achieve such improvements without working with the game engine. In particular, the temporal reconstruction, perhaps DLSS 2.x's main trump card, is missing.
Thanks to temporal reconstruction, DLSS can get pretty good quality out of a low internal resolution, but RTX Video can't do that to the same extent. On the other hand, like any post-processing, RTX Video can be a double-edged sword. For example, it has been pointed out that the filter can suffer from a typical "AI" problem commonly seen in smartphone photos: weird, unnatural rendering of bushes and trees.
Also, an oil-painting effect may be created by the combination of blurring (when removing compression artefacts) and sharpening. This effect used to be seen in DLSS 1.0 (it was pronounced in the Anthem implementation). Interestingly, Nvidia's presentation of this feature mostly shows playback of captured gameplay videos rather than the more usual movies/series or "live" content shot with a camera or animated.
Super Resolution doesn't seem to bring much improvement to that kind of content, whereas it may look better on game footage; that type of image is probably better suited to, and less hurt by, the edge-focused processing used.
With further development and training, the rendering of real video could be improved. But upscaling is always a struggle with reconstructing information that is no longer available, so perfect results cannot be expected.