Modern machine learning-based upscalers can sometimes work wonders, and YouTuber 2kliksphilip decided to test just how far this technology could be pushed. He took NVIDIA’s DLSS and began experimenting by drastically reducing the rendering resolution while maintaining a 4K output. For context, using DLSS in Ultra Performance mode on a 4K screen means the game is natively rendered at just 720p. However, the experiment went much further, starting with a resolution of just 38 x 22 pixels. That’s less than 900 pixels total, or a mere 0.01% of the pixels in a 4K image.
Naturally, one shouldn’t expect a high-quality image from such an extreme starting point. However, it’s noteworthy that the final image was coherent enough to distinguish the main objects, separate them from one another, and even make out some larger details. In effect, DLSS was multiplying the pixel count by a factor of nearly 10,000. As the rendering resolution was increased to 136 x 76 or 160 x 96 pixels, the image remained quite poor, but significantly more detail emerged. It’s conceivable that one could even play a game in this mode, provided it didn’t require spotting very small details.
What’s truly surprising is that at a resolution of 401 x 226 pixels, where DLSS increases the pixel count by nearly 100 times, the image quality becomes sufficient to play through a game without major issues. While the graphics might appear a couple of generations older, many people still enjoy classic games with similar visual fidelity. Finally, when the resolution was raised to 764 x 430 pixels, the picture became sharp enough that, from a certain distance, it would be difficult to tell it had been upscaled 25 times.
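The upscaling factors quoted above follow directly from the raw pixel counts. A quick sketch of the arithmetic, taking 4K to mean 3840 x 2160:

```python
# Pixel-count math behind the upscaling factors quoted in the article.
TARGET = 3840 * 2160  # a 4K frame: 8,294,400 pixels

def upscale_factor(width: int, height: int) -> float:
    """How many times the pixel count must grow to reach 4K."""
    return TARGET / (width * height)

for w, h in [(38, 22), (136, 76), (160, 96), (401, 226), (764, 430)]:
    print(f"{w} x {h}: {w * h} pixels, factor ~{upscale_factor(w, h):.0f}x")
```

Running this confirms the figures in the article: 38 x 22 is 836 pixels, a factor of roughly 10,000; 401 x 226 comes out just over 91x, i.e. nearly 100; and 764 x 430 lands at about 25x.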
This impressive feat is not simple upscaling. Modern versions of NVIDIA’s Deep Learning Super Sampling (DLSS) are a suite of AI-powered technologies. The core component, Super Resolution, uses a neural network trained on a supercomputer to reconstruct a high-resolution image from a lower-resolution input by analyzing motion data and previous frames. More recent additions include Frame Generation, which creates entirely new frames between rendered ones to boost FPS, and Ray Reconstruction, an AI model that specifically improves the quality of ray-traced lighting by replacing traditional denoisers.
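DLSS's trained network is proprietary, but the general principle of temporal reconstruction it builds on can be illustrated: warp the previous high-resolution frame along motion data, upsample the current low-resolution frame, and blend the two so detail accumulates over time. The toy sketch below (NumPy, with a fixed blend weight standing in for the neural network, and a single whole-frame motion offset standing in for per-pixel motion vectors) shows the idea only; it is not NVIDIA's algorithm:

```python
import numpy as np

def temporal_upscale(low_res, history, motion, blend=0.9):
    """Toy temporal reconstruction (illustrative, not DLSS itself).

    low_res: current frame at reduced resolution (2D array).
    history: previous reconstructed frame at output resolution.
    motion:  (dy, dx) whole-frame offset, a stand-in for motion vectors.
    blend:   weight given to reprojected history; DLSS instead uses a
             trained network to decide how much history to trust.
    """
    h, w = history.shape
    # Nearest-neighbour upsample of the current frame to output resolution.
    ys = np.arange(h) * low_res.shape[0] // h
    xs = np.arange(w) * low_res.shape[1] // w
    upsampled = low_res[np.ix_(ys, xs)]
    # Reproject history by the motion offset (wrap-around for simplicity).
    dy, dx = motion
    warped = np.roll(np.roll(history, dy, axis=0), dx, axis=1)
    # Exponential blend: detail from past frames accumulates in the output.
    return blend * warped + (1 - blend) * upsampled
```

Because each output frame carries most of the previous one forward, information from many jittered low-resolution frames piles up, which is why the effective detail can far exceed what any single input frame contains.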
While DLSS is often seen as the quality leader, it faces stiff competition. Its main rivals are AMD’s FidelityFX Super Resolution (FSR) and Intel’s Xe Super Sampling (XeSS), and both have introduced their own forms of frame generation to compete with NVIDIA’s offering.
Experiments like this do more than just test technological limits; they point toward the future of real-time graphics. With over 80% of GeForce RTX users enabling DLSS, the technology has become a cornerstone of modern gaming. The ability to reconstruct a detailed image from minimal data allows developers to implement graphically intensive features like full ray tracing (path tracing) while maintaining playable frame rates. Looking ahead, the role of AI in gaming is set to expand dramatically. Experts predict a future with “neural rendering,” where AI could generate textures, assets, and animations in real-time, leading to more dynamic and immersive game worlds. Stress tests that push upscalers to their absolute breaking point are previews of a future where the line between rendered and AI-generated graphics becomes increasingly blurred.