Why, after so many years and all the progress of the PC, is virtual reality still a niche? Over the past few years we have watched NVIDIA and AMD steadily tiptoe away from supporting it, to the point of not even mentioning it in their marketing. Is there a clear intention to sabotage virtual reality, or is it more a matter of technical limitations?
The high technical requirements of Virtual Reality
We have to start from the fact that virtual reality works by completely different rules from rendering graphics to a conventional screen. The main one is that two images are generated simultaneously, one for each eye; although they are presented on the same LCD panel, they are in reality two separate display lists. In other words, without optimizations VR forces the CPU to do twice as much work per frame as in a conventional scenario, but this is not the only element to take into account.
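As a rough illustration, here is a minimal sketch of that doubled workload; the types and functions (`Eye`, `cullScene`, `submitDrawCalls`) are hypothetical placeholders, not any real engine's API:

```cpp
#include <cstdio>
#include <vector>

struct Matrix4 { float m[16]; };
struct DrawCall { int meshId; };

enum class Eye { Left, Right };

// Per-eye camera: same pose, offset horizontally by roughly the IPD (~64 mm).
Matrix4 viewMatrixFor(Eye eye) {
    Matrix4 v{};
    v.m[12] = (eye == Eye::Left) ? -0.032f : 0.032f;
    return v;
}

// Stand-in for real CPU-side visibility culling and draw-call building.
std::vector<DrawCall> cullScene(const Matrix4& /*view*/) {
    return { {0}, {1}, {2} };
}

void submitDrawCalls(Eye eye, const std::vector<DrawCall>& calls) {
    std::printf("eye %d: %zu draw calls\n", eye == Eye::Left ? 0 : 1, calls.size());
}

int main() {
    // Without optimizations, everything below runs twice per frame:
    // culling, draw-call building and submission -- one display list per eye.
    for (Eye eye : { Eye::Left, Eye::Right }) {
        submitDrawCalls(eye, cullScene(viewMatrixFor(eye)));
    }
}
```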
For immersion, latency must also be as low as possible, and by that we mean the entire process: from the moment the player performs an action until its consequence appears before their eyes. That includes not only the time the CPU and GPU take to generate a frame, but also the time spent beforehand interpreting those actions and afterwards sending the result to the screen. This is called motion-to-photon latency; if it exceeds 20 milliseconds, it can cause dizziness and nausea. The ideal? 5 milliseconds for augmented reality applications and 10 milliseconds for virtual reality.
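To make the budget tangible, here is a minimal sketch with assumed stage timings (illustrative numbers, not measurements) checked against the thresholds above:

```cpp
#include <cstdio>

int main() {
    double input_ms   = 2.0;  // reading and interpreting the player's action
    double cpu_ms     = 4.0;  // simulation + draw-call preparation
    double gpu_ms     = 7.0;  // rendering both eyes
    double display_ms = 3.0;  // transmission + panel scan-out

    double motion_to_photon = input_ms + cpu_ms + gpu_ms + display_ms;
    std::printf("motion-to-photon: %.1f ms\n", motion_to_photon);

    // Thresholds cited in the text: above 20 ms risks dizziness and nausea;
    // the ideal is ~10 ms for VR and ~5 ms for AR.
    if (motion_to_photon > 20.0)      std::printf("risk of motion sickness\n");
    else if (motion_to_photon > 10.0) std::printf("above the ideal VR budget\n");
    else                              std::printf("within the ideal VR budget\n");
}
```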
The other problem is that, since the screen sits so close to our eyes, the pixel density, and therefore the resolution, must be very high if we do not want to see a grainy image or the gaps between pixels. This pushes the technical requirements, in terms of raw power, even higher.
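A quick back-of-the-envelope calculation shows why. Pixels per degree (PPD) is what determines perceived sharpness; the resolution and field of view below are assumed example values, not the specs of any particular headset:

```cpp
#include <cstdio>

int main() {
    double pixels_per_eye_horizontal = 2000.0; // assumed per-eye panel width
    double fov_degrees               = 100.0;  // assumed horizontal field of view

    double ppd = pixels_per_eye_horizontal / fov_degrees; // ~20 PPD

    // 20/20 vision resolves about one arcminute, i.e. ~60 pixels per degree,
    // so at ~20 PPD the image still looks grainy and the gaps between pixels
    // remain visible.
    std::printf("headset sharpness: ~%.0f pixels per degree (target: ~60)\n", ppd);
}
```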
Why do we say VR is being sabotaged?
The first of the reasons has to do with USB-C connectivity. One of the special features of the PS5 is that its front USB-C port is switched so that it can carry the signal from the display controller built into its main chip, which means PSVR 2 has a direct connection to the GPU. On PC, by contrast, things are more complicated.
- If we talk about laptops, it is only recently that manufacturers have started to ensure that video outputs other than the one driving the built-in screen offer the same latency.
- On the desktop it is more complicated: the frame buffer must travel over PCI Express to system RAM, be copied there, and only then be sent to the motherboard's USB-C output. This adds several milliseconds of extra latency, which is far from ideal (the quick calculation after this list puts numbers on it).
- The best solution? If the processor integrates its own USB controller, usually through a PCIe-to-USB 3.1 (or higher) bridge, the data can be sent to it directly. The problem is that Intel does not integrate a USB controller in its processors, whereas AMD does.
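Here is a minimal sketch of that extra transport cost, assuming an example per-eye resolution and effective transfer rates (all numbers illustrative):

```cpp
#include <cstdio>

int main() {
    // Assumed per-eye resolution, two eyes, 32-bit color.
    double width = 2000, height = 2040, eyes = 2, bytes_per_pixel = 4;
    double frame_bytes = width * height * eyes * bytes_per_pixel; // ~32.6 MB

    double pcie_gbps   = 8.0;  // assumed effective PCIe transfer rate (GB/s)
    double memcpy_gbps = 16.0; // assumed effective RAM copy rate (GB/s)

    double pcie_ms   = frame_bytes / (pcie_gbps   * 1e9) * 1e3;
    double memcpy_ms = frame_bytes / (memcpy_gbps * 1e9) * 1e3;

    // A few milliseconds of pure transport, added on top of render time --
    // significant when the whole motion-to-photon budget is ~10-20 ms.
    std::printf("PCIe hop: %.1f ms, RAM copy: %.1f ms, total: %.1f ms\n",
                pcie_ms, memcpy_ms, pcie_ms + memcpy_ms);
}
```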
The way around it? Forcing the graphics card and CPU to run at full throttle so frames are generated as fast as possible, which only partially alleviates the problem.
DLSS and FSR are part of VR sabotage
Now, many of you will think that the solution could come from NVIDIA's and AMD's resolution-scaling techniques, which result in more frames being generated. Well, the problem is that they add a fixed amount of latency to do their job, which can be counterproductive. Keep in mind that the rate at which images are generated is one thing, and motion-to-photon latency is another.
That is something many of you will have experienced when using FSR 3 or DLSS 3 and the frame interpolation both include. Take a 60 FPS game interpolated to 120 FPS and another running natively at 120 FPS, and believe us, you will notice how much higher the response time feels in the first case. That breaks immersion in virtual reality, and the inclusion and promotion of these techniques can, in a way, be seen as sabotage of VR.
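A back-of-the-envelope sketch of why: the interpolator must hold back the most recent real frame until the next one exists, so perceived latency tracks the real frame rate, not the displayed one (the exact overhead varies by implementation; these numbers are illustrative):

```cpp
#include <cstdio>

int main() {
    const double real_fps_interp  = 60.0;   // frames actually rendered
    const double shown_fps_interp = 120.0;  // frames shown after interpolation
    const double native_fps       = 120.0;  // truly rendered at 120 FPS

    double native_frame_ms = 1000.0 / native_fps; // ~8.3 ms

    // Interpolation needs frames N and N+1 before it can show the in-between
    // image, so at least one extra real frame of delay is added.
    double interp_latency_ms = 2.0 * (1000.0 / real_fps_interp); // ~33.3 ms

    std::printf("native 120 FPS frame time: %.1f ms\n", native_frame_ms);
    std::printf("60->120 interpolated path: ~%.1f ms before display\n",
                interp_latency_ms);
    std::printf("shown FPS is the same (%.0f); latency is not\n",
                shown_fps_interp);
}
```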
The solution that got lost along the way
In the midst of the failed 3D-screen boom, a little over a decade ago in the DirectX 10 era, Crytek, creator of the popular Crysis, came up with nothing less than a solution for converting a normal image into two stereo images at very little cost. What it achieved was that the processor did not have to compute two images: the geometry of the scene only had to be computed once, not twice.
What the technique did was use the now largely forgotten DX10 geometry shaders to generate a modified, camera-shifted version of the geometry and thus obtain both images. It did not save the cost of pixel shading, the most expensive part of generating a 3D image, but it did help speed things up somewhat. The problem? Geometry shaders fell into disuse after DX10, and to date neither NVIDIA nor AMD has implemented an equivalent mechanism in the raster units of their GPUs.
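To make the idea concrete, here is a minimal sketch of such a geometry shader, written as D3D10-style HLSL embedded in a C++ string; the constant-buffer layout, semantics, and two-viewport setup are our own illustrative assumptions, not Crytek's actual implementation:

```cpp
#include <cstdio>
#include <cstring>

// D3D10-style HLSL for a stereo-duplicating geometry shader, kept as a string
// so the sketch compiles anywhere; in a real app it would be passed to
// D3DCompile() and bound to the geometry-shader stage.
static const char* kStereoGS = R"hlsl(
cbuffer EyeData : register(b0)
{
    float4x4 viewProj[2];        // one view-projection matrix per eye
};

struct VSOut
{
    float4 posWorld : POSITION;  // world-space position from the vertex shader
    float2 uv       : TEXCOORD0;
};

struct GSOut
{
    float4 pos      : SV_Position;
    float2 uv       : TEXCOORD0;
    uint   viewport : SV_ViewportArrayIndex; // selects the left/right viewport
};

// Each input triangle is emitted twice, once per eye, so the CPU submits the
// scene geometry a single time instead of once per eye.
[maxvertexcount(6)]
void main(triangle VSOut tri[3], inout TriangleStream<GSOut> stream)
{
    for (uint eye = 0; eye < 2; ++eye)
    {
        for (uint v = 0; v < 3; ++v)
        {
            GSOut o;
            o.pos      = mul(viewProj[eye], tri[v].posWorld);
            o.uv       = tri[v].uv;
            o.viewport = eye;
            stream.Append(o);
        }
        stream.RestartStrip();
    }
}
)hlsl";

int main() {
    std::printf("stereo GS source: %zu bytes\n", std::strlen(kStereoGS));
}
```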
The real reason: video memory
It is not that there is a conscious sabotage of virtual reality; rather, we must start from the fact that VR requires not only a very high refresh rate but also an extremely high resolution. Other technologies, such as ray tracing, can at least be sold on visual appeal despite their limited popularity among users; VR is much harder to sell. The fact is that:
- Increasing the resolution increases the demand on video memory bandwidth.
- Increasing the frame rate does the same.
What is going on? The biggest limitation of graphics cards today is video memory speed, and that is really why solutions like FSR and DLSS exist in the first place. In any case, as we have already said, they are counterproductive for virtual reality. The quick calculation below shows the scale of the problem.
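Here is that quick calculation, a minimal sketch counting only a single 32-bit write per pixel (real GPUs touch each pixel many more times, so actual traffic is far higher; the VR resolution is an assumed example):

```cpp
#include <cstdio>

int main() {
    double bytes_per_pixel = 4.0; // one 32-bit color write, a lower bound

    // Flat-screen case: 2560x1440 at 60 FPS.
    double flat = 2560.0 * 1440 * 60 * bytes_per_pixel;

    // VR case: two eyes at an assumed 2000x2040 each, at 120 FPS.
    double vr = 2.0 * 2000 * 2040 * 120 * bytes_per_pixel;

    std::printf("flat: %.1f GB/s, VR: %.1f GB/s (%.1fx more traffic)\n",
                flat / 1e9, vr / 1e9, vr / flat);
}
```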