Since GDC 2020 has been delayed until at least summer, Intel has moved its presentations online and published all of them on a dedicated platform. Among them is a talk called "Multi-Adapter Integrated + Discrete GPUs", currently a proof of concept, in which Intel proposes putting the integrated GPU to work alongside a dedicated graphics card in order to improve performance.
The iGPU and the dedicated GPU, working together
Using this method means relying on asynchronous workloads, since the iGPU and the dedicated GPU have different properties and power budgets.
For this, Intel proposes to use Direct3D 12, the graphics API of DirectX, which supports explicit multi-adapter processing. The idea is simply to have the dedicated graphics card "offload" part of the workload to the iGPU; in the proof of concept, the iGPU takes care of the compute shaders while the dedicated GPU concentrates on rendering the graphics.
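The benefit of this split can be sketched with a toy frame-time model (a minimal sketch with made-up millisecond figures; the function name and the numbers are illustrative assumptions, not Intel's measurements):

```python
def frame_time_ms(render_ms, compute_ms, offload):
    """Model one frame of the split described above.

    Without offload, the dedicated GPU runs the render pass and the
    compute pass back to back. With offload, the iGPU runs the compute
    pass in parallel, so the frame is bounded by the slower of the two.
    (Hypothetical model; real gains depend on sync and transfer costs.)
    """
    if not offload:
        return render_ms + compute_ms
    return max(render_ms, compute_ms)

# Assumed numbers: a 10 ms render pass and a 4 ms compute pass.
serial = frame_time_ms(10, 4, offload=False)    # dGPU does everything
parallel = frame_time_ms(10, 4, offload=True)   # iGPU absorbs compute
print(serial, parallel)
```

In this toy model the offloaded frame takes 10 ms instead of 14 ms, which is the intuition behind the approach: compute time that overlaps with rendering effectively disappears from the frame budget.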
In the screenshot above you can see an RTX 2080 Ti paired with Intel UHD 630 graphics, and in fact Intel has also been able to pair an Intel HD 530 with a Radeon RX 480. Intel did not break down how much performance came from each GPU (it only showed the combined result), and explained that this approach could be implemented in two possible ways:
- The first is LDA (Linked Display Adapter). Here we would have a single adapter (D3D device) with multiple nodes, with resources copied between them. Intel noted that this mode assumes symmetry, meaning identical GPUs would have to be used, something that will not apply until Intel starts shipping dedicated graphics cards of its own alongside its iGPUs.
- The second is explicit multi-adapter, which is exactly what Intel demonstrated: using shared resources across an asymmetric configuration.
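The difference between the two approaches can be sketched with simple Python stand-ins for the D3D12 objects (the class and function names here are illustrative, not the real API):

```python
class Device:
    """Stand-in for a D3D12 device. In LDA mode one device spans
    several GPUs, exposed as 'nodes'; in explicit multi-adapter mode
    each GPU gets its own independent device."""

    def __init__(self, gpus):
        self.gpus = gpus

    @property
    def node_count(self):
        return len(self.gpus)


def linked_display_adapter(gpus):
    """LDA: one device covering several GPUs. Assumes symmetry,
    i.e. all linked GPUs are the same model."""
    if len(set(gpus)) != 1:
        raise ValueError("LDA assumes identical (symmetric) GPUs")
    return Device(gpus)


def explicit_multi_adapter(gpus):
    """Explicit multi-adapter: one device per GPU; the devices
    cooperate through cross-adapter shared resources, so the GPUs
    may be completely different (asymmetric)."""
    return [Device([gpu]) for gpu in gpus]


# An iGPU + dGPU pair is asymmetric, so only the second path applies.
devices = explicit_multi_adapter(["UHD 630", "RTX 2080 Ti"])
print(len(devices))
```

This is why Intel's demo uses the second path: an iGPU and a discrete card are never the same model, so the symmetric LDA route is off the table.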
Intel also discussed three kinds of work that could be shared between the adapters. The most obvious is rendering itself, although Intel stated that it does not work well with asymmetric configurations. The second is compute work whose results are sent back to the dedicated GPU, but this requires crossing the PCIe bus twice (a round trip).
The third option, which seems the most effective, covers tasks such as artificial intelligence, physics calculations, particles, composition, shadows, and so on. According to Intel this is the best choice because it follows a producer-consumer model, where data crosses the PCIe bus only once: one GPU produces content that the other consumes. In addition, these tasks can be offloaded completely; that is, as explained before, certain tasks are transferred to the iGPU while the dedicated GPU carries on with the "heavy" work.
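The producer-consumer flow can be sketched with a thread and a queue standing in for the two GPUs and the PCIe bus (a hypothetical model; the names `igpu_producer` and `dgpu_consumer` and the "particles" payload are invented for illustration):

```python
from queue import Queue
from threading import Thread


def igpu_producer(frames, bus):
    """iGPU computes auxiliary work (particles, physics, AI...) and
    pushes each result across the 'PCIe bus' exactly once per frame."""
    for frame in range(frames):
        result = f"particles-{frame}"   # placeholder compute result
        bus.put(result)                 # the single PCIe crossing
    bus.put(None)                       # end-of-stream marker


def dgpu_consumer(bus, rendered):
    """dGPU consumes the iGPU's output while it renders. The data
    never travels back to the iGPU, so there is no PCIe round trip."""
    while True:
        item = bus.get()
        if item is None:
            break
        rendered.append(f"frame-with-{item}")


bus, rendered = Queue(), []
producer = Thread(target=igpu_producer, args=(3, bus))
producer.start()
dgpu_consumer(bus, rendered)
producer.join()
print(rendered)
```

The key property is the one-way flow: unlike the second option above, nothing is copied back after being consumed, which is why this pattern only pays the PCIe cost once.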
The processor's iGPU will now take on greater importance
It is true that many users have a processor with an integrated iGPU that they never use, because they have a dedicated graphics card instead, leaving the iGPU sitting idle. This idea from Intel could shake up the processor market: users who intended to buy a processor without an iGPU, because they were going to rely on a dedicated card anyway, will now have to think twice, if the idea is finally implemented.
Of course, before jumping into the pool we have to remember that Intel has only shown a proof of concept so far. It has not yet shown how much performance improves when part of the workload is transferred to the iGPU, nor how this approach affects power consumption.