The £239/$249 Nvidia GeForce RTX 3050 is the long-awaited desktop counterpart to the RTX 3050 that debuted in laptops last year, bringing a vastly expanded power envelope and twice the VRAM of the mobile version. That should translate into better performance, but how much more can Nvidia squeeze out of the GA106? To find out, we paired the new GPU with a high-end PC system and tested it in all of our favorite games.
In general, we want new-generation GPUs to perform in the same range as higher-tier parts from the previous generation. So for the RTX 3050, we'd like to see performance in line with the outgoing RTX 2060, which remains a pretty good choice for 1080p to 1440p gaming – and DLSS support provides a significant performance boost that can often offset the extra strain of enabling ray tracing. If the RTX 3050 can bring those features down to a lower price point without compromising performance, Nvidia could have a winner on its hands.
The RTX 3050 slots in between two members of AMD's competing Radeon lineup: it's $50 more than the RX 6500 XT released a few days ago and $80 less than the RX 6600 released last year. That ought to mean significantly better performance than the former, but it will be interesting to see which of these three contenders offers the better overall value proposition.
Note that our GPU tests were performed on a PCIe 3.0 system, which doesn't do the 6500 XT any favors – that card only has a four-lane PCIe link, which is fine on PCIe 4.0 boards but somewhat bandwidth-limited on PCIe 3.0 boards, with a corresponding performance drop-off in some games. This is something we'll have to live with, as we don't have time to retest every GPU on a PCIe 4.0 system to mitigate an issue that primarily affects a single card – but it's something you should be aware of.
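For a sense of scale, a four-lane link delivers roughly half the bandwidth on PCIe 3.0 that it does on PCIe 4.0. Here's a minimal sketch of that arithmetic, using the standard 8 GT/s and 16 GT/s per-lane rates and 128b/130b encoding overhead (the function itself is just our back-of-the-envelope illustration, not part of our test methodology):

```python
# Rough numbers behind the PCIe concern: per-lane throughput times lane count.
# PCIe 3.0 runs at 8 GT/s per lane and PCIe 4.0 at 16 GT/s, both with 128b/130b encoding.

def pcie_bandwidth_gb_s(gt_per_s: float, lanes: int) -> float:
    """Approximate one-way PCIe bandwidth in GB/s after 128b/130b encoding overhead."""
    return gt_per_s * (128 / 130) / 8 * lanes

print(pcie_bandwidth_gb_s(8.0, 4))   # PCIe 3.0 x4: ~3.9 GB/s
print(pcie_bandwidth_gb_s(16.0, 4))  # PCIe 4.0 x4: ~7.9 GB/s
```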
Nvidia GeForce | RTX 3060 Ti | RTX 3060 | RTX 3050 | GTX 1660 Ti | GTX 1050 Ti |
---|---|---|---|---|---|
GPU | GA104 | GA106 | GA106 | TU116 | GP107 |
CUDA cores | 4864 | 3584 | 2560 | 1536 | 768 |
Memory | 8GB GDDR6 | 12GB GDDR6 | 8GB GDDR6 | 6GB GDDR6 | 4GB GDDR5 |
Memory bus | 256-bit | 192-bit | 128-bit | 192-bit | 128-bit |
Memory bandwidth | 448GB/s | 360GB/s | 224GB/s | 288GB/s | 112GB/s |
Base clock | 1410MHz | 1320MHz | 1552MHz | 1500MHz | 1291MHz |
Boost clock | 1665MHz | 1777MHz | 1777MHz | 1770MHz | 1392MHz |
TDP | 200W | 170W | 130W | 120W | 75W |
Die size | 392mm² | 276mm² | 276mm² | 284mm² | 132mm² |
Transistors | 17.4B | 13.3B | 13.3B | 6.6B | 3.3B |
Specs-wise, the RTX 3050 has potential. It uses the same GA106 chip as the solidly performing RTX 3060, with CUDA cores cut from 3584 to 2560 and GDDR6 VRAM reduced from 12GB to 8GB. That's still plenty for 1080p or even 1440p gaming, but the narrower 128-bit memory bus is more of a concern, working out to just 224GB/s of memory bandwidth – lower even than the previous-generation GTX 1660 Ti. The boost clock is 1777MHz, the same as the RTX 3060, with a rated TDP of 130W – modest enough for Nvidia to drop its PSU recommendation to just 550W. The cuts here all look reasonable, with the exception of the memory bandwidth, so it will be interesting to see how the RTX 3050 performs.
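To put that figure in context, peak memory bandwidth is simply the bus width in bytes multiplied by the effective data rate of the memory. Here's a minimal sketch of the arithmetic, assuming the 14Gbps GDDR6 commonly quoted for the RTX 3050 (and the commonly quoted 12Gbps and 15Gbps rates for the GTX 1660 Ti and RTX 3060 respectively):

```python
# Quick sanity check: peak memory bandwidth = (bus width in bytes) x (effective data rate).
# The per-card data rates below are the commonly quoted GDDR6 speeds, not figures measured in this review.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * data_rate_gbps

print(peak_bandwidth_gb_s(128, 14.0))  # RTX 3050:    224.0 GB/s
print(peak_bandwidth_gb_s(192, 12.0))  # GTX 1660 Ti: 288.0 GB/s
print(peak_bandwidth_gb_s(192, 15.0))  # RTX 3060:    360.0 GB/s
```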
Our RTX 3050 sample is a Gigabyte Gaming OC model with 8GB of video memory, three Windforce fans, a backplate, copper heatpipes, a dense heatsink stack and RGB lighting – all of which contributes to performance, of course. It's a well-made card and I'd expect it to retail above the £239 MSRP, especially in the current low-supply, high-demand market. Connectivity is strong, with two DisplayPort 1.4 and two HDMI 2.1 ports arranged in a single row; the rest of the card's two-slot width is given over to cooling. This model is clocked at 1822MHz against the reference design's 1777MHz – a modest 45MHz increase that we wouldn't expect to change how we go about recommending this card.
Given the relatively modest power requirements and the large size of the card, it's perhaps not surprising that we observed cool and quiet operation. Temperatures remained in the mid-60s throughout our testing, allowing opportunistic boosts well above the rated boost clock, to nearly 2000MHz. Thankfully, the relatively low power draw means high-end coolers and power delivery components aren't really necessary, and between Nvidia's opportunistic boost algorithm and a few tweaks in Afterburner, base-spec models ought to hit nearly the same speeds. Be sure to check out multiple reviews to verify that assumption!
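If you want to check that kind of boost behavior on your own card, logging clocks and temperatures while a benchmark runs is the easiest way. Below is a minimal sketch, assuming Nvidia's standard nvidia-smi utility is installed and on the PATH; the one-second polling interval and output filename are arbitrary choices, not part of our test procedure:

```python
# Minimal sketch: poll nvidia-smi once per second and log graphics clock,
# GPU temperature and board power draw to a CSV file while a benchmark runs.
# Assumes nvidia-smi (shipped with Nvidia's drivers) is on the PATH.
import subprocess
import time

LOG_FILE = "gpu_log.csv"  # arbitrary output filename
QUERY = "clocks.gr,temperature.gpu,power.draw"  # standard nvidia-smi query fields

with open(LOG_FILE, "w") as log:
    log.write("timestamp,clock_mhz,temp_c,power_w\n")
    while True:
        result = subprocess.run(
            ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        )
        log.write(f"{time.time():.0f},{result.stdout.strip().replace(', ', ',')}\n")
        log.flush()
        time.sleep(1)  # sample once per second; stop with Ctrl+C
```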
Our test rig is in line with the one used in our RX 6500 XT review a few days ago. We're using a Core i9 10900K system, as it offered excellent gaming performance when the current generation of GPUs launched in 2020, paired with an Asus Maximus 13 Hero Z590 motherboard, dual-channel G.Skill Trident Z Royal DDR4-3600 CL16 memory and a 2TB Samsung 970 Evo Plus NVMe drive from Box. To keep our CPU performance stable, we clocked all cores to 5GHz and kept them cool with a 240mm Eisbaer Aurora liquid cooler. Everything is powered by a reliable 1000W Corsair RM1000x PSU from Infinite Computing.
Now that you’re familiar with the premise, hardware, and our test equipment, let’s take a look at the results!
Nvidia GeForce RTX 3050 Analysis
- Introduction and Hardware Analysis [This Page]
- Doom Eternal, Control, Borderlands 3, Shadow of the Tomb Raider – Gaming Benchmarks Part 1
- Death Stranding, Far Cry 5, Hitman 2, Assassin's Creed Odyssey – Gaming Benchmarks Part 2
- Metro Exodus, Dirt Rally 2, Assassin's Creed Unity – Gaming Benchmarks Part 3
- Control, Metro Exodus, Battlefield 5 – RT Gaming Benchmarks
- Nvidia GeForce RTX 3050 – Digital Foundry's Verdict