Nvidia’s Ampere architecture is well on the way to redefining what gamers expect from high-end graphics cards. With the RTX 3080, Nvidia walked the razor’s edge, delivering outstanding 4K gaming performance for a reasonable price of only $699. Today, I’m looking at the big kahuna: the GeForce RTX 3090 Founders Edition. It’s a massive graphics card inside and out, with an incredible 10,496 CUDA cores and 24GB of GDDR6X memory. This is the cream of the crop, but is it worth $1,499?
Design and Features
The RTX 3090 is a big card in every sense of the word. Pictures don’t do it justice. It dwarfs the RTX 3080 at 12.3 inches long and 5.4 inches wide, and it made me question whether it would even fit in my case with a front-mounted radiator. Thankfully, there’s about an inch of room to spare for airflow and cable management. It’s also thick, taking up a full three slots. Otherwise, it’s almost identical to the RTX 3080, save for larger fans to accommodate the bigger heatsink.
Under the hood, the RTX 3090 is a jaw-dropper. It features 10,496 CUDA cores and 24GB of GDDR6X video memory running at 19.5Gbps on a 384-bit bus, which brings the total bandwidth up to 936GB/s. The rated out-of-box boost clock is 1.7GHz, but with Nvidia’s automatic GPU Boost feature, it’s not uncommon to see the card clock much higher on its own. Our sample routinely ramped up to just over 1.9GHz, which translates to several extra frames per second of performance. As with the RTX 3080, it features the latest version of Nvidia’s three-part RTX processing system: the programmable shader (rasterization), the second-generation RT Core (ray tracing), and the third-generation Tensor Core (AI), which combine for massive performance potential.
TFLOPS aren’t necessarily comparable between different devices (you can’t reliably compare Xbox One TFLOPS to RTX 3090 TFLOPS, for example), but to give you an idea of the performance potential the RTX 3090 offers:
- Programmable Shader: 35.6 TFLOPS (vs 30 TFLOPS on the RTX 3080)
- RT Core: 69 RT-TFLOPS (vs 58 RT-TFLOPS on the RTX 3080)
- Tensor Core: 284.7 Tensor-TFLOPS (vs 238 Tensor-TFLOPS on the RTX 3080)
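These headline numbers follow directly from the published specs. As a rough sanity check (assuming the standard two FP32 operations per CUDA core per clock for a fused multiply-add, and Nvidia’s precise rated boost clock of 1,695MHz rather than the rounded 1.7GHz):

```python
# Back-of-the-envelope spec check for the RTX 3090, from published specs.

# Memory bandwidth: per-pin data rate x bus width, converted from bits to bytes.
data_rate_gbps = 19.5      # GDDR6X, gigabits per second per pin
bus_width_bits = 384
bandwidth_gbs = data_rate_gbps * bus_width_bits / 8
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")   # 936 GB/s

# FP32 (shader) throughput: cores x boost clock x 2 ops per clock (FMA).
cuda_cores = 10_496
boost_clock_ghz = 1.695
fp32_tflops = cuda_cores * boost_clock_ghz * 2 / 1000
print(f"FP32 throughput: {fp32_tflops:.1f} TFLOPS")    # 35.6 TFLOPS
```

Both results line up with Nvidia’s quoted 936GB/s and 35.6 TFLOPS figures.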
Of course, you shouldn’t expect your FPS to scale linearly with the increased CUDA core count. There absolutely will be an uplift if you’re coming from last generation’s 20-series, but the way Nvidia has composed the second half of its CUDA expansion allows those cores to shift tasks, handling FP32 (shading) or INT32 (compute) work depending on what’s needed at the time. As with the RTX 3080, how the cores are utilized and how performance scales will vary between games depending on how each has been programmed.
This level of performance brings with it a number of promises. As the top of Nvidia’s consumer stack, its product page promises “the ultimate gaming experience.” The company claims that it can both play and capture 8K HDR gameplay, a task previously unheard of. Along with that, you can connect an 8K TV and enjoy full-resolution HDR playback thanks to AV1 decode acceleration; Nvidia states the AV1 codec is 50% more efficient than H.264. If you’re a creator, that massive frame buffer opens the door to working with 8K video files, previously a system-crusher, and to holding huge amounts of data in video memory. For 3D modeling, or simply working in multiple creative apps that press the GPU at the same time, a cache that large has the potential to dramatically decrease render times and speed up your workflow. And, of course, when you’re buying the best consumer GPU on the market, you expect the best gaming experience, even at 8K with all the bells and whistles enabled.
With all of that processing power, the RTX 3090 needs a hearty cooling solution to keep its temperatures in check, and the new dual-axial cooler delivers. By all appearances, it’s the same design found on the RTX 3080, only bigger, using two fans to direct air both out the back of the card and into the path of the CPU cooler to be exhausted out the back of the case. Nvidia calls the 3090’s cooler a “silencer,” and it’s easy to see why. The larger heatsink does an outstanding job of keeping the GPU cool, with a peak temperature of 71C across all of my testing and notably quieter acoustics than the RTX 3080. In most games, it hovered around 67-69C, and I could completely forget it was even running behind my case fans. Performance increases while fan noise decreases? It’s true: Nvidia has simply nailed it with its coolers this generation.
Around the back of the card, we have three DisplayPort 1.4 connections and a single HDMI 2.1 port. Collectively, these can power up to four monitors at a maximum resolution of 7680×4320 (8K).
But enough with the background. Let’s see how it performed.
With such power under the hood, I was excited to put the RTX 3090 through its paces. Understanding that this card is uniquely positioned for gamers and creative professionals alike, I knew I would have to expand the scope of testing beyond our usual gaming and synthetic benchmarks. In addition to our current stable of tests, I also examined the 3090’s performance in 3D rendering, video editing, and 8K gaming.
Starting with synthetic benchmarks, I ran the RTX 3090 through 3DMark’s Fire Strike Ultra test and Unigine Heaven to see how it stacked up against our larger crop of GPUs. In these tests, the RTX 3090 didn’t just lead the pack, it dominated.
With those out of the way, I loaded up 3DMark’s DLSS test. This test uses both ray tracing and DLSS upscaling to really push at the edges of what a graphics card is capable of with modern rendering technologies.
For the sake of time, I limited this testing to the RTX 3090 and RTX 3080 Founders Editions, as well as my RTX 2080 Ti sample, the Gigabyte AORUS GeForce RTX 2080 Ti Xtreme (which, it should be noted, is factory overclocked and runs 5-6% faster than a reference 2080 Ti). At 4K, the 3080 and 3090 were neck and neck at 60+ FPS, while the 2080 Ti averaged only 50 FPS. Since the RTX 3090 is marketed as an 8K gaming GPU, I also ran the test at 8K. As a synthetic, it’s not representative of actual gameplay, but the card prevailed with DLSS enabled. The RTX 3080 was a literal slideshow, and the 2080 Ti failed the 8K test within seconds.
Next up are the gaming benchmarks.
[widget path="global/article/imagegallery" parameters="albumSlug=nvidia-geforce-rtx-3090-gaming-benchmarks&captions=true"]
Looking at these results, the RTX 3090 is certainly king of the hill in sheer FPS, though perhaps not by as much as I might have guessed when the card was first announced. These charts capture 1080p, 1440p, and 4K. I wanted to dig a little deeper into 4K performance in particular, so I narrowed my focus to the core competitive cards. Here’s how it stacks up against its nearest last-gen competitor in the consumer GPU market, the RTX 2080 Ti.
The results are impressive, averaging out to a 52% speed boost over the RTX 2080 Ti. That’s especially notable considering the RTX 2080 Ti I had on hand for testing was factory overclocked; against a reference 2080 Ti, the 3090’s lead would be roughly 5-7% greater still.
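To illustrate how that adjustment works, speedup percentages compound multiplicatively rather than adding together. A quick sketch, using my measured 52% figure and assuming a 5% factory-overclock advantage (the low end of the 5-6% range noted earlier):

```python
# Successive percentage speedups compound multiplicatively, not additively.
def combine_speedups(*percents):
    """Combine successive percentage speedups into one overall percentage."""
    total = 1.0
    for p in percents:
        total *= 1 + p / 100
    return (total - 1) * 100

# Measured: RTX 3090 is ~52% faster than my factory-OC 2080 Ti.
# Assumed: the factory-OC card is ~5% faster than a reference 2080 Ti.
lead_over_reference = combine_speedups(52, 5)
print(f"Estimated lead over a reference 2080 Ti: {lead_over_reference:.1f}%")  # ~59.6%
```

In other words, the estimated lead over a stock card grows by several percentage points beyond what I measured directly.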
If Ampere has done anything, though, it’s turned the price to performance expectation on its head. The RTX 3080 outperforms the 2080 Ti in many games, while also retailing for $699 compared to the latter’s $1199 or the RTX 3090’s $1499. Here’s how the RTX 3090 compares:
In this comparison, the card’s edge slims substantially, dropping to just 13% on average. There is still an uplift here, but the 7-18% speed boost isn’t going to be worth the additional $800+ investment for 4K gaming alone for most people. Let’s look at ray tracing performance.
As I reported in my review of the RTX 3080, 4K gaming with RTX and DLSS on at 60+ FPS is a real possibility. That’s even more true here due to the all-around higher FPS the 3090 offers. That said, while some games do appear to be more efficient compared to last generation, the sample size of available games is still too small to draw any hard conclusions. In terms of efficiency, I found the greatest gains over last generation with Minecraft RTX and Shadow of the Tomb Raider while Wolfenstein, Metro Exodus, and Control all remained very close. Compared to the RTX 3080, the percentages are all very close.
With that out of the way, let’s dig into the more unique capabilities Nvidia shared with this card, beginning with 8K gaming. Like most people, I don’t have an 8K display, so the results you see below were captured using Nvidia’s Dynamic Super Resolution (DSR) feature. This renders the game at a higher resolution and then downscales it to match your display: in this case, from 8K down to 4K. According to Nvidia, this lowers performance by 3-5%, so native 8K figures would be slightly higher.
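It’s worth pausing on just how demanding 8K is. A quick pixel count shows why it's such a milestone:

```python
# 8K pushes four times the pixels of 4K, and sixteen times 1080p.
def pixels(width, height):
    return width * height

uhd_8k = pixels(7680, 4320)   # 33,177,600 pixels
uhd_4k = pixels(3840, 2160)   #  8,294,400 pixels
fhd    = pixels(1920, 1080)   #  2,073,600 pixels

print(f"8K vs 4K:    {uhd_8k / uhd_4k:.0f}x the pixels")  # 4x
print(f"8K vs 1080p: {uhd_8k / fhd:.0f}x the pixels")     # 16x
```

Every frame at 8K is four full 4K frames’ worth of shading work, which is why DLSS upscaling matters so much at this resolution.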
Even though Nvidia said it, I admit to being surprised that the RTX 3090 can actually play games at 8K. Only last generation, 4K was still a challenging target to hit. The results in the chart above were taken with each game on its highest preset (Shadow of the Tomb Raider bumped to 16x anisotropic filtering), with RTX and DLSS enabled wherever possible. Seven of the dozen games tested performed near or above 60 FPS.
In other words, the RTX 3090 is fully capable of 8K gaming, but with caveats. The games that performed best fell into two camps: those that are naturally less taxing, like Doom Eternal, and those enhanced specifically for 8K with a DLSS patch, like Wolfenstein: Youngblood; both are astoundingly playable. Games that lack DLSS, like Gears Tactics, or that have it but weren’t designed for 8K, like Metro Exodus, fell well short of the mark.
The game selection is slim, which is to be expected with 8K displays only just emerging, but these results show the potential for 8K gaming to become a real possibility as the technology permeates the market. If you had told me even two months ago that we would actually be talking about gaming in 8K, with ray tracing, at playable frame rates, I wouldn’t have believed you. That Nvidia has been able to pull it off is both impressive and incredibly exciting for how far GPU tech has come. Further, it highlights that DLSS may well be the defining technology of this GPU generation, should developers continue to adopt it.
[widget path="global/article/imagegallery" parameters="albumSlug=nvidia-geforce-rtx-3090-rendering-tests&captions=true"]
Next, I looked at rendering, and this is really where the 3090 came into its own. The additional video memory is a huge asset with a meaningful impact on performance in professional 3D modeling and rendering. This was most clear in the Octane Renderer test, where I completed the Three Head demo file, complete with ray tracing. Compared to the RTX 3080, the additional video memory allowed the GPU to cache the entire scene, never turning to system RAM to make up the difference. This alone dropped the rendering time from 327 seconds down to only 43. Comparing it to the 2080 Ti, we can see the impact of the additional CUDA cores and the generational improvements Ampere brings, dropping the render time from 734 seconds to only 43. Twelve minutes, fourteen seconds reduced to well under one. That’s incredible.
In the Blender test, the RTX 3090 continued to show improvements, and I focused particularly on the BMW test, which specifically targets GPU utilization. There, the RTX 2080 Ti took 5 minutes and 17 seconds to render a scene the RTX 3090 blew through in only 23 seconds. Again, mind-blowing. I also spent some time testing the JunkShop render demo and was impressed at how seamlessly I could work in the live viewport even as the scene rendered in the background. The massive 24GB of GDDR6X video memory made this possible, as the RTX 3080 would crash the program under the same conditions.
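Putting those render times side by side makes the scale of the jump clearer. A quick calculation from the figures above:

```python
# Speedup factors computed from the render times reported in this review.
def speedup(old_seconds, new_seconds):
    """How many times faster the new result is than the old one."""
    return old_seconds / new_seconds

octane = {"RTX 2080 Ti": 734, "RTX 3080": 327, "RTX 3090": 43}  # Three Head demo, seconds
blender_bmw = {"RTX 2080 Ti": 5 * 60 + 17, "RTX 3090": 23}      # BMW benchmark, seconds

print(f"Octane, 3090 vs 2080 Ti:      {speedup(octane['RTX 2080 Ti'], octane['RTX 3090']):.1f}x")
print(f"Octane, 3090 vs 3080:         {speedup(octane['RTX 3080'], octane['RTX 3090']):.1f}x")
print(f"Blender BMW, 3090 vs 2080 Ti: {speedup(blender_bmw['RTX 2080 Ti'], blender_bmw['RTX 3090']):.1f}x")
```

That works out to roughly a 17x generational jump in Octane and nearly 14x in Blender, far beyond anything the gaming benchmarks showed.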
Next, I swapped systems to my Ryzen 9 3950X PC and loaded up Adobe Premiere Pro. I put together a 10-minute 4K video with 12 animated transitions. Rendering the video resulted in only modest improvements: about 35 seconds faster than last generation. Compared to the RTX 3080, the render times were identical. This isn’t exactly surprising, since video rendering leans on most other aspects of your PC as well, so it isn’t purely GPU-bound. It was an important benchmark to include, however, because it illustrates that the “rendering” focus of the card isn’t referring to how quickly an editor will churn out an MP4.
That said, the RTX 3090 did offer a significantly better editing experience once I dove into a real video project. Like most video editors, when I’m cutting together footage for YouTube, I’ll hop between After Effects, Audition, Premiere Pro, and Photoshop depending on what the video requires. The memory buffer and sheer horsepower of the 3090 made that a breeze. On my 2080 Ti, I was used to slowdowns when rendering in one program that would make working in my timeline feel almost painful. That was greatly improved here, and the whole editing process felt much smoother.
The added frame buffer also opens the door to working with 8K footage. Unfortunately, I didn’t have time to test this for myself, but with 24GB of GDDR6X, it makes sense that this would be a much more realistic possibility than on consumer cards of the past.
All of this leads me to a few core conclusions. First off, pricing aside, the RTX 3090 is an incredibly impressive card. It offers the best 4K performance out there, the ability to play games and watch movies at 8K in full HDR, huge improvements to 3D rendering, and smoother creative workflows. It’s also cool and quiet with a peak temperature of only 71C in all of my testing (commonly 69C or less, and notably quieter than the RTX 3080, which was already fairly quiet). This is an objectively excellent graphics card.
Understanding that, it’s clear that this card is much more of a Titan than a Ti. The 4K gaming performance is excellent, but is close enough to the RTX 3080 that the extra $800 just isn’t going to make sense for most people. Where the card comes into its own is in professional workflows: 3D rendering, video editing with multiple apps and massive files, data science… These are the high points of the RTX 3090 and help to explain why it’s more than double the price.
In that way, even calling it the “3090” is confusing. Labeling it the “ultimate” gaming GPU, while technically true, leads one to expect a bigger jump in 4K gaming performance than what’s actually here, especially given how massive the leap was from the RTX 2080 to the RTX 3080. There is absolutely confusion in the market over what this card is.
So here’s the answer: The RTX 3090 is a Titan by another name, and for $1,000 less than last generation’s Titan RTX ($2,499). It’s a generational leap over the 2080 Ti. It is not for the average gamer. It is for the professional 3D artist, the gamer who wants only the best, and the cutting-edge technophiles picking up 8K TVs who need something to drive them. It’s a different class of card than the RTX 3080 entirely. When compared against its real last-generation counterpart, the Titan RTX, even the price doesn’t seem that unreasonable.