A couple of days after NVIDIA announced its newest and finest graphics card, the GeForce GTX TITAN Z, AMD confirmed that a successor to the Radeon HD 7990 was en route to launch. Its code name may sound familiar: "Vesuvius" originally designates a volcano in Italy that destroyed the entire city of Pompeii; now it denotes AMD's most powerful and outrageous (in every sense) graphics card, named per the current nomenclature as the R9 295X2.

The R9 295X2 carries two Hawaii XT GPUs. There were rumors that it would use two Hawaii Pro chips instead of Hawaii XT, for reasons that mostly converge on one criterion: power efficiency. The R9 290X, based on Hawaii XT, consumes significantly more watts than its Hawaii Pro sibling, the R9 290, while the latter easily delivers 90 to 95% of the former's performance. Simply put, Hawaii Pro is more power efficient, and given Total Board Power (TBP) limitations, using two Hawaii Pro instead of Hawaii XT seemed sound. But, as of today, AMD chose otherwise. Not only did AMD equip its upcoming product with the hungrier GPU, it actually overclocked that GPU, making this the world's hungriest graphics card: the official TDP of the R9 295X2 is 500W. That is a necessary evil of being the world's fastest gaming gear, especially for a company whose fastest GPU cannot easily overwhelm the competitor's in terms of power efficiency.

Anyway, since NVIDIA announced that the TITAN Z will deliver 8 TFLOPS of single-precision compute, which implies a severely lower operating clock than current single-GPU flagships such as the GTX 780 Ti or GTX TITAN Black, there is no doubt that the R9 295X2 will retake the "single card king" title. The point to focus on is by how far the R9 295X2 bests the TITAN Z.
Today, on disclosing the performance and other characteristics (power consumption, thermals, etc.) of the R9 295X2, I am also going to try to answer that question by "simulating" a TITAN Z: lowering the operating clock of a pair of GTX 780 Ti cards to fit within 8 TFLOPS at base. (I also made a YouTube clip as a quick summary, but I'm not sure whether linking the URL violates forum rules.)

The test system configuration is as follows. Note that every Kepler-based SKU in this table has both a "base" and a "boost" operating clock. Since the FLOPS number is derived from the base clock, fitting the GTX 780 Ti into 4 TFLOPS (8 TFLOPS for the pair) means its base clock drops to 711MHz while the boost behavior remains unchanged. On the GTX 780 Ti, max boost varies with the ASIC quality of the GPU and is about +150MHz. So our "simulated" TITAN Z operates across a clock range of 711 to 861MHz and thus yields 8 to 9.7 TFLOPS, rather than sitting solidly at 8 TFLOPS. In other words, the actual TITAN Z will very likely be slower than what we demonstrate here. Well, let's torture it.

Here are performance summaries for each scenario (resolution * AA). Power consumption & thermal characteristics: So far, we have seen all the test results and can derive the following pros and cons:

<Pros>
- World's fastest single graphics card.
- Almost as fast as a pair of R9 290X or GTX 780 Ti.
- Significantly less noise than the R9 290 / 290X thanks to the liquid cooling solution.
- No throttling during an over-an-hour test.
- Scalability: CF-able and only two slots wide.

<Cons>
- Very expensive. (yet only half the price of a TITAN Z)
- Historic TDP.
- May be incompatible with some PSUs. (each 8-pin connector needs to bear ~20A)
- May be incompatible with some chassis due to radiator installation.

Thanks for reading. Have a nice day!
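For the curious, the clock figures above follow from simple single-precision FLOPS arithmetic: 2 FLOPs (one FMA) per CUDA core per cycle on Kepler, with 2880 cores per GPU on the GTX 780 Ti. A minimal sketch of that math, plus the per-connector current estimate from the cons list (assuming the PCIe slot supplies its usual 75W and the two 8-pin connectors split the remainder at 12V):

```python
# Single-precision throughput for a Kepler GPU:
# 2 FLOPs (one fused multiply-add) per CUDA core per clock cycle.
def sp_tflops(cuda_cores, clock_mhz):
    return 2 * cuda_cores * clock_mhz * 1e6 / 1e12

CORES = 2880  # CUDA cores per GPU on the GTX 780 Ti

# Downclocked base (711 MHz) and max boost (711 + ~150 = 861 MHz), per pair:
print(f"pair at base:  {2 * sp_tflops(CORES, 711):.2f} TFLOPS")  # ~8.2
print(f"pair at boost: {2 * sp_tflops(CORES, 861):.2f} TFLOPS")  # ~9.9

# Rough current per 8-pin connector at the 500 W TDP, assuming the
# PCIe slot carries 75 W and the two 8-pin connectors share the rest:
amps_per_connector = (500 - 75) / 2 / 12
print(f"~{amps_per_connector:.1f} A per 8-pin connector")
```

The boost-end figure lands slightly above the 9.7 TFLOPS quoted above since real boost clocks vary card to card, and the connector estimate comes out just under 18A, consistent with the "~20A" ballpark in the cons list once losses and margin are allowed for.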
Thanks for the numbers :nerd: btw, did you benchmark all those or just use some sophisticated calculations?
I would say nice, but you're not using the latest Nvidia beta drivers, which are golden for 780 Ti SLI. Without these the review is not really valid... sorry! (I'm not a fanboy trolling btw; the drivers do make a big difference when comparing the two.)
The drivers sure helped in a few select games that were really CPU-bound, but surely the 295X2 is still quite a bit faster than 780 Ti SLI at the resolutions where CPU bottlenecks don't happen.
That table is a bit confusing because of the "Resolution" in the top-left cell. Nice review, but from what I saw the 295X2 had equal or better min fps in most games/settings.

I am surprised at how little GPU reviews have evolved since my "glory" days of university (when I actually had the time to care). It's about time people started to include the median instead of just the average, which is very sensitive to outliers (high and low spikes) and thus not a very good representation of gameplay by itself. The detailed plots of all the frames, where one can easily spot the spikes, are a big plus, but these are usually deconvoluted into average/min/max fps, which in my opinion don't illustrate gameplay fluidity at all (even if the median was included).

If any of you heavy-duty benchmarkers wants to give me their raw benchmark data, I'll be happy to make a few graphs to show you what I think would be a much better presentation of benchmark data.
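To make the mean-versus-median point concrete, here's a minimal sketch with made-up per-frame times (purely hypothetical numbers: an otherwise smooth ~60 fps run with a single 120 ms stutter spike):

```python
import statistics

# Hypothetical per-frame render times in milliseconds, with one stutter spike.
frame_times_ms = [16, 17, 16, 18, 16, 120, 17, 16, 18, 16]

def fps(ms):
    """Convert a frame time in milliseconds to frames per second."""
    return 1000.0 / ms

mean_fps = fps(statistics.mean(frame_times_ms))      # dragged down by the spike
median_fps = fps(statistics.median(frame_times_ms))  # barely affected by it

print(f"mean:   {mean_fps:.1f} fps")
print(f"median: {median_fps:.1f} fps")
```

One spike out of ten frames pulls the mean down to roughly 37 fps, while the median stays around 60 fps, which is exactly why an average alone can misrepresent how smooth (or stuttery) a run actually felt.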
Nice post, and it certainly does look that way. Unless Nvidia has something planned that we don't know about, I'm not sure how the Titan Z can justify $2k, never mind $3k. Would the extra 2GB of VRAM make a difference at 4K resolutions, or are Nvidia planning more optimisations to the DX drivers?
True, it was cutting it too close for most reviews to use them. And I don't see how the Titan Z can justify the massive price tag; AMD would be the obvious choice for a dual-GPU, single-PCB card unless there's something we don't know about the Z.
Like most people here in this thread, I also own 780 Tis in SLI, and personally speaking, I don't think there are games out there which justify this kind of investment in gaming hardware :/ I mean, surely there is a supply of high-end games, but the supply in this segment is not good enough. Two or three top-notch titles per year isn't plenty.
+1 That's the reason why many people, including myself, are still on X58. Even though I have a 4930K, it's not needed tbh.
What worries me the most is the fact that we came to "accept" and somehow "justify" $3000 and $1500 GPUs, dual or not. :bang: Edit: I also wanted to add that most if not all reviews compare the 295X2 with stock 780 Tis. I wanna see them compare two custom 780 Tis. Now that would give the benches a different outcome.
If the 3xx series is indeed 20nm, then I'd be curious to see how the 390X could perform, because honestly I don't know why anyone would buy a 290X over a 290.
I totally agree, but I do see the true 4K hardcore folks with money to burn buying 2 each. Man I hope they don't...what a waste of money.