Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Mar 11, 2016.
Oh yea! WTS
Idk, I think it's easy to say "Nvidia is gimping cards" when there is no clear alternative. I think it's far more complicated than that, though.
I wrote this in another thread but I guess I'll repeat it here. Let's go back to the 680 vs 7970. Both were released the same year, with the 680 generally outperforming the 7970 in most titles. In most games now, though, the 680 falls so far behind the 7970 that it's not even in the same ballpark. Why? I'm not 100% sure, but I think the answer is more complicated than "Nvidia downgrading Kepler lel". The 680 was a 2.5 TFLOP card compared to the 7970's 3.7 TFLOPs. The 680 also had 2GB vs 3GB on the 7970. It's pretty clear that the 7970's hardware was just completely underutilized when it launched. You'll find that this comparison applies down most of the GCN/Kepler and even GCN/Maxwell line. Look at the Fury X compared to the 980Ti in terms of TFLOP output, memory bandwidth, etc. On paper the Fury X should completely annihilate the Ti. I wouldn't be surprised if 2-3 years from now, when 4K is the norm, the Fury X just wins in everything and the same argument is repeated then.
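Those raw-throughput figures can be sanity-checked with the usual back-of-the-envelope formula (shaders × clock × 2, since a fused multiply-add counts as 2 FLOPs). This is my own sketch, not from the thread; the shader counts and clocks below are published base specs, and the exact TFLOP figure shifts depending on which clock (base or boost) you plug in, which is likely why quoted numbers vary.

```python
# Rough theoretical FP32 throughput: TFLOPs = shaders * clock_Hz * 2 / 1e12
def fp32_tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * clock_mhz * 1e6 * 2 / 1e12

# Published base specs (boost clocks would raise the Nvidia figures)
cards = {
    "GTX 680":    (1536, 1006),  # Kepler
    "HD 7970":    (2048, 925),   # GCN
    "GTX 980 Ti": (2816, 1000),  # Maxwell
    "Fury X":     (4096, 1050),  # GCN (Fiji)
}

for name, (shaders, clock) in cards.items():
    print(f"{name:10s} ~{fp32_tflops(shaders, clock):.2f} TFLOPs")
```

Running this gives roughly 3.1 vs 3.8 TFLOPs for the 680/7970 and 5.6 vs 8.6 TFLOPs for the 980 Ti/Fury X, which is the on-paper gap the post is describing.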
In fact I think all of GCN has just been completely underutilized for a while now. You have to remember that GCN was a radical departure from the VLIW architecture before it, whereas Kepler was just a refined Fermi. So from a developer perspective it kind of makes sense that out of the gate, Kepler had better performance overall.
Then you have the console argument. Most AAA titles launching now are the first games whose entire development process was focused on the current generation of consoles, which, as you probably know, feature GCN-based graphics hardware. So obviously developers are going to target features that are beneficial to that architecture type. For example, AMD cards lack the geometry hardware to do tessellation quickly. So as a developer trying to push the best graphics on console hardware, you're going to tend to stay away from tessellation, shadow volumes, etc. as much as possible, and instead focus on, say, vertex-shader-based effects where GCN shines. The other thing that's becoming increasingly popular is compute-based stuff -- not just async mixed graphics/compute but compute in general. Outside of Civ it's hard to find many compute-based graphics benchmarks that aren't in OpenCL, but even with all the drivers up to date, a 780Ti performs worse than a 280X in Luxmark, whereas all the Maxwell cards perform far better than Kepler. As I already posted here, a 950 is almost 3x as fast as a 770. So either Nvidia never bothered getting Kepler's OpenCL compute performance going, or Maxwell is just much faster in compute than Kepler is. And with more games utilizing compute in general, it could help explain why Kepler isn't keeping up with GCN/Maxwell.
And yeah, I do think part of it is also probably Nvidia just not focusing on Kepler as much overall. Are they deliberately doing it? I doubt it. They have a certain amount of resources to distribute across their driver team, and Kepler is obviously going to get less. I just don't think it's the sole reason, and I think it does a disservice to AMD honestly. They caught a **** ton of flak, especially here on Guru3D, about the console stuff. They also got a ton of flak about Mantle effectively being discarded, before the Vulkan stuff. And yet, it's those two things that I honestly think are the main reasons why we are seeing this shift in performance.
That made a good read, Denial, and a lot of what you have explained makes perfect sense. From AMD's side, their cards certainly are standing the test of time compared to Nvidia's, and it will be very interesting to see how the Fury cards perform a year from now, when Maxwell may very well be on its knees.
I just hope that Maxwell will do better than Kepler has done in terms of longevity, as I'm fed up with upgrading graphics cards every year, it's enough already. Had I gone for a couple of R9 290Xs back in the day I could have still been using them now with good fps.
I will not be buying this new card from Nvidia, not a chance, and like I said earlier, if in 6 months to a year Maxwell is where Kepler is now, then I'm done with Nvidia.
I'm sure that both sides were working with Microsoft to get as much info about DX12 as possible. I also don't believe that AMD thought of Mantle on their own. There were rumors about the capabilities of DX12 before Mantle was released, and when Mantle finally was released, it was broken and didn't work, which leads me to believe it was rushed to try and beat DX12.
Well that makes sense! (not sarcasm!)
Looks interesting, wonder what the asking price is gonna be. $400+ is my guess...
Low-level APIs have been around forever, and discussions about them coming to PC go as far back as DX10. Most of the concepts for Mantle were already essentially built as a modified version of DX11 used in the consoles. It was like DX11 with low-level extensions. Then Johan Andersson from DICE basically told AMD he wanted that functionality on PC. I'm sure the initial DX11 bring-up for Xbox heavily influenced Microsoft's decisions with DX12, as I'm sure they were heavily involved.
I do think both worked closely with Microsoft, as evidenced by Maxwell's DX12_1 features. Whether or not Pascal was too late into the design phase to fix the async stuff is a good question, but if it was, I'm sure Nvidia will just figure out a way around it.
Same here. I have literally watched my GTX 770's framerate dip bit by bit over the past 1.5 years in BF4.
I'm guessing 25-30% improvement over same category today...
That's weird. Each time I update my graphics driver I run game benchmarks and write the results into a table (geeky, I know, but I want to be sure I'm not downgrading!), and there have been no changes apart from rare slight increases. I test F1 2012, Batman: Arkham Knight, Batman: Arkham Origins, BioShock Infinite, Metro: Last Light, Tomb Raider, and Shadow of Mordor. (Batman: Arkham Origins saw a significant decrease in framerate upon moving to Windows 10 from Windows 7, but that was the only one.) (Been tracking it since about June 2013.)
It's not that you are losing performance that people are whining about, it's that performance doesn't improve over time like it does on AMD's cards.
Wake me up when the 1080TI has some specs and a release date out in the wild.
Not interested in any other card really.
Oh great. Now I've just bought a Zotac 980 Ti and the next-gen cards are coming out :bang: :bang:
They're going to call it the GTX 1440 lol, and the Ti will be called the GTX 2160.
AMD did develop Mantle by themselves (with some influence from DICE etc.). Mantle was also under development for several years, not just the time it was public.
MS was one of the companies that had access to everything Mantle-related since day 1, and considering the timeframe, the similarities, and the fact that there's GCN in the XB1, there's no doubt that Mantle kicked off DX12 development and that both GCN and Mantle affected how DX12 turned out a LOT.
Also, there were never rumours of DX12 being low level before the Mantle details came public.
IMHO AMD's GPU drivers were simply closing the DX11 performance gap they had (and still have) relative to Nvidia's drivers.
DX11 draw-call performance increased around 50% in AMD drivers during the first 6 months of 2015, but it's still around 50% behind Nvidia's drivers.
Indeed. The 980Ti is the best card I've ever had so the successor must be just as good.
If a 1070 or a 1080 can grant me a 20-40% performance increase over my R9 290, I'll be happy.
I believe that May is way too early for a chip that they didn't even have a fake die to show off at its "unveiling". Also, the complexity and signaling requirements of GDDR5X will probably put its cost on par with HBM, but without those annoying AMD interposer patents to deal with. That, of course, will probably kill smaller form factor cards, not that they matter that much.
In short, I expect "full" Pascal (not the initial 680/780/980 trap, nor the idiotic Titan) around November.
Does anyone have a good guess what this new card will be equivalent to, as in a 970 or an AMD 390? ...thanks