Anyone know of a confirmation that there will be a Ti version before Christmas? I'm starting to dislike SLI, and will go for a single card this time. But my cards are good enough until the end of the year, if there's roughly a 90% chance of a Ti version. If not, I'll just buy a standard 1080 (and regret it when the Ti version arrives).
There is no confirmation on anything regarding the GeForce variants of Pascal. It's all conjecture at this point.
Indeed it is. And if I were to guess, we won't see a Ti until 2017. That's why I got a Ti now, so I can enjoy my games now and hop on the next Ti whenever it arrives.
So a 1070/1080 (x70/x80, or whatever they will call them...) might just come out in September-October, or simply in 2017 using GV104 chips, since Pascal might just skip GeForce? Either way, it's still disappointing that back when I bought Fermi in April 2010, I got a GF100 for €350, while in Oct '14 I paid €370 for a GM204, because they made the 100-class chips cost as much as my whole system did back in 2010. :banana: :infinity:
I agree that GDDR5X will be enough for gamers. Look at what the 980 Ti has done against the Fury X in terms of performance, and that was GDDR5 memory vs HBM1. Also, whether the Pascal cards have GDDR5 or GDDR5X memory doesn't matter that much to me; I'd rather have the overclockability of the Maxwell cards in the Pascal cards.
Clocks are the real interesting question. I don't think you'll be seeing anything remotely like the overclocking headroom available on Maxwell, simply because it's an immature process; that Titan at 1480 boost clocks is really pushing it, imo. I think 1700 is an acceptable conservative estimate, about as high as GP104 will realistically go.
They were both fresh on the shelves the moment I acquired them, 4 1/2 years and 4 generations apart, so isn't it logical!?
In an interview, Raja Koduri implied that there wasn't enough HBM2 to go around for mainstream/gaming cards, and that's why AMD would use HBM1. My guess is that NVIDIA will probably reserve HBM2 for Tesla or really high-end Titans. That also means they will need two memory controller designs, unless their memory controller can handle both HBM and GDDR5(X).
If you mean 1700 as in MHz core OC with Pascal, that's actually better than most Maxwell cards can manage without (hard)mods, no? Everything above 1600 I've seen was LN2 or hardmodded Maxwell 2 cards, iirc.
Yeah, but 1500 MHz on a 1200 MHz stock GPU is a 25% clock increase, whereas 1700 vs 1500 stock is only ~13%. Hence my saying we won't see Maxwell-level headroom.
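For the curious, the headroom math above works out like this (a quick sketch; the stock and OC clocks are just the figures quoted in-thread, not confirmed specs):

```python
def oc_headroom(stock_mhz: float, oc_mhz: float) -> float:
    """Overclocking headroom as a percentage gain over the stock clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

# Maxwell example from the thread: 1200 MHz stock -> 1500 MHz OC
print(round(oc_headroom(1200, 1500), 1))  # 25.0
# Pascal estimate: 1500 MHz stock -> 1700 MHz OC
print(round(oc_headroom(1500, 1700), 1))  # 13.3
```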
Ah sorry, my bad, I somehow missed the 1500 stock clock. Yes, that's my personal upgrade target, if I can hold out until then :nerd:
I know I can't hold out that long, especially since we might be looking at 2018. However, the Pascal Ti should be impressive, hopefully without any of the DP stuff. I thought I upgraded every other generation, until I looked back and found I only skipped the 6 series, lol. Anyway, I can't wait to find out more about GeForce Pascal. I don't really care anymore what type of VRAM it uses; I'm more interested in plain performance numbers.
If Intel has shown us anything about 16/14nm, it's that there is a hard voltage/clock limit (depending on the design) that is quite close to previous generations'. I actually don't believe that either NVIDIA's or AMD's products will clock much higher than before, if they keep their designs in the same vein. NVIDIA seems to be going with a "large Maxwell" for Pascal (that's how it looks for now; I'm waiting for an actual architecture digest), and AMD with another GCN iteration. My guess is that NVIDIA will have the clockspeed advantage again, unless AMD pulls an ace or something.
It will probably be higher, but at higher thermals. Don't forget the 300W TDP for the 1480MHz boost clock.
Is there any extra hardware there, or will the normal cards simply have it disabled via a driver switch? Are they actually going to have different hardware for the consumer cards? It would be interesting to see if GP104 exists.
No, I mean that DP computation consumes more power than SP, and TDP is a worst-case scenario. They're unlikely to use just a software limit; they'll probably laser-cut or remove the DP units entirely. Basically, having DP units on the card won't increase power consumption by much; it's just that doing DP compute is more power-intensive, because you're essentially moving 2x the data.