AMD R9 390X, Nvidia GTX 980 Ti and Titan X Benchmarks Leaked

Discussion in 'Frontpage news' started by vavyn, Mar 14, 2015.

  1. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Yields are likely much better than 50%; both the GTX 980 and Titan X have considerably lower transistor density than the GTX 780Ti and AMD's counterparts.
    That is one of the reasons they clock so high without issues.
     
  2. Texter

    Texter Guest

    Messages:
    3,275
    Likes Received:
    332
    GPU:
    Club3d GF6800GT 256MB AGP
    I was coming from a 25-33% yield estimate for GK110, which makes 50% an optimistic enough guess for me. GM200 and GM204 were custom-tailored with 20/20 hindsight, but even then we can only assume they have optimal yields (without leaked figures).
     
  3. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    Yeah, I dunno wtf those graphs are; what I know is this:

    http://www.soitec.com/pdf/WP_handel-jones.pdf

    It states the cost per million transistors @ 28nm is $0.0140

    8,000,000,000 / 1,000,000 x $0.0140 = $112 per usable die

    http://www.silicon-edge.co.uk/j/index.php?option=com_content&view=article&id=68

    25mm x 25mm die

    89 die per wafer

    Edit: Nvm. I get what is happening here.

    OK: a wafer costs $2634 and 89 Titan X dies fit on a wafer, but if that transistor figure is accurate only 23 of them will be usable, which makes the cost per die about $112.

    So yeah, I guess when they say "a run" they mean a full working batch (89 good dies), which would be about 4 wafers.

    To simplify, just for my own sanity:

    Wafer: $2634
    Gross die per wafer: 89
    Yield: 26%
    Net die per wafer: 23
    Cost per die: $112

    That's only accurate if 1) the $2634 price is for a 300mm wafer, and 2) the cost per million transistors @ 28nm really is $0.0140.
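    A quick sanity check of that arithmetic in Python (taking the $2634 wafer price, 89 gross die, and Jones' $0.0140 per million transistors as given):

        # Back-of-the-envelope cost-per-die check using the figures above.
        WAFER_COST = 2634.0           # USD per 300mm wafer (quoted above)
        GROSS_DIE = 89                # ~25mm x 25mm die per 300mm wafer
        COST_PER_M_TRANS = 0.0140     # USD per million transistors @ 28nm (Jones)
        TITAN_X_TRANSISTORS = 8_000_000_000

        # Jones' metric implies a cost per *usable* die of:
        cost_per_good_die = TITAN_X_TRANSISTORS / 1_000_000 * COST_PER_M_TRANS
        print(cost_per_good_die)                  # 112.0

        # Working backwards, the yield that makes the wafer math agree:
        net_die = WAFER_COST / cost_per_good_die  # ~23.5 good die per wafer
        print(net_die / GROSS_DIE)                # ~0.26 -> ~26% yield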
     
    Last edited: Mar 24, 2015
  4. Texter

    Texter Guest

    Messages:
    3,275
    Likes Received:
    332
    GPU:
    Club3d GF6800GT 256MB AGP
    Jones tells us we can get 2.6M transistors per mm^2 effectively; nVidia tells us they're using 13.33M transistors per mm^2 in GM200. That's a factor-of-five difference.
     

  5. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
    Not exactly true:

    GTX 980, 5200M, 398mm2, Trd=13.065 [M/mm2]
    Titan X, 8000M, 601mm2, Trd=13.311

    GTX 780Ti, 7080M, 561mm2, Trd=12.620
    R9 290X, 6200M, 438mm2, Trd=14.155
    GTX 680, 3540M, 294mm2, Trd=12.041
    HD7970, 4313M, 352mm2, Trd=12.253
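    The same arithmetic in Python, for anyone who wants to extend the list (transistor counts in millions, die area in mm^2, both the commonly quoted figures):

        # Transistor density Trd = transistor count / die area.
        chips = {
            "GTX 980":   (5200, 398),   # (M transistors, mm^2)
            "Titan X":   (8000, 601),
            "GTX 780Ti": (7080, 561),
            "R9 290X":   (6200, 438),
            "GTX 680":   (3540, 294),
            "HD7970":    (4313, 352),
        }
        for name, (mtrans, area_mm2) in chips.items():
            print(f"{name:10s} Trd = {mtrans / area_mm2:.3f} M/mm^2")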

    Regarding BoM:

    [image: BoM comparison chart]
     
  6. Texter

    Texter Guest

    Messages:
    3,275
    Likes Received:
    332
    GPU:
    Club3d GF6800GT 256MB AGP
    Oh that's sad...they just fell short of a TRILLION transistors per wafer lol.
     
  7. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
    But where are you getting 2.6M/mm2 from?
     
  8. Texter

    Texter Guest

    Messages:
    3,275
    Likes Received:
    332
    GPU:
    Club3d GF6800GT 256MB AGP
    Under 'actual gates used' in Table 1 of Jones' paper: 2,610 kilo units/mm^2, so 2.6M for 28nm. But the gate density figures he's using are off for GPUs... his 14/16nm figure is lower than the density AMD and nVidia are achieving right now at 28nm. Of course I could be reading it wrong, but then again, I'm not about to start up my own GPU assembly line.
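    One hedged guess at the discrepancy (my assumption, not something Jones states): if a 'gate' in that table is a NAND2-equivalent of roughly 4 transistors, most of the factor-of-five gap closes:

        # Hypothetical reconciliation: treat one 'gate' as a NAND2-equivalent
        # (~4 transistors, a common convention in gate-count metrics).
        gates_per_mm2 = 2_610_000          # 'actual gates used' @ 28nm (Jones)
        transistors_per_gate = 4           # ASSUMPTION: NAND2-equivalent
        gm200_density = 13_330_000         # transistors/mm^2 quoted for GM200

        implied = gates_per_mm2 * transistors_per_gate
        print(implied)                     # 10,440,000 transistors/mm^2
        print(gm200_density / implied)     # ~1.28x remaining gap, not 5x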
     
  9. gx-x

    gx-x Ancient Guru

    Messages:
    1,530
    Likes Received:
    158
    GPU:
    1070Ti Phoenix
    Since it's obviously not correct, does it even matter? :) Maybe it's a figure for some previous tech, and for a 99% success rate.
     
  10. Texter

    Texter Guest

    Messages:
    3,275
    Likes Received:
    332
    GPU:
    Club3d GF6800GT 256MB AGP
    Come on...just get your stick. You know you want to beat around in the bush a bit as well :nerd:
     

  11. gx-x

    gx-x Ancient Guru

    Messages:
    1,530
    Likes Received:
    158
    GPU:
    1070Ti Phoenix
    Actually, I don't. :/ I find these types of discussions (die sizes, wafer yields) boring; I was just trying to shut it down and get you guys to move on to memory and power-section costs :)
     
  12. HellboundIII

    HellboundIII Guest

    Messages:
    4
    Likes Received:
    0
    GPU:
    Titan X 12GB
    :stewpid: :boob: You expect the preexisting 2x 295s to become 150% faster than 980s in SLI? That's like saying that if the 980s in SLI run at 8ms, your 2x 295s will run at 2ms, which is bs... 2ms is 500 frames per second and 8ms is 125 frames per second, you dope. It won't run 150% better, or even 20% better; in the end it will most likely be beaten. Drivers can't make that big a difference.
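    For reference, the frame-time arithmetic above is fps = 1000 / frame time in ms; a minimal Python check (note that "150% faster", read as 2.5x the frame rate, would mean 3.2ms rather than 2ms):

        # Frame-time <-> fps conversion used in the post above.
        def fps(frame_time_ms: float) -> float:
            """Frames per second for a given per-frame time in milliseconds."""
            return 1000.0 / frame_time_ms

        print(fps(8.0))   # 125.0 -> 8ms is 125 fps
        print(fps(2.0))   # 500.0 -> 2ms is 500 fps
        print(8.0 / 2.5)  # 3.2   -> a true 2.5x speedup of 8ms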
     
  13. HellboundIII

    HellboundIII Guest

    Messages:
    4
    Likes Received:
    0
    GPU:
    Titan X 12GB
    I also forgot to mention that the Titan X does NOT have memory problems. It has a different architecture, it does NOT have the same memory design, and it wasn't the design that caused the problem anyway. As of now, THE ONLY GTX card with the memory problem is the GTX770, and it was only from 4 to 3.5GB. NOBODY has had any problems with the memory except for liars (and I don't know anybody who has claimed it either), so nobody needs to worry about the 3.5GB problem on any nvidia card except the 770.

    I'm also pissed by the fact that the Titan X runs PhysX UNLESS it is disabled, and nobody said it was disabled in the bench; PhysX makes your graphics and rendering SUPREME at the cost of quite a lot of performance. ANYONE saying that PhysX is "no big deal" has not tried a GPU that performs equal to an AMD card "while PhysX is on" to see the difference. I have, because I had the money.

    I tried out the R9 295X2 alongside my Titan X, and the benchmarks are quite different when you take off PhysX :) I MEAN THE TITAN PERFORMED OVER 12% better in most games with all the same settings! In Crysis I got over a 10% performance increase, all on ultra (no anti-aliasing); in Far Cry I got an 8% increase; in BF4 I got a 14% increase! In Shadow of Mordor I got a 7% increase (all the games were ultra with no anti-aliasing @ 4K). I played Minesweeper and got a whopping 0.01 fps total! and the R9 was only doing 0.009 :O I also played Skyrim with ultra settings, the advanced texture addon, and 25 mods loaded, and got a 7.5% performance increase. I have done a few other things with the two, but all in all the performance was BETTER.

    The even BETTER thing about the Titan was that the quality was better, the fill rate was better, and the read/write and memory clocks were better. This meant graphics could be processed better at quicker rates, and the extra VRAM made a huge difference too, especially the usage. The Titan X used more VRAM (without G-Sync or V-Sync) for the same things as the R9, which implies it was caching older textures or buffering and rebuffering textures at the best quality at the cost of VRAM. Some games like Shadow of Mordor would get the fps down to a 40fps minimum (about 38 for the R9), but although it would get down to 40, it would not stutter or lag; it was like playing at 60fps while at 40fps, which may be because of the read/write and memory speeds. The extra VRAM usage may also have helped by keeping the best texture quality, keeping most to all textures pre-loaded, and holding previously needed textures temporarily. I can't confirm this, but going by firmware, drivers, and benchmarks, it definitely made an impact on VRAM usage: over 5% more than the R9 at general load (on the desktop) and 10% or more in most games.

    I would go for an AMD R9 290 for price and quality, but the Titan X is the best way to go if you are looking for a single GPU with the smartest technology; at the cost of a decent amount of performance you get better rendering, 3D animation, and overall quality/looks with a more immersive experience. The temps max at about 84C on the reference cooler no matter how hard you work it, but it can reach 86/87C with quite a decent overclock of +200/+250 core and +500 memory. I bet you can clock over that, but I wouldn't do it...

    I returned the R9 295X2 after getting it because of how badly it would dip with workstation dev programs, although I don't really use them. It wasn't just the dips; the all-in-all quality wasn't where it needed to be for me. I need a card with V-Sync and no dips or stutter. The R9 performed well at 40fps but had some stutter, while the Titan X at 40fps had no noticeable stutter. Just simply top of the line. Even with PhysX on the Titan X doesn't stutter, and it doesn't stutter with V-Sync off either; with it on it would most likely be no different, maybe better, and if there is any stutter that isn't noticeable, V-Sync/G-Sync would most definitely take it away...
     
  14. SuperAverage

    SuperAverage Guest

    Messages:
    247
    Likes Received:
    2
    GPU:
    Gigabyte xtreme 1080
    WTF. First of all: HELLO, WALL OF TEXT.

    Second, why do you keep mentioning the 770?

    Sorry, can't be buggered to read that eyesore with its obvious typos and/or FUD.
     
  15. Fender178

    Fender178 Ancient Guru

    Messages:
    4,194
    Likes Received:
    213
    GPU:
    GTX 1070 | GTX 1060
    :stewpid: It's the 970, not the 770.
     

  16. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    http://en.wikipedia.org/wiki/Free_writing
    Next time he'll do better. If not, then it borders on psychosis.
     
  17. evasiondutch

    evasiondutch Guest

    Messages:
    207
    Likes Received:
    0
    GPU:
    2x-MSI gaming 290x 4gb OC
    See what he said a few replies up.


    What a load of crap from an Nvidia fan :puke2:
     
  18. RavenMaster

    RavenMaster Maha Guru

    Messages:
    1,356
    Likes Received:
    250
    GPU:
    1x RTX 3080 FE
    These benchies seem a bit inaccurate to me. The 980 Ti is said to have the same spec as a Titan X but with only 6GB, and it also has slightly higher clock speeds. Given that, the 980 Ti should come up trumps in most benchies, because the 12GB would only become a factor with a multi-monitor setup or at 4K. So a 980 Ti should beat a single Titan X 1v1.
     
  19. Loophole35

    Loophole35 Guest

    Messages:
    9,797
    Likes Received:
    1,161
    GPU:
    EVGA 1080ti SC
    ^pot


    HellboundIII

    ^kettle


    Word to the wise. Those who live in glass houses should not be the first to let loose stones.
     
  20. Andrew LB

    Andrew LB Maha Guru

    Messages:
    1,251
    Likes Received:
    232
    GPU:
    EVGA GTX 1080@2,025
    Way back when AMD and nVidia jumped the gun and moved to 28nm before it was ready for prime time, the cost per wafer was around $14,000. Due to FAR better yields and faster manufacturing speeds, they're down to around $4,500 per wafer. I'm referring to the 300mm size, obviously. I read a while back that both TSMC and Intel have backed off their timelines for 450mm silicon, and we may not see it till 2020.

    TSMC is yielding around 70% on 28nm right now.
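    Plugging those figures into the same formula Denial used earlier in the thread (assuming his 89 gross die per wafer estimate still applies):

        # Cost per good die at the updated wafer price and yield.
        wafer_cost = 4500.0     # USD, 300mm wafer (quoted above)
        gross_die = 89          # from the earlier 25mm x 25mm estimate
        yield_rate = 0.70       # TSMC 28nm yield quoted above

        net_die = gross_die * yield_rate    # ~62 good die per wafer
        print(wafer_cost / net_die)         # ~$72 per good die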
     
