AMD RX Vega Shown Against GTX 1080 at Budapest Event

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 19, 2017.

  1. Denial

    Denial Ancient Guru

    Messages:
    12,414
    Likes Received:
    1,655
    GPU:
    EVGA 1080Ti
    Die size matters for both cost and maximum performance. A larger die means more transistors and higher cost, and since AMD needs to stay price-competitive, the larger die just ends up eating into their margins.

    Nvidia jointly with TSMC designed its own node, 12nm FFN, for Volta purposely so they can scale up to 800mm2 with GV100. Traditionally the limit is 600mm2. If AMD requires ~480mm2 for 1080 performance, they don't have much room left to scale the chip up. 7nm will obviously help, but it's clear they are spending a lot more transistors than Nvidia for a similar level of performance. Also, the die size number, ~480mm2, is without the HBM2 - just the actual core. That's why there was some initial confusion about the size, as GamersNexus and PCPer didn't know exactly where the core ended. Raja stepped in and confirmed the core is 484mm2.

    AMD plans on sidestepping the 600mm2 limit by using an MCM design, similar to Epyc/Threadripper, supposedly with Navi. But the timeframe on that is 2019, and who knows if it will hold true or what other penalties an MCM design would incur.
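    The cost side of that die-size argument can be sketched with the standard dies-per-wafer and Poisson-yield estimates. All the constants below (wafer cost, defect density, wafer size) are illustrative assumptions, not AMD or TSMC figures:

    ```python
    import math

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        """Rough dies-per-wafer estimate: gross area term minus an
        edge-loss correction for partial dies at the wafer rim."""
        r = wafer_diameter_mm / 2
        return int(math.pi * r**2 / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    def yield_rate(die_area_mm2, defects_per_cm2=0.1):
        """Poisson defect model: yield falls exponentially with die area."""
        return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

    def cost_per_good_die(die_area_mm2, wafer_cost_usd=6000):
        """Wafer cost spread over the dies that actually work."""
        good_dies = dies_per_wafer(die_area_mm2) * yield_rate(die_area_mm2)
        return wafer_cost_usd / good_dies

    # A ~484 mm^2 die vs a ~314 mm^2 (GP104-class) die: fewer candidates
    # per wafer AND a lower yield, so cost rises faster than area.
    for area in (314, 484):
        print(area, dies_per_wafer(area), round(cost_per_good_die(area), 2))
    ```

    The point of the sketch is that cost per good die grows super-linearly with area, which is why a ~480mm2 die competing with a smaller one squeezes margins.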
     
    Last edited: Jul 19, 2017
  2. Noisiv

    Noisiv Ancient Guru

    Messages:
    6,657
    Likes Received:
    499
    GPU:
    2070 Super
    Never mind the temperature. Keep it simple!

    You don't see the correlation between TDP and actual power consumption? After I have just shown that the two are basically the same?
    And you still don't see the correlation?
    Jesus Christ... You can't just say "I don't agree" without saying why. You're trolling me now, aren't you :bang:

    You are saying Fury X can deal with 375W. OK I can believe that.
    But what number of Watts has AMD advertised and communicated to their customers as its TDP? DO YOU GET IT NOW?

    For the 3rd time: Which of my statement(s) is wrong, and why?

    Real-world data:
    [image]

    Exactly. A bigger die is better for power efficiency when compared against a smaller die at a fixed performance level, simply because you can fit more hardware and run it at lower clocks with lower voltage.
    Which makes Vega even more tragic compared to the 1080.

    And yes, it's apples to apples, because Vega's ~480mm2 die is the size of the GPU alone, not counting the interposer or HBM.
     
  3. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,762
    Likes Received:
    2,204
    GPU:
    5700XT+AW@240Hz
    Are they? They are definitely not the same for the HD 7970. And they are definitely not the same for the Fury X.
    One had its TDP greatly overestimated, the other greatly underestimated. Finding a few that suit your theory does not confirm it, especially if you want to establish a TDP correlation for AMD's GPUs, because the GPUs and cooling solutions breaking your "rule" are AMD's.

    And I am sure someone with an nVidia GPU can find cases where the cooling TDP was underestimated or overestimated.
    For me, TDP is a number on paper. But when I look into AMD's vBIOS, I see numbers that matter. Those are hard limits: the moment the GPU draws above that number, it inserts empty cycles to stay at the given limit.
     
  4. Noisiv

    Noisiv Ancient Guru

    Messages:
    6,657
    Likes Received:
    499
    GPU:
    2070 Super
    According to AMD, the Fury X TDP is 275 watts. Which is about as much as it consumes. Nothing under- or overestimated about it.
    In the Fury X case, it's your own theory that, contrary to AMD, blindly equates the cooler's capacity (375W) with the GPU's TDP that gets you in trouble. You're the only one calling the Fury X a 375W-TDP GPU.

    And stop quoting the 7970 missing the mark by a few tens of watts as proof that a best-effort TDP estimate is not equal to power consumption. But damn... you claim something even more mind-boggling: that the two are not even correlated. LOL
    How do you think they come up with the TDP number?

    Yes, AMD was somewhat generous with the 7970's 250W estimate. Yes, they could have gotten away with a 200W TDP. They missed. So what?

    For the 4th and last time: Which of my statement(s) is wrong, and why?
     

  5. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,762
    Likes Received:
    2,204
    GPU:
    5700XT+AW@240Hz
    You do not get it.
    HD 7970: a 200W GPU + 250W TDP = 80°C, and some users had cards shutting down due to overheating VRMs. Maybe you remember my Accelero + the mod to it to keep things cool.
    Then the Fury X: a 275W GPU + 275W TDP = 50-55°C on the GPU. Here the cooling solution clearly performs much better than stated.

    If you want to confirm your theory for AMD, ask Raja this:
    "Are TDP values provided for each Radeon actually meant as GPU Power Consumption?"
    "What is usual value deviation between GPU Power consumption and TDP value for cooling?"
    "What is usual deviation between TDP of cooling solution and value provided by AMD?"

    Only AMD can confirm your theory. And only AMD can confirm that it will apply to those RX Vega TDPs to some degree or another.

    But observing GPU power draw + cooling TDP + the temperature the GPU reaches under given conditions says there is quite some variance.
    And yes, it means the values AMD provides can't be trusted. (I wonder what your comments were around the time of the RX 480 release.)
     
  6. Exascale

    Exascale Banned

    Messages:
    397
    Likes Received:
    8
    GPU:
    Gigabyte G1 1070
    They may be used interchangeably, but technically they are not the same thing. From Intel's whitepaper on TDP:

    "Intel defines TDP as follows: The upper point of the thermal profile consists of the Thermal Design Power (TDP) and the associated Tcase value. Thermal Design Power (TDP) should be used for processor thermal solution design targets. TDP is not the maximum power that the processor can dissipate. TDP is measured at maximum TCASE."

    AMD uses a different definition and method for determining how they rate the TDP of their processors.

    "TDP. Thermal Design Power. The thermal design power is the maximum power a processor can draw for a thermally significant period while running commercially useful software."

    In casual conversation it's probably fine to use them interchangeably, I guess.
     
    Last edited: Jul 19, 2017
  7. Noisiv

    Noisiv Ancient Guru

    Messages:
    6,657
    Likes Received:
    499
    GPU:
    2070 Super
    No, strictly speaking they are not the same,
    but historically they are used interchangeably, and for a very good reason, as I have argued over the last 3 pages.

    I'm out. Taking the dog for a walk :)

    PS
    at least Fox is now demanding an investigation into the possible deviation between TDP and power consumption, which might indicate that he now thinks there might be a correlation after all. Hallelujah!
     
  8. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,762
    Likes Received:
    2,204
    GPU:
    5700XT+AW@240Hz
    @Noisiv: You have a theory. It may very well be a good one. But by my standards it is lacking.

    Please, do the following:
    1) Write down your theory in a meaningful way; no supporting stuff or evidence here. Just a clearly understandable theory.
    2) Draw a thick line.
    3) Write down the supporting evidence.
    4) Evaluate whether each piece of supporting evidence belongs to that theory.
    5) Evaluate whether the supporting evidence really supports your theory.
    6) Evaluate whether the supporting evidence is sufficient.
    - - - -
    Because from here it looks like your simple theory is that AMD's TDP values are about the same as GPU power consumption.

    But your supporting evidence consists of four Nvidia GPUs and one GPU from AMD. Even if I do not question that one AMD GPU, the fact remains that your evidence is just ONE GPU. Nothing more. Please think about it.
     
  9. sammarbella

    sammarbella Ancient Guru

    Messages:
    3,931
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    Let's see what this chip can consume when it's possible to OC it (under water and without BIOS TDP limitations):

    https://www.pcper.com/reviews/Graph...-16GB-Liquid-Cooled-Review/Overclocking-and-C
     
  10. Valken

    Valken Maha Guru

    Messages:
    1,469
    Likes Received:
    79
    GPU:
    Forsa 1060 3GB Temp GPU
    No offense to Hungarians, but one of them looks a lot like one of our company's clients from Romania. :D Still, it's nice to have such beautiful scenery at such an event.
     

  11. jortego128

    jortego128 Member Guru

    Messages:
    105
    Likes Received:
    13
    GPU:
    AMD RX 580 4GB
    @Noisiv and Fox: You are getting a bit OT, no? Go grab a coffee and read some comments at W C C F. I guarantee you will kiss and make up... :)
     
  12. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,762
    Likes Received:
    2,204
    GPU:
    5700XT+AW@240Hz
    I doubt either of us reads that sewer they call a comment section over there. I think of our current discussion more as an academic exercise. There is nothing to be angry about.

    As far as my view of Vega goes, it may draw anywhere from 200 to 350W. I just see no good evidence that AMD's stated TDP value approximately equals GPU power consumption.
    I think AMD picks those values based on target-market sentiment or whatever. Really not much value to me. AMD's cards easily saturate the TDP limit in the vBIOS, so that's the value I care about.
     
  13. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,572
    Likes Received:
    1,430
    GPU:
    HIS R9 290
    Fair enough. That makes a lot of sense.

    TDPs have always been very, very rough estimates from all brands. In fact, I did some tests myself measuring the wattage of various GPUs (both Nvidia and AMD), and both companies sometimes overestimate the wattage by as much as 40%, even when using tests like FurMark. Companies also account for poor ventilation: despite the load being exactly the same, the wattage kept creeping up alongside temperature. Once the GPUs reached their thermal peak, the wattage stayed put.
    The "so what?" is important, because it drives people such as yourself to use it as evidence that the product is worse than it really is.
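    That wattage-creep-with-temperature observation is consistent with leakage power, which grows roughly exponentially with die temperature. A toy sketch with assumed constants (the baseline leakage and doubling interval are illustrative, not measured silicon data):

    ```python
    def leakage_w(temp_c, leak_25c_w=20.0, doubling_c=20.0):
        """Toy model: leakage power roughly doubles every ~20 C above
        a 25 C baseline (a common rule of thumb; constants assumed)."""
        return leak_25c_w * 2 ** ((temp_c - 25.0) / doubling_c)

    # Same dynamic (switching) load, hotter die -> higher total draw,
    # which is the "wattage creeping up with temperature" effect.
    dynamic_w = 180.0
    for t in (25, 45, 65, 85):
        print(t, round(dynamic_w + leakage_w(t), 1))
    ```

    This also explains why the wattage stops climbing once the cooler pins the die at its thermal peak: temperature, and with it leakage, has stopped rising.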


    When OCing is involved, everything gets skewed. Just look at the i9 for example: those chips consume a hugely disproportionate amount of power due to thermal issues. Vega FE is already a hot card from the factory. But the 7900X is actually modestly efficient when you leave it alone; when you OC it, the efficiency plummets like crazy. There's a reason AMD didn't push Vega FE harder, despite the demand for the extra performance.
    To clarify: I'm not saying this is OK, but I also don't think most people are going to buy a notoriously hot $1000+ GPU with the intent of overclocking it on a crappy reference cooler.
     
    Last edited: Jul 19, 2017
  14. Krizby

    Krizby Master Guru

    Messages:
    586
    Likes Received:
    1
    GPU:
    Nvidia Gtx 1080ti
    Just stop man, stay quiet for a little and read a little more:
    https://www.pugetsystems.com/labs/articles/Gaming-PC-vs-Space-Heater-Efficiency-511/

    You're kind of at the deep end of your wits there, comparing the 7970's stock air cooling with the Fury X's watercooling. While watercooling is a more efficient way of removing heat from the GPU to keep GPU temps lower, the heat output into the environment is still equal to the power the GPU uses. A GPU that draws 400W, under air or water, still heats the room as much as a 400W heater; is that too much to understand?

    And a little more schooling: a cooler's TDP only means how much wattage it can dissipate, not how well it cools the GPU underneath it. Two coolers with the same TDP can cool very differently if one has heatpipes and the other has none, or if the TIM used is different. Learn your stuff, dude.
     
    Last edited: Jul 19, 2017
  15. sammarbella

    sammarbella Ancient Guru

    Messages:
    3,931
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    I don't disagree with your conclusion, but my post was more about refuting "this GPU is not likely to consume 400W, even when OC'd".

    Well, it consumes 400 watts and even more than that on a regular basis, under water and OC'ed.

    The big watercooled RX Vega (Radeon RX Vega XTX) will not differ much in hardware from the watercooled Vega FE, apart from the obvious half VRAM amount... and the price.

    http://www.guru3d.com/news_story/ra...r_and_liquid_cooled_xl_xt_and_xtx_models.html

    The Radeon RX Vega XTX, AKA the "gaming" variant, is watercooled and can use more power, clock higher, and sustain that clock.

    The RX Vega 64 and 56 variants will lack that OC margin and those stable clocks, and are cut-down versions of the big one.

    I doubt the 1080 contender at the Budapest event (and the rest of the world tour) is anything other than the RX Vega gaming variant: a watercooled reference GPU. I call it Fury X 2.
     

  16. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,572
    Likes Received:
    1,430
    GPU:
    HIS R9 290
    It may have been a little presumptuous to say an OC'd model won't consume 400W, but I reckon the RX Vega is likely to be a bit less power-hungry than Vega FE, despite the similarities. Liquid cooling ought to reduce the peak wattage (but will increase the minimum wattage).

    Assuming "more power" implies reaching as much as 400W, I really doubt (or hope, anyway) AMD is dumb enough to manufacture a product that exceeds 362W from the factory, including the cooling solution. In case you're wondering, 362W is the 288W recommended limit of the 2x 8-pin PCIe power cables plus the 74W recommended limit of the PCIe slot. In order to remain industry compliant, AMD needs to stay below that wattage.
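    The budget arithmetic in that paragraph is easy to check. The sketch below uses the post's own recommended-limit figures (288W for two 8-pins, 74W for the slot); the nominal PCIe figures would instead be 150W per 8-pin and 75W for the slot, i.e. 375W:

    ```python
    def board_power_budget(n_8pin=2, w_per_8pin=144.0, w_slot=74.0):
        """Recommended board-power budget per the post's figures:
        288 W total across two 8-pin cables (144 W each) + 74 W
        from the PCIe slot."""
        return n_8pin * w_per_8pin + w_slot

    def within_budget(draw_w):
        """True if a card's sustained draw fits the recommended budget."""
        return draw_w <= board_power_budget()

    print(board_power_budget())   # 362.0
    print(within_budget(400.0))   # False: a 400 W card busts the budget
    ```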

    Yep, probably.
     
  17. Noisiv

    Noisiv Ancient Guru

    Messages:
    6,657
    Likes Received:
    499
    GPU:
    2070 Super
    ...and because a GPU is not able to continually accumulate heat, it needs to be able to dissipate the entire 400W of heat, aka TDP = 400W. VOILA! TDP = power consumption (proved for the 5th time, this time with your help :D)

    My point exactly! Thank you AMD

    In the end, neither AMD's "TDP" nor their "maximum power...", nor Intel's definition of TDP is all that terribly precise(*), but the overall meaning is clear:
    Your cooling = N watts (aka TDP) needs to be able to get rid of the entire power consumption = N watts of heat.
    ==> TDP = power consumption


    (*) In physics, when you deal with a real-world problem and try to make a perfectly well-specified definition by academic standards, you soon realize it's not all that simple.
    For example, Intel does not even mention which workloads, and I know for a fact they don't mean so-called "power viruses", which is itself an iffy requirement. OTOH, AMD mentions this but fails to include the case temperature, which leaves the definition lacking... Traditionally, both IHVs give TDP in multiples of 5 or even 10, which obviously is not terribly sciency.
    Aka let's not be anal about it, when neither AMD nor Intel are :infinity:
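    The steady-state reasoning above can be illustrated with a lumped Newton-cooling toy model (all constants are assumed for illustration, not real GPU data): the die temperature climbs exactly until the cooler sheds as much heat per second as the chip draws.

    ```python
    def steady_state_temp(power_w, ambient_c=25.0, k_w_per_c=5.0,
                          heat_capacity_j_per_c=500.0, dt=0.1, steps=200000):
        """Lumped thermal model: dT/dt = (P - k*(T - T_amb)) / C.
        Integrates to equilibrium, where the cooler's dissipation
        k*(T - T_amb) equals the drawn power P."""
        t = ambient_c
        for _ in range(steps):
            t += (power_w - k_w_per_c * (t - ambient_c)) / heat_capacity_j_per_c * dt
        return t

    t_eq = steady_state_temp(400.0)
    dissipated_w = 5.0 * (t_eq - 25.0)   # heat shed by the cooler at equilibrium
    print(round(t_eq, 1), round(dissipated_w, 1))  # 105.0 400.0
    ```

    In other words: heat does not accumulate forever, so at steady state the cooler must remove every watt the GPU consumes, which is why a TDP rating and sustained power draw end up so close.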
     
    Last edited: Jul 19, 2017
  18. Noisiv

    Noisiv Ancient Guru

    Messages:
    6,657
    Likes Received:
    499
    GPU:
    2070 Super
    Thx but nah... me and Fox are fine :D
     
  19. sammarbella

    sammarbella Ancient Guru

    Messages:
    3,931
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    You presume AMD learnt something from the 480 power-gate. :)

    The RX Vega Gaming ASIC power and board power are 300W and 375W.

    In my book, 375 is more than 362 and not very far from 400... without any OC applied.
     
  20. Exascale

    Exascale Banned

    Messages:
    397
    Likes Received:
    8
    GPU:
    Gigabyte G1 1070
    They're pretty sciency about it when they do their EDA and verification, just not so much when selling to consumers. Their server and supercomputer documentation is very precise and very different from the consumer marketing material. On the consumer side, even Nvidia calls GDDR SGRAM "VRAM". VRAM hasn't actually been used since the turn of the century, and it is properly labeled GDDR SGRAM in their professional literature.
     

Share This Page