Single-Core Performance of Intel's Sunny Cove Chips Surfaces - Shows Big IPC Increase

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jun 17, 2019.

  1. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,040
    Likes Received:
    7,381
    GPU:
    GTX 1080ti
    Except, well, it's not "unnecessary".
     
    Fox2232 likes this.
  2. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Transistor counts, the benchmarks shown, and the clocks the cards run at: those are all known. Put them together yourself, as I did.
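
    A rough sketch of the arithmetic I mean, with placeholder numbers rather than any card's real figures:

    ```python
    # Back-of-envelope "performance per transistor per clock" ratio.
    # All numbers here are placeholders, not measurements.
    def perf_per_transistor_per_clock(rel_perf, transistors_b, clock_ghz):
        # rel_perf: relative benchmark score; transistors_b: billions; clock_ghz: average clock
        return rel_perf / (transistors_b * clock_ghz)

    # Hypothetical card A: score 100, 10B transistors, 1.8GHz.
    # Hypothetical card B: score 105, 12B transistors, 1.7GHz.
    a = perf_per_transistor_per_clock(100, 10.0, 1.8)
    b = perf_per_transistor_per_clock(105, 12.0, 1.7)
    print(f"A: {a:.3f}  B: {b:.3f}  A vs B: {a / b:.2f}x")
    ```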

    @Denial : Gaming. Almost nobody here cared about the ~30% higher compute performance of GCN cards with gaming performance comparable to nVidia's. Nobody cared about the absent FP16 on nVidia's side.
    People here judge gaming performance. Starting to care about total compute performance now would be more than hypocritical.

    When people did not care about power efficiency back when nVidia had the frying pans, and then all hell broke loose once nVidia had the upper hand... I could overlook it.
    When nobody cared about AMD's tessellation, and then it became important once nVidia had double AMD's performance there... I could call it coincidence.

    But both of us know that the occasions on which people "changed" their minds were many. And that it is a pattern.
    In reality, people did not really change their minds; they repeated the same behavioral pattern over and over again. nVidia set the course, and sooner or later people accepted it, followed it, and defended it.
     
    Goiur, carnivore, OnnA and 1 other person like this.
  3. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    Why not? None of the RTX games currently use the tensor cores for denoising. DLSS at this point is vaporware: out of the 20+ games announced six months ago there are like 5 out, and even in those there are all kinds of weird issues with it. DLSS 2x is non-existent in any form. The INT4/INT8 performance is basically useless for anything else. I like RT, but I couldn't care less about having AI cores on my GPU unless Nvidia turns it around, and so far it's pretty unnecessary.
     
    fantaskarsef likes this.
  4. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Would you say that, for Vega 64's performance per transistor per clock, we should subtract that ~1B transistors because AMD failed to enable most of the promised technologies and they sit unused anyway?
    The transistors are there. A bad investment is still an investment.

    And looking forward, do you think that nVidia will abandon the RTX series and remove those parts of the GPU?
    I personally think they will put them to use, and as they do, they will double them. Because only after they do can DX-R deliver true wins: games that run multiple DX-R effects at the same time with good performance at 1080p and above.

    They do not have much choice now, as all RTX owners would feel betrayed, having paid for a technology that lived very shortly and had no future.
    And they would risk AMD coming in 2020/21 with something that can run those effects well at the same time. Then the planned win would turn into a defeat.
    Maybe the 1st major defeat nVidia would ever score. And it would be purely of their own making.
    I do not think nVidia would do that. They are smart.
     
    Last edited: Jun 17, 2019
    airbud7 likes this.

  5. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,759
    Likes Received:
    9,650
    GPU:
    4090@H2O
    The main competitor for the 5700 XT is supposed to be the 2070, right? They have 10.3 vs 10.8 billion transistors, with the 5700 XT ahead by an average of 5-10% in the benchmarks shown on AMD's own slides.
    Produce it on the same node and that looks very different from what you claim. This time I'm not so easily vouching for AMD. And don't forget, a big part of the 2070's transistor count is "useless" RT and Tensor cores as well.
    The clocks do show a nice advantage for AMD, that's true: considerably lower clocks for comparable performance. But then why aren't they simply clocking higher to destroy Nvidia? Because it doesn't work that easily, or am I wrong here?
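
    Just as a quick sanity check on those numbers (taking the claimed advantage at face value; the 7.5% midpoint is my assumption, and clocks are left out):

    ```python
    # Perf per transistor from the numbers above. The 1.075 is just the
    # midpoint of the claimed 5-10% advantage, not a measurement.
    perf_5700xt, transistors_5700xt = 1.075, 10.3e9
    perf_2070, transistors_2070 = 1.000, 10.8e9
    ratio = (perf_5700xt / transistors_5700xt) / (perf_2070 / transistors_2070)
    print(f"5700 XT: ~{(ratio - 1) * 100:.0f}% more performance per transistor")  # ~13%
    ```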

    Don't get me wrong, AMD has some nice chips/cards in store for their segment, but we shall see if they really perform better than Nvidia at what those cards are: gaming cards.
    Turing on 7nm would probably still win over RDNA's 1st iteration on 7nm. Sad but true; that doesn't mean we all have to like it.

    And just to add, referring to your post above mine: the transistors are there, true. But then RT cores aren't a bad investment either, by your logic. :D
    Or they're both equally awful investments, unused compute power and unused Tensor/RT cores, if you don't use them.
     
    airbud7 likes this.
  6. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,040
    Likes Received:
    7,381
    GPU:
    GTX 1080ti
    They aren't just denoisers.
    Turing's entire FP16 capability is in those units, and more: https://arxiv.org/pdf/1803.04014.pdf
    https://devblogs.nvidia.com/tensor-cores-mixed-precision-scientific-computing/
     
    Last edited: Jun 17, 2019
  7. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    "RDNA's performance per transistor per clock is better than Turing"

    Please reread. How does a node change the transistor count? How does it change performance per clock?
    What is a node change supposed to change about that ratio I wrote about?
     
    airbud7 likes this.
  8. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,759
    Likes Received:
    9,650
    GPU:
    4090@H2O
    Performance. That's the question. And while you are right, it does not change anything about what you wrote. Although the point still stands: the question is how many of the 2070's transistors really do work in such a comparison. That might put some of it into perspective, but thinking about it, I think you might indeed be onto something there.
     
    airbud7 likes this.
  9. HWgeek

    HWgeek Guest

    Messages:
    441
    Likes Received:
    315
    GPU:
    Gigabyte 6200 Turbo Force @500/600 8x1p
    IMO we are gonna see many "leaks" like this one, and their main purpose is gonna be: "plz wait, don't buy Zen 2, we have a better CPU coming sooooon...".
     
    Webhiker likes this.
  10. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Read my response to Denial's argument, which you almost repeated. Be fair. Vega saw no kindness from me for its wasted transistors, as AMD did not enable those promised functions they incorporated.
    RTX is RTX and is going to be RTX going forward. Would you say that Turing had better performance per transistor per clock if it had 10 times as many unused tensor cores and the actual performance of the cards was halved, since most of the GPU would sit unused?

    It is either there or it is not. Both companies make various investments. Should we not count the HW media decoder because people mostly use software decoding? It is there.
    Should the investment towards a new HDMI/DP standard not count because people mostly do not have compatible displays?

    Ignoring the reality of those transistors' existence is a slippery edge. Either take those GPUs for what they are, or make excuses forever.
    That's why I was so critical towards Vega. I saw no reason to excuse AMD.
     
    Last edited: Jun 17, 2019
    airbud7 likes this.

  11. oxidized

    oxidized Master Guru

    Messages:
    234
    Likes Received:
    35
    GPU:
    GTX 1060 6G
    Fake, since they come from Intel. If they came from AMD, they would've been 100% true.
     
  12. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,759
    Likes Received:
    9,650
    GPU:
    4090@H2O
    Hey, I am fair. Read my response to your last post: I actually said you might be right, for what little I understand of the matter. Understanding what you mean, and how you reach the conclusion, is acknowledging what seems to be the truth to me. I already said I don't want to bash AMD. Not sure what more there is to add on my side besides what I already wrote. ;)
     
    airbud7 and Fox2232 like this.
  13. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    I know, hence why I said to replace them with dedicated FP16.

    Idk, I definitely acknowledged AMD had an advantage with compute, and I stated multiple times that AMD was doing a good job shifting game workloads in that direction (hence Kepler's demise). But I never looked at performance per transistor, and I feel like most people don't, mainly because until now we haven't had this issue of Nvidia's die sizes becoming comically large.

    I just feel like Nvidia is going to do a better job of incorporating their die features into standardized APIs and pushing games in that direction. DXR seems like it's going to be a big component going forward, and WinML will run on top of the tensor cores. Mesh shaders and variable rate shading are both being implemented into DX12 and look like good features that I'm sure AMD will support in some regard.
     
  14. mohiuddin

    mohiuddin Maha Guru

    Messages:
    1,007
    Likes Received:
    206
    GPU:
    GTX670 4gb ll RX480 8gb
    Reading the posts here for some time, I thought I was in some "GPU" related thread. Lol.
     
  15. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    Partially my fault, I'll admit.

    Idk, honestly there isn't much to say about this news, mostly because Intel plays games with the numbers. The last time Sunny Cove stuff leaked, they were comparing 2400 to 3733 memory speeds in an IPC comparison.
     

  16. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Unfortunately, that transistor count matters. I came up with this ratio quite some time ago. AMD was not really bad at it even when they had considerably weaker GPUs.
    Their GPUs simply clocked lower. It always showed the same thing: the potential for high-end GPUs.
    While AMD did not go for those 20B-transistor GPUs, nVidia hinted at even more in the past. And the moment they introduced the RTX series, I wrote something along the lines that they do what they always do. They shift the target in a direction that suits them:
    1st) something that AMD does not have
    2nd) something they can improve on for the next 10 years (with not-so-high fps in the meanwhile, as games could easily push heavier effects than GPUs can handle)

    And DX-R will require quite a few transistors, and improvements in delivered results per transistor per clock.

    While I did not like the introduced raytracing from the start, for those 2 reasons above, I did like all those other improvements, which people mostly ignored.
    (Not saying you did. I am sure we quite agreed on the importance of variable rate shading.)
    Sadly, DLSS has so far proven me wrong in terms of IQ vs. performance. But it was a good try, and there may be a better version in time.
    The rest will still play a big role going forward. And AMD will deliver them as well. (Except DLSS, which they will probably counter to their advantage with the new "universal sharpening" + special sharpening.)

    But back to that ratio. It tells us what we can expect from even bigger chips, as we can approximate how much lower such a chip will have to clock to stay within a certain power limit.
    I am sure that once nVidia goes to 7nm, they will go to 24B transistors. And that's how they'll spend the improved power efficiency. Very powerful GPUs indeed.
    But due to the investments towards DX-R, we are going to see worse performance per transistor per clock for traditional methods.
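
    As a very rough illustration of that approximation (assuming dynamic power scales with transistor count × frequency × voltage², and voltage tracks frequency; the chip sizes are just examples, not a prediction):

    ```python
    # Very rough clock estimate for a bigger chip at the same power budget.
    # Assumes P ~ N * f * V^2 and V ~ f, hence P ~ N * f^3. Example numbers only.
    n_old, f_old = 18.6e9, 1.8   # a TU102-sized chip (18.6B transistors) at 1.8GHz
    n_new = 24e9                 # the hypothetical 24B-transistor 7nm chip
    f_new = f_old * (n_old / n_new) ** (1 / 3)
    print(f"~{f_new:.2f} GHz at the same power")  # ~1.65GHz, before any node gains
    ```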

    And that's the main reason why I did not like raytracing: knowing that we do not have sufficient computational power on board, and that sacrifices would have to be made.
    If we ever manage to have fully raytraced modern games, it will make me happy. But for now, that's a siren's song.
     
    Embra, waltc3 and Denial like this.
  17. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,020
    Likes Received:
    4,398
    GPU:
    Asrock 7700XT
    I'm finding it hilarious how this is a topic about next-gen Intel products and nobody is talking about it. Sure says a lot about how much Intel has captured the interest of customers.

    More on-topic though:
    I'm more intrigued that this upcoming CPU isn't yet another "Lake". About damn time.
    Not that it matters what it's compared to, since IPC has hardly changed since then. If anything, it got worse.
     
    Noisiv and Keitosha like this.
  18. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Intel calls IPC "instructions per core", which translated into real tech terms means performance delivered at rated clocks.
    A Skylake chip like the i7-6700K has a boost clock of 4.2GHz. In their naming, that means the same chip clocked to 4.7GHz has ~12% increased "IPC" (4.7 / 4.2 ≈ 1.12), with no architectural change at all.
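
    To get real IPC out of such numbers, you have to divide the clock difference back out. A minimal sketch:

    ```python
    # Normalize a quoted performance gain by the clock ratio to get the
    # actual IPC gain. Example numbers match the Skylake case above.
    def real_ipc_gain(perf_ratio, clock_new_ghz, clock_old_ghz):
        return perf_ratio / (clock_new_ghz / clock_old_ghz)

    # A chip quoted as "12% faster", but clocked at 4.7 vs 4.2GHz:
    gain = real_ipc_gain(1.12, 4.7, 4.2)
    print(f"{(gain - 1) * 100:+.1f}% true IPC")  # ~+0.1%, i.e. basically none
    ```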

    If Intel had not perverted the term IPC, the results they showed would raise even my eyebrows. As it is, they leave me super bored.

    People say that Intel fakes their results and AMD does not. That's a misconception: Intel picks scenarios that fit them, the same way AMD does.
    That's why they compare against a discontinued generation. It simply clocks low out of the box and has a low officially supported memory clock.
    Pretty good marketing spin, which will work on many unsuspecting victims.
    = = = =
    As for the product: is there anyone who expects it to still be on DDR4? I expect DDR5 => new socket => new motherboards, expensive memory, ...
    Platform cost will not be nice.
     
    schmidtbag likes this.
  19. Andy Watson

    Andy Watson Master Guru

    Messages:
    304
    Likes Received:
    177
    GPU:
    960
    Nvidia teasing new Super cards and somebody "leaking" super Intel results.

    Must mean AMD is going to release some good products :D

    Always best for the consumer when three teams are close together and battling it out.

    Been a long time....
     
  20. iafro89

    iafro89 Member Guru

    Messages:
    139
    Likes Received:
    13
    GPU:
    7900XT
    Seems fake, but time will tell.
     

Share This Page