AMD Confirms It Will Launch Zen 3 This Year - We Haven't Seen the Best From AMD Just Yet

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 23, 2020.

  1. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
It would be funny if Intel were just playing with nodes and $$ like Nvidia, and could move to 7nm at will.

But they're stuck with their own foundries and lithography issues. That's not funny.
     
  2. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    10,780
    Likes Received:
    1,393
    GPU:
    黃仁勳 stole my 4090
I don't understand why you would use an R20 score to show off, on a chip most people won't get such an OC on, and which isn't even impressive for what it costs... especially as an entire setup. My 3900X matches that at stock, with non-aggro baby settings like the balanced AMD power profile, where it drops to under 4GHz due to heat with my setup. It blows it out of the water with any small OC: a 7500 score at a joke 4.2GHz that isn't even worth using.

What's holding back AMD's gaming performance is:
1 - Latency. Everyone knows this, and Zen 3 more than likely does enough to remedy it.
2 - Frequency, of course, despite having higher IPC than Intel.
3 - Games still use very few threads to this day. Nearly all of them (and all Japanese ones) dump the majority of the load on a single core, while the other cores barely do anything in comparison, or do nothing at all.
4 - Stupid, buggy resource management on the software side, which for me is an enormous hit, as I've discovered.

I did various tests a little while ago at a BS 720p resolution (which no one uses, and which only exists in benchmarks for Intel) to see just how limiting a 3900X can be, even though that doesn't necessarily scale, and certainly not linearly. The result? I discovered 2 critical things:
    -------------------------------------------------------------------------------------------------------------------------------------------
1 - With the latest drivers (which include updated power profiles), Windows 10 update 2004, and the Asus X570 BIOS ("v2" AGESA 1.0.0.2), I'm getting SIGNIFICANTLY higher FPS than in the past in every DX11 title across the board in those artificial situations; their loads are now being distributed across more cores, as long as I use one of the two AMD power profiles. In the past the AMD profiles resulted in lower performance. I'm talking about performance increases of 2x in frame rate, and giant ones even in real-scenario tests.

I don't know if it's mostly down to any one of the things I mentioned (a big change in the Win 10 2004 update, drivers of any sort, the updated power profiles, or new behaviour from the new AGESA), but this is the biggest performance increase I have ever seen from an update. And no, it isn't a mistake or a bug or something wrong on my rig's end; all DX11 games on every rig I've ever tried had most of their load bound to just a few threads. Now every one of those titles, with no game updates, distributes the load across many threads despite not originally being coded to do so. This has to be some sort of translation of the DX11 calls, or resource management on Ryzen was bugged to the moon and beyond so hard that it locked itself to fewer threads in all DX11 situations until recently. DX11 titles don't magically start using more threads without updates. When/how did this happen, and how is it not big news?
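For anyone who wants to check this on their own rig: sample per-core utilization while the game runs (Task Manager, or a tool like psutil) and look at what share of the total the busiest core carries. A minimal sketch of that check; the sample numbers below are purely illustrative, not from my tests:

```python
def busiest_core_share(per_core_percent):
    """Fraction of the total CPU load carried by the single busiest core."""
    total = sum(per_core_percent)
    if total == 0:
        return 0.0
    return max(per_core_percent) / total

# Old behaviour: one core pinned, the rest idling.
old_sample = [95, 8, 6, 5, 4, 3, 2, 2, 1, 1, 1, 1]
# New behaviour: a similar total load spread across many cores.
new_sample = [40, 35, 30, 28, 25, 22, 20, 18, 15, 12, 10, 8]

print(round(busiest_core_share(old_sample), 2))  # -> 0.74, heavily single-threaded
print(round(busiest_core_share(new_sample), 2))  # -> 0.15, nicely distributed
```

A share anywhere near 1/core-count means the load is genuinely spread out; a share like 0.7+ is the classic "one core does everything" DX11 pattern I'm describing.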

The example I always use of crap DX11 coding: Nier Automata, one of the worst coded games in history, which SE never patched because a fan program addresses the obvious issues (the resolution setting not working, the frame lock, the lack of control over the GPU-sucking GI, etc), despite the fact that the program itself has stability issues and exhibits malware-like behaviour for its piracy checks. The turd coder and his fanboys would parade around saying "pIrAtEs aLwAyS fEeL eNtItLeD" when it was his garbage program that resulted in SE never bothering to patch a game with the most issues I have ever seen in a mainstream title, and leaving in Denuvo, which we now know was lied about and shits on stability and performance. The program led to higher piracy by leading to the game never being fixed, then targets pirates.

The game was HARD bound to a single thread, with 2-3 others doing almost nothing in any situation or on any rig I ever tested it on. So much so that I used a Vulkan wrapper to get INSANE frame rate increases in CPU-bound areas at 1440p, with settings that are GPU-bound most other places, and far fewer stutters/better frame pacing. Translating the DX11 calls to Vulkan, and paying the overhead for that, gave me massive performance increases because of how hard the game was limited by 1 thread. It took me from as low as 80 fps to my frame cap of 143. Now? The wrapper doesn't work as well as before: lower FPS, and hitches that make it intolerable... meanwhile without it, that 80 fps area hits the 143 fps cap as well, clearly loads more cores despite never behaving like that before, and despite still being heavily bound by 1 thread it has far smoother frame pacing and on average holds higher FPS than when translated to Vulkan.

    That experience holds true in other DX11 titles; magically more cores being loaded.
    -------------------------------------------------------------------------------------------------------------------------------------------

2 - Ryzen's resource management is still utterly broken and has a serious bug, at least for me. I've found that I get eye-watering increases in artificial CPU-bound scenarios by using only 1 thread from CCX1; either thread of core 3 gives massively better results, but 1 CCX1 thread MUST be used. CCX1 is my 2nd fastest CCX, almost as fast as my fastest, and this happens regardless of what speed I set each CCX to, so it has to be a bug. Again, 1 of the 2 AMD power profiles must be used, or it craps itself and gets the worst possible performance.

DX11 example: in Trials of Mana's in-game menu, where all 3 characters are displayed staring you down, I'd get ~330 fps very consistently post the magical MOAR-THREADS-IN-DX11 change (at above 720p, because I have the scaling at 120% or something). I tried locking it to CCX1 & 2, the fastest, to avoid cross-CCD signals, aaand... not much faster. However, any combination of 1 thread from CCX1 + any 5 cores outside of CCX1 takes it from 330 fps to over 500. Core 3 + CCX2 & 4 gives the most consistent frame rate, which sits at ~550. Add a 2nd thread from CCX1 to that? Bam, back down to 330 fps. Remove that 1 allowed CCX1 thread? Bam, 330 fps again.

That exact behaviour happens in every DX11 title I tried, even if I manually set CCX1 to 4350MHz and crank the others down to crap: using more than 1 thread in CCX1 gives relatively ass performance, and not using that 1 thread yields the same result. Using a non-AMD profile is even worse, regardless of which cores are assigned. I should make it clear that "relatively ass performance" is still always a galaxy beyond 144 fps.
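If anyone wants to replicate these affinity combos without clicking through Task Manager every time: Windows can launch a process pinned to specific logical CPUs with `start /affinity <hexmask>`. A small sketch for building the mask, assuming the usual Windows enumeration where the SMT sibling threads of physical core N are logical CPUs 2N and 2N+1 (check your own layout in Task Manager before trusting it):

```python
def affinity_mask(logical_cpus):
    """Build the hex affinity mask for `start /affinity` from logical CPU indices."""
    mask = 0
    for cpu in logical_cpus:
        mask |= 1 << cpu
    return hex(mask)

def core_threads(core):
    """Both SMT logical CPUs of physical core N (assuming the 2N/2N+1 layout)."""
    return [2 * core, 2 * core + 1]

# Example combo: one SMT thread of core 3, plus both threads of cores 6-11.
cpus = [core_threads(3)[0]] + [t for c in range(6, 12) for t in core_threads(c)]
print(affinity_mask(cpus))  # -> 0xfff040
```

Then from cmd: `start /affinity fff040 game.exe` (the command takes the mask without the `0x` prefix).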

To add a cherry to the turd cake, I'm now getting noticeably higher fps in those scenarios at "stock" settings (my fabric/MC/RAM are always set to 1900MHz), which show frequencies averaging diddly squat in games, than if I OC to the max I can get at 1.35v VDDCR/core voltage in the BIOS, which shows up as 1.1v in Ryzen Master (I cannot find non-contradictory explanations of those 2 different core voltages; the per-CCX voltage control in my BIOS governs the 2nd one), at per-CCX speeds of 4350MHz / 4375MHz / 4200MHz (LOL) / 4275MHz. This is regardless of what combination of cores I use.

So either the frequencies used at stock are not being properly displayed, or there is another bug while OC'd that gives poor performance. I don't know if this issue is specific to me, or whether others can replicate it. I don't know anyone who isn't too lazy to actually test it.

    And now I realize I have to stop because it's been 142,000 pages and no one is reading this so TL;DR:

1 - Zen 1-2's behaviour in software is buggy as hell, and it's holding them back. Someone please run the same tests I did and let me know your results.

2 - Old benchmarks of DX11 titles are now likely deceptive, if my experience on all the rigs I tried holds true, because of a massive recent change. Someone please tell me you know what's going on?

3 - Games will use more threads in the future with the PS5 & XBSX being released, unless developers intentionally limit them like on the current gen, where games often use fewer than the 7 Jaguar cores available to them on current consoles, even though they MUST be using those cores, because anything else is impossible considering the results.

4 - Zen 3 is coming for Intel's ass harder than most believe, if any of the rumours are true. A frequency increase + better-utilized threads + an IPC increase + a latency decrease, bolstered by much faster memory support + better software & firmware support = I don't believe Intel's 14nm+++++++++++++++lol++++++ chips can compete even in games.


    I'm sure there are more broken sentences and typos than I will ever fix in this.
     
    mohiuddin, moo100times and Fox2232 like this.
  3. nizzen

    nizzen Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,149
    GPU:
    3x3090/3060ti/2080t
Post a picture of the results, and make the forum better :)
     
  4. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    10,780
    Likes Received:
    1,393
    GPU:
    黃仁勳 stole my 4090
    Yeah I'm going to repeat all those tests across multiple games, with numerous combinations each, just because you're drinking the Intel Kool-Aid and are grasping at straws, brb with pics... :rolleyes: Or do you mean Cinebench R20? LOLLL

    Just for you, I won't post anything, I'll try to avoid unintentionally posting anything in other threads as well.

    Jesus Christ, don't believe me, keep buying Intel forever. I just made up that entire thing, that entire wall of text, every word, just for you. Intel good, AMD bad.
     
    Last edited: Jul 24, 2020
    Aura89 and moo100times like this.

  5. nizzen

    nizzen Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,149
    GPU:
    3x3090/3060ti/2080t
    I love AMD too ;)

    Still using Threadripper 1950x and 3900x.

    Looks like I triggered something? This is the Internet, so picture or it did not happen ;)
     
  6. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    10,780
    Likes Received:
    1,393
    GPU:
    黃仁勳 stole my 4090
No. I made it all up, especially the R20 numbers, which can be looked up since a billion people have posted theirs; those are the most made up. Buy Intel forever.

Also, congrats on being the 2nd person I've ever blocked; your wisdom of "Intel good, AMD bad" is too glorious for my lying mind to handle.
     
    icedman likes this.
  7. RavenMaster

    RavenMaster Maha Guru

    Messages:
    1,356
    Likes Received:
    250
    GPU:
    1x RTX 3080 FE
    I'm waiting for a chipset that supports DDR5 RAM before I upgrade my CPU.
     
  8. TLD LARS

    TLD LARS Master Guru

    Messages:
    766
    Likes Received:
    362
    GPU:
    AMD 6900XT
Are you sure that thing is stable in Cinebench or other all-core workloads?
This is the first time I have seen a 10900K above 5400MHz without ice cubes or dry ice in the water-cooling loop, or suicide burst runs.

The score is only a little above a stock 3900X (Guru3D results), so I think the 10900K is throttling in this benchmark.
On another note, a 5600MHz 10900K would draw around 500W, so the cooling solution must be a custom loop with at least a 1000W power supply.
That's enough to buy a 3900X, an X570 motherboard, and 16GB of memory for the same price as the cooling solution alone, I'm guessing.

So the discussion about who is beating whom is a bit weird when the comparison is between something custom-built for 2-3 times the price, and an off-the-shelf setup that is 2-10% slower and uses half the power.
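That power ballpark follows from the usual dynamic-power scaling, P ≈ C·V²·f. A minimal sketch with purely illustrative, assumed figures (~250W all-core at 4.9GHz/1.25v, pushed to 5.6GHz at the ~1.4v such clocks tend to need); real chips add leakage on top at high voltage, so actual draw runs higher than this lower bound:

```python
def scaled_power(p_base, f_base, v_base, f_new, v_new):
    """Dynamic CPU power scaling: P ~ V^2 * f, capacitance assumed constant."""
    return p_base * (f_new / f_base) * (v_new / v_base) ** 2

# Assumed baseline: ~250W all-core at 4.9GHz and 1.25v, pushed to 5.6GHz/1.40v.
print(round(scaled_power(250, 4.9, 1.25, 5.6, 1.40)))  # -> 358
```

So even before leakage, a modest voltage bump already adds well over 100W, which is why estimates for a 5.6GHz 10900K land in custom-loop territory.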
     
    carnivore and Neo Cyrus like this.
  9. nizzen

    nizzen Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,149
    GPU:
    3x3090/3060ti/2080t
It's direct-die cooling, and yes, it's stable in Cinebench. I just posted a CB result because many here on this forum LOVE Cinebench :D

This CPU is for gaming only, nothing more, so if it's stable in Battlefield games, it's stable enough for me. I looped Cinebench R20 for 10 minutes and it didn't crash. I'll try Asus Realbench eventually.

I'm using a 1x360 radiator. The Supercool Computer direct-die block is the key here. It's crazy good!


My best 3900x CB 20 score is this: [screenshot]

It's faster than the 10900K in CB 20, but it's WAY slower, 20-30%, in CPU-bound games, both "max" overclocked. The main difference is latency, and the 10900K is running tweaked 4700C17 memory :D
     
    Last edited: Jul 24, 2020
  10. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    10,780
    Likes Received:
    1,393
    GPU:
    黃仁勳 stole my 4090
He knows. He thinks he's being clever, asking for something he could easily look up while ignoring 100% of the point, knowing people don't keep screenshots of everything they do and that I'm not going to waste any time on him. Ignore and move on. And for the record, my 3900X scores over 7200 stock; the Guru3D result was with slower RAM/Fabric/MC.
     

  11. JamesSneed

    JamesSneed Ancient Guru

    Messages:
    1,690
    Likes Received:
    960
    GPU:
    GTX 1070
    This news hurts even more now that Intel has delayed 7nm.
     
  12. mikeysg

    mikeysg Ancient Guru

    Messages:
    3,289
    Likes Received:
    741
    GPU:
    MERC310 RX 7900 XTX
I'm dying to see what Zen 3 is capable of, especially the 12C/24T and 16C/32T parts. I might just snag one to replace my 3900X (which would be set aside; I'm thinking of an mITX build for it). For me to jump to Zen 3, the performance uplift MUST be at least 20% over my 3900X (for a Zen 3 12C/24T) before I'd consider it a worthy upgrade. IF I do go with a 4900X?/4950X?, I'd take my time with the mITX build, because I'd also be upgrading to Big Navi, and that's way too much to spend on hardware in one go (plus I'm looking to get a Tudor 1926 watch as well, so I'm going to blow my budget).
     
    DeskStar likes this.
  13. DeskStar

    DeskStar Guest

    Messages:
    1,307
    Likes Received:
    229
    GPU:
    EVGA 3080Ti/3090FTW
I beat that on my 3900X. Yes, it has two more cores, but that just shows clock speeds aren't that much of a consideration when IPC gains matter the most.

And this isn't a gaming benchmark. Not even close to a comparable one at that.
     
  14. DeskStar

    DeskStar Guest

    Messages:
    1,307
    Likes Received:
    229
    GPU:
    EVGA 3080Ti/3090FTW
Guess it's all in one's perception.
     
  15. DeskStar

    DeskStar Guest

    Messages:
    1,307
    Likes Received:
    229
    GPU:
    EVGA 3080Ti/3090FTW
Budget...... It sounds good when you say it, but it never reflects properly on paper! HAHAHAHAHA.

(Looks at wife) "What, baby....?!? Must have been the shipping that made it so expensive..."
     
    mikeysg likes this.

  16. nizzen

    nizzen Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,149
    GPU:
    3x3090/3060ti/2080t
Is it only a few fps at 1080p with a 2080 Ti?
If anyone cares to compare in Tomb Raider ;)

[screenshots: result1.jpg, graphics1.jpg, display1.jpg]
     
