Crytek employee says PlayStation 5 will win, Xbox Series X has bottlenecks

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Apr 7, 2020.

  1. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    Yeah - both Microsoft and Sony have excellent engineers who definitely know what they are doing. I'm sure the explanations they give after the fact are sometimes marketing-based -- for example with Sony, I'm like 95% positive they went with a lower CU count and higher clocks for price reasons - but they aren't going to say that, so instead they'll shift the marketing to areas where they have wins, like storage speed, and talk about all the benefits that brings... or they'll get Crytek engineers to talk about how clocks can sometimes be better than more CUs, lol... or whatever spin they come up with.

    But my point is that they definitely made an engineering trade-off. It's definitely not "Sony made a bet on AMD failing to improve the GPU". Mark Cerny didn't sit there and go "heheh, I bet $20 AMD can't even improve its next-generation GPU, so I'm just going to clock the crap out of it". He was 100% fully briefed on the architecture, its limitations, its performance characteristics, etc., and made an engineering trade-off decision - again, most likely based on cost.

    And I never really cared for those positions. In the other thread he was essentially making the argument that Nvidia delayed its next-generation GPUs because they opened up the Xbox and the performance density of AMD's chip surprised them. I just don't buy that at all. They know TSMC's process specifications. They know AMD's architecture roughly, way better than anyone on this forum. They know where AMD is going to take that architecture way better than anyone on this forum. They can probably literally simulate it on one of their Cadence machines. The idea that Nvidia was totally blindsided by what essentially looks like a pretty normal performance/density jump, with no new marquee features, is just amusing to me. Multiple people on this forum and other forums predicted the size and rough performance of what we could expect out of the Xbox 6-8 months ago, but somehow all the Nvidia engineers were surprised? So they delayed their architecture by ~2-4 months to do what, exactly? And now Sony has built an entire console on the architecture, including supposedly customizing a CU for sound processing, but they made a bet on whether AMD would improve the GPU or not? It just seems like such a weird argument to make.
     
    Last edited: Apr 9, 2020
    anxious_f0x likes this.
  2. Ghosty

    Ghosty Ancient Guru

    Messages:
    7,962
    Likes Received:
    1,177
    GPU:
    RTX 3050
    This video might interest you then. See if you can spot how well (or rather how badly) Minecraft runs with ray tracing enabled.

     
  3. Loobyluggs

    Loobyluggs Ancient Guru

    Messages:
    5,221
    Likes Received:
    1,589
    GPU:
    RTX 3060 12GB
    I think you'll find the only point I made, before the semantic dragon left the cave and started breathing fire, was that Sony is an electronics company and Microsoft is not.

    Apologies if that was not made clear.
     
  4. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    10,780
    Likes Received:
    1,393
    GPU:
    黃仁勳 (Jensen Huang) stole my 4090
    Where do I begin with that? Everything you're saying and implying is just bullshit. I don't know if you've genuinely lost the plot or you're just being coy; if you don't understand what I was saying, that's your problem. Go ahead and pretend that's what it meant. No one cares.
     

  5. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    First, thanks for not using quote or @ so that I would be notified. It's also clear that you have not looked at the data yourself. There are transistor-count numbers floating around; it's not that I don't know them, it's that they don't add up. If you visit the TPU database, you'll get a figure that they list as the GPU transistor count but pair with the entire chip's area.
    If you had used quotes, quoting me would have proven you wrong. (And since then they have even reduced the GPU transistor count by a mere 600M, which can't possibly account for the Zen 2 chiplet, nor the I/O.)

    So, tell me: how many transistors does the RDNA2 hybrid in the XSX have, and at what area? How high is the boost clock that, according to Sony, does not result in a crash?
    If you even attempt to find out, you'll understand a few things. As far as Renoir goes, the transistors/area number is just speculation for now, and you use it as fact, while I clearly say so when the numbers don't add up. On top of that, Renoir does not even use full RDNA1, let alone anything from RDNA2. And maybe it really is that dense, but that's not the point, because it is not RDNA2 clocking above 2.2 GHz on every PS5 sample without fail.

    And you ignored the fact that, if the transistor count is taken seriously, the XSX would have 43% more transistors than the RX 5700 XT while adding 40% more CUs, which are now able to do DXR + INT8/4.
    When RDNA1 arrived, it had a considerably better TMU/ROP count per transistor than RTX Turing, and there was a (reasonably valid) argument that RTX carries specialized HW for some things. But RDNA1 had a slightly better ratio even against GTX Turing, which had some of that specialized HW removed.
    RDNA2 adds that missing HW capability and still keeps almost the same building-blocks-per-transistor advantage.

    If you compare RDNA2 to RDNA1, you may arrive at something like a Navi 10 replica in terms of ROP/TMU count:
    50% higher power efficiency (AMD's claim for RDNA2 over RDNA1)
    15% higher stable clock for all GPUs (PS5's official 2.2 GHz+)
    5% more transistors (XSX transistor count relative to its building blocks, ignoring the 20% larger cache and other details)
    Such an RX 6700 XT would be a 165 W TDP card with 15% higher clock potential than the RX 5700 XT, only 5% more transistors, and every feature RTX Turing has. (Btw, I think that while RX 5700 XT is a nice name, RX 6700 XT is ugly.)
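
    A quick back-of-the-envelope check of that hypothetical card, in Python. The scaling factors are the three percentages above; the RX 5700 XT baseline figures (225 W, 1905 MHz boost, 10.3B transistors) are its public specs, and performance is simply assumed to scale with clock - a sketch, not a prediction:

    ```python
    # Hypothetical "Navi 10 replica on RDNA2", scaled from the RX 5700 XT.
    # Scaling factors are the claims from the post above, not measurements.
    baseline_tdp_w = 225            # RX 5700 XT board power
    baseline_boost_mhz = 1905       # RX 5700 XT rated boost clock
    baseline_transistors_b = 10.3   # Navi 10 transistor count, billions

    perf_per_watt_gain = 1.50       # AMD's RDNA2-over-RDNA1 efficiency claim
    clock_gain = 1.15               # "15% higher stable clock"
    transistor_gain = 1.05          # "5% more transistors"

    perf_gain = clock_gain          # assume perf ~ clock at a fixed CU/ROP/TMU count
    est_tdp_w = baseline_tdp_w * perf_gain / perf_per_watt_gain
    est_boost_mhz = baseline_boost_mhz * clock_gain
    est_transistors_b = baseline_transistors_b * transistor_gain

    print(f"~{est_tdp_w:.0f} W board power")         # ~173 W, near the 165 W guess above
    print(f"~{est_boost_mhz:.0f} MHz boost clock")   # ~2191 MHz, i.e. the PS5-class 2.2 GHz range
    print(f"~{est_transistors_b:.1f}B transistors")  # ~10.8B
    ```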

    Last time I checked, nVidia does not reduce the transistor count per building block much or often, and it has to add those transistor-costly blocks for AI/raytracing/... to increase performance.
    Do you want nVidia's next RTX card to have 50% higher raytracing performance than the RTX 2080 Ti? Well, you'll need 50% more blocks handling it, as a higher clock on that much beefier a GPU is unlikely. And those transistors will do absolutely nothing for traditional rendering, which is what 95% or more of new games use.

    As I wrote before, nVidia's approach of specialized units for DXR is better for the time when you want only basic rasterization and calculate every single pixel on screen via rays; nVidia has independent control over how much of each resource it includes.
    AMD's control is more limited, as the DXR features are tied to the TMUs and done via shaders. But their patents show that they can make CUs with more TMUs per shader cluster. (At the cost of FP32 performance in such a CU, which would be 1/2 of what current RDNA1 has. But that's speculation on my part, and maybe they already have a way around it.)
     
  6. moo100times

    moo100times Master Guru

    Messages:
    566
    Likes Received:
    323
    GPU:
    295x2 @ stock
    I appreciate what is being discussed above. I do wonder if the difference in console design between systems is based on the long term objectives of each company.

    Microsoft has been pushing for consoles to become more of an all-in-one space for both media and gaming for some time, more so than Sony - and between generations the PS3 actually had more media functionality than the PS4. Having used a PS4, it's already a bit clunky for media, and needing to drop out of gaming entirely to run other functions on the console is somewhat annoying in this day and age. I love(d) Sony's stuff for many years, but I feel they are failing to evolve their core market, and they have a habit of doing this with loads of their other products over the years - particularly pushing their own proprietary software and designs against new mainstream standards, stubbornly and to their detriment. It's why I stopped using their media devices, their phones and their computer hardware (they ended up selling off their laptop division due to reduced sales, so I can't have been alone in thinking this). Least revolutionary of the three console manufacturers by far, imo, though if they have the best gaming experience they might keep their crown.

    Having two RAM pools makes me think MS may want two functions to run simultaneously and continuously while the console is in operation. I feel Astyanax's suggestion that it's for the hypervisor might be on the money; if not, then MS may use it to improve the full media experience and offer a more "complete" gamer/streamer media overlay compared to the PS5. Gamer "community" features are now sought after for longer-term business success and market protection - even Google Stadia is designed with this heavy integration in mind.
    As pointed out above, if this makes it harder to bring games to the XSX then it will be to MS's detriment in the long run, but they have deep pockets to help facilitate this and have probably had it in mind for a while.

    Raytracing on both still feels like a necessary gimmick inclusion though.
     
    Loobyluggs likes this.
  7. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    Again, you're missing the point... I'm not arguing with you about what the chip is or isn't - I honestly agree with you on most of it. I'm arguing about the claim that Nvidia couldn't predict any of this (read: surprised by the Xbox, delayed their own architecture by 2-4 months according to rumor), and about the fact that in this thread you said Sony didn't understand the architecture (read: surprised by it, figured it would be the same as RDNA1). My problem with your posts isn't that you're wrong about the architecture; it's the premise behind them that I disagree with. Sony 100% knew what RDNA2 was capable of when it made the decision to clock higher instead of going wider. Nvidia 100% knows what AMD can get away with density-wise while adding RT/INT8/INT4 - the last two of which AMD already did with 7nm Vega. None of these companies are surprised by, or misunderstanding, AMD's architecture.

    Few other things:

    It's not speculation; it's from AMD's own slide at the Ryzen Mobile Tech Day. I wrote in my post that it uses Vega. The point was to show two things: 1) the density of the XSX SoC isn't outside the realm of normal - in fact it's not even that dense (my point being that Nvidia isn't sitting there going "wowee, how'd they get such density!!1!1") -- also, the lack of density is probably part of the reason why it can clock so high; 2) SoCs have higher than normal chip density anyway (I can speculate as to why, but I'm lazy), so it isn't great to compare them to, say, a 5700 XT.

    Renoir has half the L3 cache - I'm going to assume the Xbox SoC does the same (which would be fewer transistors than using a normal 8-core desktop Ryzen as the basis for the transistor count).

    On the Xbox SoC the CPU and GPU share the memory interface to the GDDR6 - which means you're removing some number of transistors versus comparing it to a discrete GPU and CPU separately. Presumably they also share other parts - again, all of which would lead to a lower transistor count than just adding up two separate chips.

    I think I said above that 7nm Vega has INT8/INT4 - it's about 1.1B more transistors, but it also has double the memory bus width (IIRC). I'd look into that.

    I'm not saying this proves you wrong or right, I'm just giving more info - fuel for thought.
     
    Last edited: Apr 10, 2020
  8. Dr.Puschkin

    Dr.Puschkin Member Guru

    Messages:
    172
    Likes Received:
    16
    GPU:
    RTX 3080Ti Suprim
    Yes, I read the whole thing; my reply isn't just a link now, is it? Not my problem that devs choose to engage with and circulate on such forums.
     
  9. CPC_RedDawn

    CPC_RedDawn Ancient Guru

    Messages:
    10,413
    Likes Received:
    3,079
    GPU:
    PNY RTX4090
    Are you crazy? With this sort of tech, you think devs won't use it for massive amounts of data streaming for insanely huge open worlds?

    Having this sort of SSD will be a massive upgrade over current NVMe standards. I'm pretty sure current drives only have something like 4 or 5 channels to the flash, while the PS5 has 12. Then, if they compress the data, they can reach speeds of 9 GB/s - crazy numbers that also aren't bottlenecked by a low channel count. Then couple this with a dedicated data decoder on the SSD's I/O controller, which Cerny said is basically another Zen core dedicated solely to data reconstruction.
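
    To put rough numbers on that, here's a tiny sketch of how compression multiplies the effective streaming rate, assuming the dedicated decompressor keeps up. The raw figures are the published SSD specs for each console; the ratios are just what is needed to reach the compressed numbers being thrown around in this thread:

    ```python
    # Effective streaming rate = raw SSD bandwidth x compression ratio,
    # assuming hardware decompression is never the bottleneck.
    def effective_rate(raw_gb_per_s: float, compression_ratio: float) -> float:
        """GB/s of usable game data delivered after decompression."""
        return raw_gb_per_s * compression_ratio

    PS5_RAW = 5.5   # published raw bandwidth, GB/s
    XSX_RAW = 2.4   # published raw bandwidth, GB/s

    print(effective_rate(PS5_RAW, 1.64))  # ~9 GB/s  (typical Kraken-style ratio)
    print(effective_rate(PS5_RAW, 4.0))   # ~22 GB/s (the best-case figure also quoted in this thread)
    print(effective_rate(XSX_RAW, 2.0))   # ~4.8 GB/s (the "~5 GB/s" figure in this thread)
    ```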

    This will allow for some insane game worlds and detail levels that the Xbox Series X could only ever dream of.

    Check out this demo that was secretly filmed last year:

     
  10. richto

    richto Guest

    Messages:
    114
    Likes Received:
    11
    GPU:
    2 x 7900GX2 GTX DUOs in Quad SLi
    Compared to what? The Hyper-V-based hypervisor is one of the lowest-overhead, most performant hypervisors on the market.
     

  11. richto

    richto Guest

    Messages:
    114
    Likes Received:
    11
    GPU:
    2 x 7900GX2 GTX DUOs in Quad SLi
    ~5 GB/s vs ~8 GB/s really isn't going to make much difference against the XBSX's roughly 20% advantage in GPU, CPU and GPU memory bandwidth. Both are more than fast enough for real-time texture streaming during games.

    The XBSX also has a GPU-optimised memory architecture: 10 GB of very high bandwidth memory primarily for GPU use at 560 GB/s (25% higher than the PS5's 448 GB/s), and 6 GB of lower-bandwidth 336 GB/s memory primarily for the OS / game code. That's a capacity-weighted average of 476 GB/s, so faster than the PS5 overall even if you average it - and the fast pool is where it matters most for GPU performance.
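
    For reference, the capacity-weighted average falls out of the published pool sizes and bandwidths like this:

    ```python
    # Capacity-weighted average bandwidth across the XSX's two memory pools.
    pools = [
        (10, 560),  # 10 GB "GPU optimal" pool at 560 GB/s
        (6, 336),   # 6 GB "standard" pool at 336 GB/s
    ]
    total_gb = sum(size for size, _ in pools)
    weighted_avg = sum(size * bw for size, bw in pools) / total_gb
    print(f"{weighted_avg:.0f} GB/s")  # 476 GB/s, vs. the PS5's uniform 448 GB/s
    ```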

    Also, we know that the XBSX can run at close to maximum performance continually, whereas the PS5 has a limited power/thermal budget and has to run at variable frequencies, sharing that budget between the CPU and GPU.

    The demos we have seen so far already show that the XBSX is extremely powerful. Sony didn't have anything close to this on show (switch it to 4K!):



    Faster NVMe disk performance is very unlikely to compensate for lower all-round performance otherwise, imo. Much like the Xbox One X vs the PS4 Pro, you are probably going to get higher resolutions and/or frame rates on the XBSX than on the PS5.
     
    Last edited: Jul 6, 2020
  12. richto

    richto Guest

    Messages:
    114
    Likes Received:
    11
    GPU:
    2 x 7900GX2 GTX DUOs in Quad SLi
    Xbox OS is based on an optimised Windows 10 kernel. So the same but faster.
     
  13. Serotonin

    Serotonin Ancient Guru

    Messages:
    4,578
    Likes Received:
    2,036
    GPU:
    Asus RTX 4080 16GB
    I don't know how being better in terms of raw power means it will win. Xbox looks to have quite a few more exclusives this coming generation, having purchased several studios. Also, xCloud and Game Pass are game changers. Sony's Remote Play has been abysmal, and they have yet to come up with an answer to Game Pass. MS also offers Game Pass and xCloud for PC users, which means more revenue. Seems like a dumb statement. Also confusing, because everything I have read about the PS5 says it's pretty much impossible for it to run at its peak clocks for long. So it's odd to me that developers are bashing one system this early when they need both to succeed in order to sell their games. I plan on getting a PS5, so I'm not being a fanboy. I'll be the first to say the PS4 was the winner this gen and the Xbox One was basically a weak PC with very few worthwhile exclusives. Same with the 360, honestly.

    But when you're in the business of selling games on any machine that plays them, openly putting one down is just... dumb. You want to stay in favor with both companies and you want gamers to purchase your games... It is Crytek, though, so I shouldn't be surprised. They had a lot to do with PC-exclusive AAA titles coming to an end after crying piracy over Crysis, which was nothing more than a pretty tech demo. Epic is right behind them in ensuring the PC was almost dead in the late 2000s. UT3 sucked, but supposedly it was piracy that ruined the game (which also sold horribly on the PS3, though that always failed to get mentioned). So given these are the two companies pushing the PS5... I'll wait for an unbiased dev's opinion.
     
  14. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,014
    Likes Received:
    7,353
    GPU:
    GTX 1080ti
    Lol, you're funny.
     
  15. XP-200

    XP-200 Ancient Guru

    Messages:
    6,394
    Likes Received:
    1,770
    GPU:
    MSI Radeon RX 6400
    Crytek employee says they can't make a simple remaster of their own game without messing it up. :p
     

  16. CPC_RedDawn

    CPC_RedDawn Ancient Guru

    Messages:
    10,413
    Likes Received:
    3,079
    GPU:
    PNY RTX4090
    5 GB/s vs 9 GB/s

    and sometimes 5 GB/s vs 22 GB/s

    The difference can be huge if the data is properly compressed. The PS5 also has dedicated hardware decompression, meaning the rest of the system is not affected by that task.

    It's all well and good having more memory bandwidth like the XBSX has, but if they can't get the data into those pools quickly enough from storage, there is going to be a big impact.

    Also, the XBSX has two memory pools with different speeds. This will create nothing but headaches for developers, who consistently complained about having to manage the eDRAM in the 360 and the ESRAM in the XB1, and who also complained about the PS3's split memory pools. It led to devs begging MS to bump the 360's RAM from 256 MB to 512 MB.

    Not to forget, the XBSX also offers different CPU clocks and even different thread counts depending on which clock speed is used.

    The Series X sure is powerful, but it has too many hurdles for developers to overcome. The PS5 has much faster storage, one fast memory pool, locked CPU clocks and a locked thread count. It seems it's going to be much easier to max out the PS5 and get the most out of the system, leading to shorter development times and better-looking games out of the gate.

    Also, the video you showed was said by the dev himself to be running on PC hardware, as that game (the first chapter of an episodic title, created by one man) is already out on PC.
     
  17. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Can people get over it? Scenarios in which you need to flush data from VRAM and pull new data from storage in a way where there is a real benefit to a 5 GB/s SSD over a 500 MB/s one are unrealistic at best.

    In all the related threads, nobody has come up with a single sensible situation where you benefit from it beyond shortening loading-screen times.
     
    itpro likes this.
  18. Loophole35

    Loophole35 Guest

    Messages:
    9,797
    Likes Received:
    1,161
    GPU:
    EVGA 1080ti SC
    You really are hitching yourself right up to this antiquated wagon, aren't you?

    You refuse to even entertain the possibility that this could revolutionize the way games are developed and presented.

    No, you just want things to stay the same so you can complain about stagnant Intel.

    I hope that IF the new consoles come out and prove you dead wrong, you will pen an impassioned apology to all the users on here you have treated as though they were idiots.

    Now to this actual post.

    In the "Road the PS5" conference Cerny did give a very detailed example of a usage that wasn't just "eliminating load screens". He was talking of FoV rendering where the GPU would render only what could be seen. You can't currently do that effectively and that would boost performance on the GPU by only having it render what is seen.

    THESE ARE THINGS THAT GAME DEVELOPERS ASKED FOR!!!

    So you are telling me YOU know more than the game developers that were working hand in hand with Sony to get the hardware they wanted to push their craft to the next level?

    So you (who have been wrong so many times and have a very bad habit on here of pushing your opinion as fact) are smarter than the teams of developers that built some of the best-looking game engines we have ever seen?

    YOU know more than Mark Cerny?

    Sorry Fox, YOU are just a user sitting in your chair, reminiscing about spinning rust and wishing to go back to a simpler time. You want so badly for your current hardware to be "better" than the new "peasant box" that you are on here shouting down anything that YOU deem "unnecessary".

    The fact is, for the next two years or so the PC MAY be the lowest common denominator - until 32-64 GB of RAM becomes standard, 16 GB of VRAM is considered mid-tier, and 3.5 GB/s SSDs are minimum spec along with 8-core SMT CPUs.

    It won't matter if we have 24 TFLOP GPUs if we can't feed them properly. Both consoles look to get rid of the bottlenecks involved in streaming data. You seem to keep ignoring that, and it appears to be on purpose.

    Why?

    Are you afraid of the future?

    Why do you want so desperately to have things continue in the way they have for the last two decades?

    All you have done in these threads is $hit on the new hardware and on anyone that is excited to see what it MAY bring. Why not give an example of how YOU would do it differently and why that is so much more intelligent than what has been developed?
     
    CPC_RedDawn likes this.
  19. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Not much is left of your post after scratching out all those ad hominems. Leave them at the door next time.

    Now to the 1st point (bold). It is BS. The GPU renders what it is told to render, and it has HW-level discard so it doesn't waste work on off-screen/hidden stuff. With new DX12 features and mesh shaders, there are already software techniques that split geometry into small enough chunks that the engine itself does efficient culling - not just of whole objects, but of small parts of objects.
    (You probably meant textures not needing to be in video memory when they are not in the viewport. But that does not improve rendering performance either.)
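
    For illustration, here is a minimal sketch of the kind of engine-side culling being described - testing bounding spheres against the view frustum so the GPU is simply never asked to draw what can't be seen. The plane/sphere representation is hypothetical, just to show the idea:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Plane:
        # Plane equation: normal . p + d = 0, with the normal pointing into the frustum.
        normal: tuple[float, float, float]
        d: float

    def signed_distance(plane: Plane, point: tuple[float, float, float]) -> float:
        nx, ny, nz = plane.normal
        px, py, pz = point
        return nx * px + ny * py + nz * pz + plane.d

    def is_visible(center, radius, frustum_planes) -> bool:
        """A bounding sphere is culled if it lies entirely outside any frustum plane."""
        return all(signed_distance(p, center) >= -radius for p in frustum_planes)

    def build_draw_list(objects, frustum_planes):
        # Only objects that pass the test are ever submitted to the GPU.
        return [o for o in objects if is_visible(o["center"], o["radius"], frustum_planes)]
    ```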

    To the underlined part: no, that's not fact. Try counting the available VRAM and RAM on a PC again; it's not that hard. Look at the average game's memory footprint: what resides in system memory, and how much of it would therefore eat out of the total memory pool available on a console. Then at least pretend that the graphics model on PC has something like shared system memory available to the GPU, which can still be accessed much faster than the storage in the PS5.

    The rest of your post repeats the same unfounded assumption that there is a big problem with sourcing data.

    Yet again, as with everyone else, your post is missing even a theoretical example where you need to pull 5 GB+ of data per second from storage into video memory for longer than a few seconds in a way that would actually impact the user experience.

    Edit: And btw, I attack statements and ideas, not people. Historically, there have been groups of people who took challenges to their ideals personally enough to create large-scale crises, one after another. But I prefer not to bring politics into this. Just because that kind of thinking is popular again does not mean you have to shift towards it or weaponize it.
     
    Last edited: Jul 7, 2020
  20. Loophole35

    Loophole35 Guest

    Messages:
    9,797
    Likes Received:
    1,161
    GPU:
    EVGA 1080ti SC
    You keep asking for examples. There are currently none, because we have been held back by current tech. The UE5 tech demo shows off asset streaming in the flight sequence (don't believe me? Then maybe you will believe this guy).

    The Ratchet and Clank demo showed off almost instantaneous loading of a whole new world.

    I could see more diverse and denser environments in racing games becoming a reality.

    No one in here is saying this will be a magic bullet (that was you presenting a strawman), just that it's a step forward. We may never see the consoles' dedicated I/O hardware on PC, but we may get a pseudo-iteration of it with the big/little design of Alder Lake, where the smaller Atom cores are dedicated to asset decompression, streaming and check-in, while the bigger performance cores are freed up for less mundane in-game tasks like more complex physics models, or AI that's not just a decision tree. Or just sheer core count on AMD. Lots of possibilities.
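
    A rough sketch of that idea in Python - a couple of worker threads dedicated to decompression while the main loop keeps going. The queue/zlib setup here is purely illustrative (a stand-in for a Kraken/BCPack-style codec), not how any console or engine actually wires it up:

    ```python
    import queue
    import threading
    import zlib  # stand-in for a Kraken/BCPack-style codec

    asset_requests: "queue.Queue[bytes]" = queue.Queue()
    ready_assets: "queue.Queue[bytes]" = queue.Queue()

    def decompression_worker() -> None:
        """Would run on a 'small' core: pull compressed blobs, push decompressed assets."""
        while True:
            blob = asset_requests.get()
            ready_assets.put(zlib.decompress(blob))
            asset_requests.task_done()

    # Dedicate a couple of workers, leaving the remaining cores for game logic.
    for _ in range(2):
        threading.Thread(target=decompression_worker, daemon=True).start()

    # Game loop (sketch): request assets ahead of time, consume them when ready.
    asset_requests.put(zlib.compress(b"terrain tile 42"))
    print(ready_assets.get())  # b'terrain tile 42'
    ```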

    Now, I don't know how true this is, but I've heard grumblings that the games currently being developed exclusively for the PS5 would require a 16 GB graphics card and 32 GB of RAM to simulate the asset streaming of the PS5's SSD, and even then they were hitting some problems. Again, I don't know if this is true.

    I just think being cautiously optimistic is more fun in this instance than being a stick in the mud.
     
