Radeon RX 5700 series review leaks out at Polish website

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 5, 2019.

  1. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    I have seen an article about Intel, and the man behind it made the most "stupid" title he could. There is no such thing as Moore's law...
    You believe in 30 years of something without having actual data about the things you make assumptions on. I do not have that data either.
    But maybe you would like to do some work, like finding out TSMC's actual revenue and the actual number of wafers shipped, plus a little something like this:
    [image]
    I'll help you a bit more:
    The full 300-page TSMC annual report for 2018 states:
    - Capacity: (millions of 12-inch equivalent wafers)
    - 37% of wafer revenue from nodes > 28nm
    - 63% of wafer revenue from nodes ≤ 28nm

    But even without knowing the details, you can see that their 7nm revenue exceeded 16/20nm revenue half a year ago. The question is: do they make more 7nm wafers, or do they charge more per wafer?
     
    Last edited: Jul 5, 2019
  2. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,975
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    Could be both, either, or neither:
    The more you shrink the dies, the more you can fit on the surface area of the wafer (remember, they're circular). More dies per wafer means more usable product, and more product means more sales. It also helps that this means there's less wasted material.
    So, shrinking the transistors is one way to cram more dies onto a wafer. But since AMD seems to have the bulk of TSMC's 7nm workload, AMD's modular design (at least for Ryzen) also allows for smaller dies. So both Ryzen's architecture and TSMC's die shrink probably yield a very substantial increase in total usable product per wafer, all without having to pay for more material.

    It's also possible TSMC simply figured out how to reduce production defects.
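    A rough back-of-the-envelope for the dies-per-wafer point, using the common gross-die approximation and made-up die sizes (illustrative only, not real products):

    Code:
    import math

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        """Wafer area divided by die area, minus dies lost along the round edge."""
        r = wafer_diameter_mm / 2
        return int(math.pi * r ** 2 / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    for area in (330, 250, 180, 80):
        print(f"{area} mm^2 die -> ~{dies_per_wafer(area)} candidates per 300 mm wafer")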
     
    Fox2232 likes this.
  3. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Yes, but what you write about is performance per die size. @Aura89's argument was more about simple die size versus product price across multiple nodes, without knowing the actual cost of those dies.

    7nm can fit twice as many transistors into the same area. But where is it written that this area is not three times as expensive?
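    Toy numbers to make the point concrete (both factors are assumptions, not TSMC figures): if density doubles but the price per mm² triples, each transistor actually gets more expensive.

    Code:
    density_gain = 2.0  # assumed: transistors per mm^2 on 7nm vs the previous node
    cost_gain = 3.0     # assumed: price per mm^2 on 7nm vs the previous node
    print(f"relative cost per transistor: {cost_gain / density_gain:.2f}x")  # 1.50x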

    Here is one thing to add: who had the leading edge in chip manufacturing in the past and could therefore dictate prices? Intel. Who has it now?
     
  4. oxidized

    oxidized Master Guru

    Messages:
    234
    Likes Received:
    35
    GPU:
    GTX 1060 6G
    What do you mean, "RT cores are actually cheap"? I mean, just look at Nvidia's 10 series: the 1080Ti is still considerably faster than the 5700 and 5700 XT, with 12B transistors vs 10.8B.
    No, I'm not that type of guy, although I bought a 1060 two years back, which so far hasn't given me any problems at 1080p with max or close-to-max settings in pretty much anything I play. I agree a 1070 would have been more future-proof even for 1080p, but I didn't want to spend more than I did on a video card; I bought it just to play some games I was interested in, and I wasn't aiming to keep it for more than 2-3 years. The 5700, 5700 XT and 2070 are all overkill for 1080p, there's no arguing that. Barely 130fps? Are you joking? 130 is far more than needed for perfectly enjoyable gameplay, unless you think you can see past a 165Hz refresh rate and actually own one of those BS 240Hz monitors, in which case I honestly don't even want to talk, because it would be pointless. Who cares about DXR, honestly; that's another matter, those tests were made with RTX off, and classic raster gameplay is the comparison everyone uses. Not even the 2080Ti is capable of a stable 60fps at 1080p with RTX enabled. The 2070, 5700 and 5700 XT are solid 1440p cards, and probably decent, acceptable 4K cards.

    Well, it's not the transistor itself increasing performance; it's that they'll most likely increase the number of transistors while keeping the same die size, increasing performance (more transistors) while consuming the same or less than on the larger node. Architecture makes a difference, but performance mainly comes from increasing the number of transistors, not from changing the architecture; the architecture will only do so much to improve it. Another thing that makes it possible to improve performance is optimizing the process, making it consume less and need less voltage so that frequencies can be raised, and improving the die configuration, which is also partly thanks to an improved architecture. But still, the main margin of improvement is in the hands of the silicon.

    7nm is no holy grail, it's just a smaller node, which should allow for more transistors and hence more performance - and it did: AMD reached Nvidia's performance. But Nvidia is still on 14/12nm, and they'll take another jump forward in performance once they go to 7nm, there's no doubt about it. At that point AMD will seriously need to rethink their architecture completely, because they'll both have the same silicon, but one side has a proper way to use it and the other doesn't, losing performance and efficiency.
     

  5. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    The 1080Ti does not have remotely adequate compute ability compared to most of the later GCN generations. nVidia could not move forward with their new features without beefing up all the compute capabilities.
    (Before Turing, nVidia used minimalistic SMs, saving transistors by not having FP16. That would have been game over in 2 years. By then you'll be able to compare new games made for Turing/Ampere or RDNA; they will drop Pascal and older nVidia GPUs below the contemporary GCN cards that were previously comparable in performance.)

    As for RT being cheap in terms of transistor count: there are members around here who would argue that nVidia could easily fit in many more RT cores. (Not me; I think the truth is somewhere between their position and yours.)
    As for the "BS" 240Hz monitor: you can see it in my signature, can't you? And I can clearly see the difference between 120, 160 and 240fps. Not that I can enjoy the last one very often, due to a weak GPU that requires sacrifices even at 1080p.
    (I'll tell you a secret: turn off motion blur.)

    And I do agree with your notion about sustainable DX-R at 1080p, even though there have been some optimizations over time in particular games. But our original point was about the actual cost of the chips used in these cards. Going forward, nVidia will most likely increase RT capability because, as you wrote, the performance sucks.

    And while I do not need those DX-R effects unless they bring something that looks incomparably better (raytracing's potential is the same as moving from DX7, which had no shaders, just T&L, to DX8, which had them; though I have seen better-looking real-time rendering without DX-R than with it), it would be foolish of me to expect that now that DX-R is out of the box, it will be put back. No, it was nVidia's master plan for the next 10 years: something they can gradually improve to get people to refresh every year, now that "traditional" rendering has reached the point where a card like the 1080Ti would have lasted many years at 1080p. (That's likely the reason nVidia shifted the paradigm again, since 4K was no longer the target.)
     
    Last edited: Jul 5, 2019
  6. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,975
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    That's tremendously generalized. There are many ways to affect performance using the exact same number of transistors, but that all depends on the architecture. For example, you can opt for larger pipelines in place of more cores, or a larger cache in favor of advanced instruction sets. If simply throwing more transistors at the problem were all it took to get better performance, we'd be living in a very different world. If architecture didn't matter that much to performance, architectures like ARM or RISC-V wouldn't exist, and the Radeon series would be faring better. If architecture didn't matter, we wouldn't be discussing how Nvidia is still ahead despite being on a larger node.
    The Radeon VII has a very similar transistor count to the 2080Ti and uses a smaller node, and yet it falls behind in gaming. You can't blame the drivers, because Linux uses totally different drivers and the performance isn't really any better. The VII performs great in non-gaming tasks. The architecture is everything.
    Yes, that's all true, but it doesn't directly correlate with a transition to 7nm.
    I'm not saying a die shrink doesn't do anything, and Nvidia is probably going to extend its lead once it transitions to 7nm. But it's not going to be that big of a difference - not enough that the transistor size should take most of the credit.
     
    Fox2232 likes this.
  7. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    I do wonder about this part. Historically, GCN was not only power hungry but also had trouble reaching high clocks. That was nVidia's easy way of outperforming AMD.
    But now there is RDNA, which clocks to 2GHz on 7nm. I really wonder what clock limit nVidia can reach on 7nm (or on 7nm EUV).

    It is quite possible that nVidia will not get much higher clocks than they already have and will instead take the power-efficiency route, making GPUs with even more transistors until they fill the power-draw budget.
     
  8. oxidized

    oxidized Master Guru

    Messages:
    234
    Likes Received:
    35
    GPU:
    GTX 1060 6G
    "1080Ti does not have remotely adequate compute ability against most later GCN generations" I'm not sure i understand correctly what you mean, but it sounds like "if GCN had a 1080Ti it would be much faster than the 1080Ti" correct me if i'm wrong, but if you meant that, i don't know what to tell you honestly, GCN isn't just capable of doing many things nvidia's architectures have been able to do these latest years, i think that's quite clear, so i'm pretty sure GCN only had to learn from nvidia, and nothing to teach, RDNA is pretty much the same but for some reason you, and some other are convinced it's completely different and will do special things, if these special things are Navi 10, i'm not sure they're so special, they're good, but not special, and probably still not nvidia's level, i mean AMD is yet to reach 1080Ti performance even with these 2, hopefully they'll manage with 5800XT or 5850,because after that they have still 2080Ti to compete with, and if those recent rumors are true, they are planning to compete even at high end level, so please don't tell me they don't plan to, because this time they probably do, and if this is the appetizer it isn't looking great. That feature that goes with the community name of "fine-wine" is something which isn't consistent, and it's nothing AMD planned on, it's something nvidia probably planned on, probably gimping or faking results to make people buy their new cards, but again it's nothing consistent and it happened with 2 cards, and possibly thanks to the higher quantity of memory AMD equipped their cards with, compared to nvidia. I think Turing is pretty much Pascal on steroids with some optimizations, first of all better silicon, and surely some new features, but still it's probably less of a jump than GCN > RDNA, i have no idea how many RT cores they can put into those dies, but still they'll take their space and limit classic performance.

    No, honestly, I didn't notice your signature; I never read signatures, because I don't care and on this forum layout they aren't really that visible. But anyway, 240Hz is BS. FPS and Hz aren't the same thing, not even in this case. I can see how you're able to tell the difference between 120fps and 160, but you won't see anywhere near the same difference from 120Hz to 160Hz, even if your system is capable of it, simply because not every game works the same. If you take a Source Engine game and play it capped at 60fps, or 100, or 120, there will always be something weird about it, even on a 60Hz monitor.

    Some years back I played a lot of competitive Team Fortress 2, and there were (there still are, actually) people using all kinds of configs to make the game run as smoothly as possible. Some of them even switched to DX8 (the game allowed it) to get rid of superficial effects and maximize their fps, even when it was already far, far above the refresh rate of their monitor. Some claimed that to achieve the best possible smoothness in that game on a 60Hz monitor you had to reach around 300fps (the game's default cap, though you could go past it) and stay there, not dropping too much, because even a drop to 200fps was noticeable - and I can confirm this: the game was not that smooth even at double the fps your monitor could show. This is to say that it greatly depends on the game, but in most of them you can see the difference because you start from a higher framerate and have a bigger margin to drop. Basically, dropping 30fps from 160 isn't a huge deal, while dropping 30fps from 120 is something you'll notice much more easily, even on low-refresh-rate monitors. The reason is obvious: you're getting closer to what human eyes are capable of seeing, with not much difference left. The difference between 60 and 120 is huge; the difference between 120 and 165Hz isn't even a fraction as noticeable, and above that it gets even thinner. That's why 240Hz is BS: even granted the frames are real, you won't benefit from them at all, since your limit is way below that, so whatever difference you might see is almost imperceptible.

    I'm not saying architecture doesn't matter that much; I'm saying silicon matters more. The architecture is built around the silicon, not the other way around, so these architectures are all built with a specific node in mind, and when swapping nodes it's obvious you won't get the same benefits as with an architecture built around the new node. The silicon is everything; the architecture is just what comes after. Anyway, I have to be honest: when talking about architecture I was thinking of it in a more "software" way and didn't consider that it actually includes everything, but my point about what matters most still stands. Nvidia has had the advantage for years because, yes, their architecture is better, but their silicon has been better for years too. GloFo was never really on par with anyone; they probably made the worst silicon of these last years.
     
  9. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,975
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    I know that's what you're saying, and I'm telling you that's not the case. The silicon really does not matter more.
    That's contradicting your point. To paraphrase what you just said: "you won't see many benefits by just going to a smaller node, you need the architecture to complement it".
    But you don't have to do that either. As pointed out earlier, the Radeon VII was mostly just a die shrink of Vega 64 with more memory. It didn't yield a big difference, but it did yield a difference. AMD didn't have to do much because the whole design basically just scaled down by 50%.
    By that logic, that's like saying the foundation is more important than the skyscraper sitting on top of it. The foundation is obviously crucial, because without it you don't get a skyscraper (at least not for long). But to say the foundation is everything, as though it's the reason the tower is impressive, is nonsense. A foundation is nothing more than a bunch of concrete and piles. What you build on top of it is what everyone cares about, and it determines whether the building actually fulfills its purpose.
    Processors are no different. It doesn't matter how many transistors you have if their configuration is crap. It doesn't matter if smaller transistors can offer higher efficiency if the architecture itself isn't efficient.
    And no, I'm not saying the RDNA architecture is crap or inefficient. My point is that you are the one who noticed that Turing performs very competitively despite using larger transistors. How could the silicon possibly be the sole reason for that?
    My point in bringing up software was to show that the performance doesn't change much regardless of which drivers you use, suggesting that the architecture is primarily what determines the hardware's capabilities.
    Silicon quality really only determines how hard the architecture can be pushed. You're not wrong that GloFo's transistor quality is sub-par, but that really only affected clock speeds. And even then, only for their CPUs - to my knowledge, AMD has never used GloFo for GPUs. So if you're going to compare AMD's GPU silicon quality vs Nvidia's, it's pretty much the same quality. Y'know why Nvidia can clock higher without being such a major power hog? Say it with me:
    It's the architecture.
    How you arrange the transistors affects the clock speeds they can reach.
     
  10. oxidized

    oxidized Master Guru

    Messages:
    234
    Likes Received:
    35
    GPU:
    GTX 1060 6G
    Fair point, but I just want to say I never even thought that architecture doesn't matter and all that matters is silicon; I never implied that, because it's obviously not true. And yes, as I said earlier, to see the benefits of a node shrink you also need the architecture to complement it, but that's just in order to take advantage of the silicon you use; the raw power, the potential, resides in those transistors. It's not like they think up an architecture every generation and are always able to come up with something that improves performance - why not go directly to that, then? There probably have to be certain conditions that let them come up with something better, and part of it is probably down to silicon improvements, in terms of consumption and efficiency. Anyway, I think GloFo was used until recently: Wikipedia says Polaris and the entire first Vega lineup are based on GloFo's 14nm, while the Vega 7 is made by TSMC.
     

  11. UnrealGaming

    UnrealGaming Ancient Guru

    Messages:
    3,454
    Likes Received:
    495
    GPU:
    -
  12. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    @oxidized: Go write yourself a nice game that makes heavy use of FP16 shading. You'll see the Pascal-to-Turing difference then; Vega/Turing will not suffer where Pascal will.
    Your sentiment that Turing is just Pascal on steroids plus some optimizations is as far from reality as it can be. nVidia did more work there than ever before.

    For the Hz/fps part:
    You are wrong there, especially where you describe high fps as better than a high refresh rate. What you described is the issue people had back before LCDs moved up from 60Hz, and then again before adaptive sync came to be.
    1st situation: at 60Hz, the need for high in-game fps comes from the fact that a 60Hz screen can display a new frame only every 16.7ms, but the GPU can finish rendering a frame at any moment within that interval and only the latest frame gets displayed. This is basically a frame-pacing issue where the displayed content has a different delay than the rendered one. => Stutter
    If you added an old 100Hz PS/2 mouse or a 125Hz USB mouse, you had wild spikes in the input => processed image => displayed image lag, where high fps compensated for the biggest variable.
    2nd situation: having a 120/144Hz screen and fps somewhere around that. That's no longer an issue of bad frame pacing; that is the turning point of seeing more refreshes on screen and therefore having more content to react to. It is not so much about input-to-display lag as it is about far better object tracking.

    Now the part you ignored:
    A 144Hz screen with adaptive sync can display a new image every 6.9ms at best. This means that if you are running a game at 140fps on average but the actual frametimes vary over time, any frame that arrives less than 6.9ms after the previous one has to wait, just as it did on a 60Hz screen (where the wait could be up to ~16.7ms).
    Using a 240Hz screen gets you down to 4.2ms as the minimum time between two consecutive displayed images. So if you have a game where frametimes keep moving up and down, the screen absorbs the frame-pacing issues.
    Screens do not deal with fps, they deal with frametimes. You can average 120fps on a 144Hz screen and still have bad pacing issues, simply because the frametimes were all over the place.
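    A rough sketch of that arithmetic (nothing here beyond 1000 divided by the refresh rate):

    Code:
    # Minimum interval between two displayed frames; with adaptive sync, only
    # frames that finish faster than this interval ever have to wait on the screen.
    for hz in (60, 144, 240):
        print(f"{hz:3d} Hz -> {1000 / hz:.1f} ms per refresh")
    # 60 Hz -> 16.7 ms, 144 Hz -> 6.9 ms, 240 Hz -> 4.2 ms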

    I have known people who argued that 60Hz is enough - until they got a 120/144Hz screen and used it for a while. Then, upon seeing how their old 60Hz screen ran again, they changed their minds. The same goes for 240Hz, and will go for 480Hz. I hope that one day we'll get OLED screens with a 0.1ms response time that display each image as soon as the GPU can deliver it.

    Which brings us to your last misconception, about "human eye" capability. Your eyes do not have a refresh rate; they are analog. Their downside is persistence, which can result in something you might call motion blur.
    Except that on a screen it is not blur; it is a set of images over time where the newest is clearest and the oldest is least pronounced. It is not a clear, smooth path of an object moving from point A to point B on screen. No, you see the object at point B and remember that it was at point A. The higher the achievable refresh rate, the more iterations of the object you see between points A and B.
    You can get the illusion of an analog image by having motion blur in the game. It feels smoother, but it is not, and it will not help you with object tracking - quite the opposite.
    You will keep having an insufficient refresh rate until it is high enough to move fast objects by 1 pixel per frame. That's the final frontier of refresh rate.

    Have you ever used https://www.testufo.com/ ?
    Have you ever seen it on a 240Hz screen?

    Set the UFO count (the number of lines where UFOs are moving) so that you can see the 30fps line too.
    Set the speed to 3840 pixels per second, to simulate the speed at which you make a 180° turn in a game with a 90° FOV.
    Don't look at the UFOs - there is no reason to track them. Look at the big READY text under them. Now think about what your eyes deliver to your brain and what you perceive, because that is your perception across the large portion of the screen you are not actively tracking, since the area you see with real clarity never covers the whole screen.

    Feel free to remove the 30fps UFO line once you realize that refresh rate is barely sufficient to tell which direction the 2 UFOs per line are moving unless you actively look at them (you are looking at a screen 1920 pixels wide, with 2 UFOs per line jumping 128 pixels per update while they are only 960 pixels apart). Even on a 240Hz screen that 30fps line still jumps 128 pixels per update, and it is still far from comfortable.
    Now check the same thing with the lowest line set to 60fps. Still sucks, right?

    Remove the lousy 60fps UFO line and you are left with the 120fps line. It looks better, but if you have the 240fps line above it, you know the difference. On my screen, under the setup above, the 240fps UFO moves 16 pixels per frame. From high-fps recordings I know my screen has minor pixel persistence, and a bit of ghosting from the previous frame can be seen in high-contrast situations like this UFO test. But I see around 3 distinct UFOs overlaid on each other when I am not tracking them, or just one when I track them. That would put my eye/brain at roughly 12ms of persistence, where the newer the information, the more pronounced it is.
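    The pixel jumps above are just the speed divided by the update rate; a quick sketch with the 3840 px/s setting from earlier:

    Code:
    # 3840 px/s horizontal speed (a 180° turn per second at 90° FOV on a
    # 1920 px wide screen) and how far a UFO jumps between consecutive updates.
    speed_px_per_s = 3840
    for fps in (30, 60, 120, 240):
        print(f"{fps:3d} fps -> {speed_px_per_s / fps:.0f} px per update")
    # 30 -> 128 px, 60 -> 64 px, 120 -> 32 px, 240 -> 16 px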

    And that's where it gets to... tracking. The ability to track objects you focus on gets better with higher Hz matched with appropriate fps, and the same applies to the perception of objects you do not actively track.
    Having a 60Hz screen and 1000fps will not make you any better at tracking those UFOs; they are already synchronized to have minimal frametime variance. They are what you could call the best-case scenario, since they provide great image clarity and a stable rate of image delivery. Games are always worse than that.
     
  13. oxidized

    oxidized Master Guru

    Messages:
    234
    Likes Received:
    35
    GPU:
    GTX 1060 6G
    You are convinced you can see the difference between a 6.9ms and a 4.2ms frametime; that's the problem, you can't! You just think you can, and by using that stupid testufo you're forcing yourself into thinking it. Eyes are "analog", as you call them, but they have a limit, and the brain has a limit too, and nobody is able to benefit from 240Hz, nobody. "Enough" doesn't exist; anyone can play with no problems on 60Hz, and the same goes for the rest. The only problem with all of it is that at some point 120/144Hz becomes your new standard, and it becomes more important than actual display quality, while still costing a lot and giving you marginal benefits in game. Not to mention that at that point your minimum acceptable framerate won't be 50 or 60 fps anymore, it'll be higher, and for that you'll need to spend more on your hardware, just because the monitor "needs" more.
     
  14. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    @oxidized: Who cares about cost. Your brain is an analog device, and so are your eyes. The greatest property of the brain is its ability to derive distance from time. Unless the brain has some malfunction or no experience, someone can throw you a baseball and you can catch it even if it is no longer in your field of vision as it makes contact with your hand.
    You can even close your eyes while it is in mid-flight and you'll still put your hand in the correct place and expect contact at the correct time.

    But that works because the input information flows to your brain at a steady rate. Your poor gaming scenario of 60Hz stutter that needs 300fps to paper over it is exactly why you need a high refresh rate and adaptive sync. It does not matter whether you consciously see the difference between a 1ms and a 10ms frametime. What matters is that the brain does the calculation anyway, and if you have variable input-to-display lag - (micro)stutter - your ability to move the mouse accurately is hindered.
    Playing at a 60fps average on a 144Hz adaptive-sync screen delivers much higher accuracy than playing at 300fps on a 60Hz screen, because the 144Hz screen's minimum frametime is low enough that no frame should have to wait.

    If you want to play at around 120~144 fps, you should make sure the actual frametimes stay within the range of your screen, and that's best achieved by having a screen with a wider adaptive refresh rate range.
    The 2nd best option is RTSS with a frametime limiter instead of an fps limiter, set so that it matches your maximum frametime (worst-case fps). This removes the wild spikes in intense scenes while playing uncapped.
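    The conversion is trivial if you want to try it (the worst-case figure below is just an example - measure your own):

    Code:
    # Turn a measured worst-case fps into a frametime cap (ms = 1000 / fps).
    worst_case_fps = 110  # hypothetical value, replace with your own heavy-scene low
    print(f"frametime cap: {1000 / worst_case_fps:.1f} ms (~{worst_case_fps} fps)")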

    And that brings us to the price of the HW you like to bring up so much. Optimally, your HW should never produce a frame at a lower than expected rate. A 120fps average is no good if it is all over the place; a stable 100fps on HW that can do much more is far better. If you want it good, you pay.

    Your approach of throwing dirt at something you have never experienced is sad. But hey, I expect Warframe should pull high fps even on your HW. Get someone to lend you a 240Hz adaptive-sync screen, use RTSS, and fine-tune the frametime limit so the fps has no variation. Enjoy, for once, 235fps on a screen that can display it without wild frametime lag.
     
  15. oxidized

    oxidized Master Guru

    Messages:
    234
    Likes Received:
    35
    GPU:
    GTX 1060 6G

    As always from you: mirror climbing and no facts, only beliefs you always try to present as facts, but they're not. I can play on 60Hz no problem; there's no actual difference between me and someone with a 144Hz monitor, it's something close to a placebo effect. I prefer to spend my money on resolution and quality instead of a refresh rate you just get used to, which becomes your new standard, and whose difference you only notice when put in front of a lower-refresh-rate monitor, not the other way around. You have no idea how Team Fortress 2 worked; I have 6K hours in it and I can confirm what I was saying earlier. You don't understand that theory doesn't always explain everything: 60fps on 60Hz was horrible even when it was super steady.
     
    Last edited: Jul 6, 2019

  16. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Your issue is that you are right that 60fps + 60Hz is generally bad unless you can keep good frame pacing - but compensating for frame pacing by pushing 300fps is as stupid as it gets.
    Take that 300fps, reduce it to 235fps and display it on a 240Hz screen with adaptive sync.
    It is not theory; that's reality.

    The theory is a flawless calculation of frametime range, maximum variation and input-to-display lag variation; that tells you where each solution falls. The things you described are trash, and you defend them at all costs, even at the cost of overly expensive HW. People buy a GPU twice as expensive as a 240Hz screen every 2 years, and the monitor's functionality for a given use case will not become worthless for 5+ years, as long as one gets it while the technology is reasonably new.
    I got a reasonably good 120Hz screen when they were new, flashed the firmware to 144Hz some years later, and got a 240Hz screen when those became reasonably priced in turn.

    I have no need for beliefs; I experienced those steps myself. When someone who actually owns the product says they do not think it was worth the price, or that it did not make enough of a difference, that is to be taken into consideration. When you speak from a place of no experience or knowledge, you can't be taken seriously.
     
  17. oxidized

    oxidized Master Guru

    Messages:
    234
    Likes Received:
    35
    GPU:
    GTX 1060 6G
    And how do you know I haven't experienced those myself? The only thing I haven't experienced is the 240Hz monitor BS, and my opinion on that is based on many opinions on the internet and on the logic that the difference gets thinner the higher you go. As I said, from 60 to 120 the difference is huge; 120 to 165 is like going from 60 to 75Hz, if not less - that's how my logic works. Theories aren't flawless, never; actually they're always flawed, you just don't factor in that your body isn't able to notice those things. How a game looks and feels depends on the game; 300fps on that 60Hz monitor was the target to hit, and the main reason was to stay as far as possible above the monitor's refresh rate so that, in case of dips, you still had as many fps as possible. "The things you described are trash" - keep climbing, dude.
     
  18. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Do you even understand the basics of movement? Do you know what a degree is?

    20 years ago we used freaking ~14'' 60~95Hz screens - look up the dimensions yourself. Now we are using 24'' and larger screens.

    If you sit the same 80cm from the screen, the game has a 90° FOV, and you turn 45° to the side over the course of 1 second, tell me how many cm per screen refresh objects move on a 14, 24, 32, ... inch screen if you are on the same 60Hz.
    144Hz on a 24'' screen barely provides the same tracking ease as a ~14'' screen at 60Hz, because the same angular speed results in a much larger distance traveled on the actual screen per refresh (making it harder to track objects).
    Display technology took a gaming downgrade with the introduction of 60Hz LCDs, because the CRTs used for gaming were usually much smaller and ran at 75/85/... Hz refresh rates.
    From a twitch-game perspective, 144Hz is barely a return to what we had back in the days of CRTs.
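    A crude worked version of that question (assuming 4:3 for the old 14'' screen, 16:9 for the rest, and approximating the projection as linear, so a 45°/s turn at 90° FOV sweeps about half the screen width per second):

    Code:
    import math

    def visible_width_cm(diagonal_inch, aspect=(16, 9)):
        w, h = aspect
        return diagonal_inch * 2.54 * w / math.hypot(w, h)

    for diag, aspect in ((14, (4, 3)), (24, (16, 9)), (32, (16, 9))):
        width = visible_width_cm(diag, aspect)
        for hz in (60, 144):
            # half a screen width per second, split across `hz` refreshes
            print(f'{diag}" @ {hz} Hz -> {0.5 * width / hz:.2f} cm per refresh')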
     
  19. oxidized

    oxidized Master Guru

    Messages:
    234
    Likes Received:
    35
    GPU:
    GTX 1060 6G
    Screen size has nothing to do with this; it's not about tracking or not tracking stuff, it's about what you can and cannot see and what your brain can and cannot process.
     
  20. Goiur

    Goiur Maha Guru

    Messages:
    1,340
    Likes Received:
    630
    GPU:
    ASUS TUF RTX 4080
    I may be some kind of weirdo, but I can tell the difference from 60 to 144Hz just moving the mouse around, and FPS gameplay looks completely different too. I cap games at 60, 90 or 144 fps depending on the genre. If you need to convince yourself otherwise, or maybe you are just lucky that you can't tell the difference, so you don't have to buy an expensive 144-240Hz monitor - good for you. But you can't say there is no difference.
     
    Order_66 likes this.
