Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Aug 16, 2018.
Fox2232 will never agree; he's just trying to justify his Ryzen buy.
Ignore his spam
Yes, he won't accept the truth.
Don't get me wrong: Ryzen is a great CPU, especially for the price.
But people who say they perform the same, or that it's only a small difference in games, have tunnel vision.
It wouldn't surprise me if anything that came out in the Core 2 Duo era ran better on Intel; a lot of it was coded optimistically with "next year's 5 GHz dual core" in mind at the time.
Don't lie, we both know everything was AMD-optimized during that age; games ran up to 10 times faster on AMD CPUs, literally.
Taking an old Intel-optimized engine and pushing it into a nearly unplayable state to show a difference on a current 5 GHz unlocked Intel CPU is the same scenario as if I took any modern, well-threaded game and streamed it at 1080p+ with high-quality encoder settings.
The only difference is that you have to cherry-pick your old Intel-optimized game, which can be artificially broken, while anyone can pick any random game for streaming.
The funny part here is that you are building your scenario on: "How to make an argument by not utilizing the $360 CPU you paid for."
While that other scenario is: "How anyone actually uses their $300 CPU."
But I wonder about that scenario of yours. Since Ryzen does 40 fps there, how does a Sandy Bridge CPU @ 4.5 GHz do? 36 fps? Which Intel CPUs actually have it playable?
Wait, I know. There were actually mods for Oblivion which did not increase CPU requirements: one improving fps (because the developers never fixed their code) and another reducing/removing stutter on some systems.
And about that draw-call overload you actually have in that scenario: it's Oblivion, so with every light source you are multiplying the number of draw calls for the entire scene. That's why a mod which limits the number of light sources to 8 makes it playable even on that older hardware.
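That multiplication is the classic one-pass-per-light forward-rendering cost. Here is a toy sketch with made-up numbers (nothing below is measured from Oblivion; the 8-light cap is just what the mod reportedly enforces) showing why capping lights helps so much:

```python
from typing import Optional

def forward_draw_calls(meshes: int, lights: int,
                       light_cap: Optional[int] = None) -> int:
    """In a one-pass-per-light forward renderer, every visible mesh is
    redrawn once per light that touches it, so draw calls scale with the
    product of the two. Capping active lights bounds the total."""
    if light_cap is not None:
        lights = min(lights, light_cap)
    return meshes * lights

# Hypothetical scene (illustrative numbers only):
# 500 visible meshes and 20 light sources in view.
uncapped = forward_draw_calls(500, 20)    # 500 * 20 = 10000 draw calls
capped = forward_draw_calls(500, 20, 8)   # capped at 8 lights -> 4000
```

With the cap, draw calls grow linearly with scene size again instead of with the scene-lights product.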
And the sad reality here is that the only mod Oblivion ever really needed was not for visuals, but to remove level scaling and let monsters have strength according to their type. I bet that one does not hurt performance.
And I did not write that they perform the same. There is a real ~20% difference in maximum achievable fps for anything that's not properly threaded. Not a double-fps situation; that requires Intel's dirty compiler trick.
Gents, we're all friends here, sharing the same passion for technology, right? Let's keep things friendly and civil, okay?
Suuree, let's just forget about why this tool exists: https://github.com/jimenezrick/patch-AuthenticAMD
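For context: the Intel compiler's CPU dispatcher selects code paths based on the CPUID vendor string, and tools like the one linked above patch that check out of compiled binaries. Below is a purely illustrative Python sketch of the vendor-string-swap idea; the real tool operates on actual ELF executables and its mechanics differ, but this shows why the two 12-byte strings matching in length matters for in-place patching:

```python
def patch_vendor_check(binary: bytes) -> bytes:
    """Swap every 'GenuineIntel' occurrence for 'AuthenticAMD'. Both
    strings are exactly 12 bytes, so all file offsets are preserved,
    which is what makes an in-place binary patch possible at all."""
    assert len(b"GenuineIntel") == len(b"AuthenticAMD")
    return binary.replace(b"GenuineIntel", b"AuthenticAMD")

# Toy 'binary' blob containing the vendor-check string:
blob = b"\x7fELF\x00cmp GenuineIntel\x00"
patched = patch_vendor_check(blob)  # -> b"\x7fELF\x00cmp AuthenticAMD\x00"
```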
I'm not sure you want to be blaming the CPU when the test is rigged.
Also, Bethesda is notorious for not enabling CPU optimizations. Doesn't anyone remember what launch-day Skyrim was like? It was atrocious even on Intel CPUs.
Skylake-X actually underperforms even worse in Bethesda games, so, gentlemen, quit your nonsense.
Or doesn't Skylake-X use the GenuineIntel vendor string?
The Skylake-X lineup has something in common with Ryzen: both are optimized for bandwidth and throughput, and both are aimed mainly at servers. Because of that, they have higher cache, core-to-core, and memory latency, so you can expect performance regressions of up to 60% in certain titles.
Skyrim didn't have any problems with Intel CPUs; it ran badly on Bulldozer and its successor.
You seem to be stuck on the oblivion comparison.
Same principle applies for fallout 4.
Crysis 1 is another scenario where the performance difference approaches 2x.
I'm very familiar with Crysis 1/Wars, as I have played around a lot with CryEngine and Lua functions.
There's no 'dirty' compiler trick running there.
Crysis has a limited threading ability with game logic.
My point is that there are many game titles where the performance gap gets much greater than 20%, mainly games that aren't threaded well and are not GPU-limited.
I'll give another example I had personal experience with. I have a buddy I play a lot of Borderlands 2 with, a UE3 game.
He had an old AM3 system that died, so we built a 1700X system with 3200 MHz Samsung-IC memory, clocked at 4 GHz with tight secondary/tertiary timings.
A specific map area called Thousand Cuts is huge and tanks performance.
His system was ~50 fps to my ~90 fps; I don't recall the exact numbers, but it was nearly double.
There are no shady compiler practices going on there either.
Yes, performance sucked at launch for everyone; it took a modder to incorporate a fix into the game to increase performance.
But we can't say every single game is rigged that has a stark difference in performance can we?
Especially newer titles.
You are right there.
A guy with an i7 6700K @ 4.6 GHz could not get much more than 40 fps there, and his GTX 1060 could not be fed enough to pass 35% utilization.
Maybe the reason why you did not have the same 40 fps in the same spot is not your magical double-fps CPU, but something different in the options.
That's a different spot on a different map called Southern Shelf, so we do not know if he is right, nor how Ryzen would perform there.
That's not the same area.
That's an early beginning area after starting the game.
Secondly, we stood at the same spot with the same settings applied.
I do not think it really matters. Unplayable is unplayable. It is like following this person with a 4690K @ 4.7 GHz and a GTX 1080.
This particular game has it apparently all over the place.
The last Borderlands I got was the 1st one. It ran reasonably well even on a notebook with a C2D @ 2.73 GHz. But I do not buy broken games, and PhysX games are rarely anything other than broken.
Otherwise I would tell you how Ryzen does there. But I expect similarly broken fps.
@Agent-A01 posted quite a few things in this thread, and every time I went to verify his information, it ended up not so well for his statements.
I mean, he makes it look like he is completely unaffected by anything that affects everyone in both AMD's and Intel's camp. And then it looks like he believes there is 2x per-core performance on Intel's side.
IMHO, Agent is insensitive to low fps, does not remember things clearly, or is exaggerating to the point where it looks like he is making things up. I'm not sure which of those, or which combination, is the cause of all this nonsense, but at least one is.
I've just tested those settings on an old FX-6350 @ its default 4.2 GHz with a GTX 980, and it does the same. Even though I usually get 75 fps @ 75 Hz in BL2, that Southern Shelf area always drops to 40 fps, even with the lowest settings. As soon as you turn away from that area it goes back up and the game is playable, no problem. I thought BL2 was DX11, but it's DX9, right?
Your video disproves nothing.
Everyone who plays that game at a high level disables PhysX entirely, which improves performance a lot.
The video you sent shows PhysX disabled but with poor performance.
In that particular spot I get way more fps on max settings, which shows there is another issue with his setup.
I've played that game through multiple setups to know that it's a particular issue on his setup.
I'm not sure you know what insensitive means; that's clearly the exact opposite of what I am; low fps is intolerable to me.
My comparison 50 vs 90 was a real world test with the same settings. No exaggeration there.
I think a low 90fps is ridiculous on an old game.
It should be much higher than that.
Lastly, you are digging a bigger hole by putting words in my mouth.
Not once did I say "there is 2x per-core performance on Intel's side."
I don't know how many times I have to repeat it: you say there is only a 20% difference in games, while I've repeatedly said the difference can be much higher in older titles and other CPU-limited titles.
Yet you seem unable to accept facts, and you always pull out excuses about Intel's compiler practices from a decade ago, or point the finger at me and say I'm pulling information out of my ass... lol
Here's another video: as low as a ~33% difference, and up to a 62% difference in one spot that I saw, 183 vs 86.
There's no shady Intel compiler going on here either, so that is not an excuse you can use.
Anyway, I'm not making wild exaggerations that Intel is twice as fast, just proving that the difference is more than the 20% you claim.
You seem to have tunnel vision and/or are blinded by love for AMD since you won't accept anything else that proves my point.
I won't continue beating a dead horse, since nobody can prove you wrong with your hatful of excuses.
"His system was 50~fps to my 90~ fps, don't recall exact numbers but it was nearly double."
To that newest video... have you noticed that those systems are not by any means comparable on the software level?
Read the RAM/memory information there. It means the person used a different OS installation and likely different settings.
Then, at that 183 fps moment: have you realized that it is not a benchmark, but manual runs, and that they are even delayed relative to each other?
And did you see the frametime-variance graph, and the stutter on the 8700K video?
And what's the chance that this capture was done via Quick Sync on the Intel platform?
From what can be seen in that video, it should not be called a benchmark. And your non-comparable 50-to-90 fps comparison has the same weight.
2 of the 3 games you listed are likely affected by the rigged compiler (anything pre-2008 is guaranteed), and I wouldn't be surprised if even newer games like Fallout 4 are affected as well; you seem to think people update their software, but a lot of the time they don't.
If you're getting 50% fewer frames on a Ryzen CPU, you have a software problem. Even in old games, Ryzen's single-thread performance isn't low enough, and its memory latency isn't high enough, to account for that large of a difference.
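A back-of-envelope check of that claim, with assumed figures (the 10% per-clock and 15% clock deficits below are illustrative, not measurements): even stacking both deficits multiplicatively leaves the gap far short of 2x.

```python
def st_perf_ratio(ipc_ratio: float, clock_ratio: float) -> float:
    """Rough single-thread performance ratio: per-clock throughput
    ratio multiplied by the clock-speed ratio."""
    return ipc_ratio * clock_ratio

# Assumed illustrative figures: Ryzen at 90% of Intel's per-clock
# speed and 85% of its clock speed in some older title.
ratio = st_perf_ratio(0.90, 0.85)  # 0.765 -> roughly 24% slower, not 50%
```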
I'd bet $5 that almost all of your examples would run significantly better if they were recompiled with something halfway modern.
Borderlands 2 is a really bad example, because most of the trouble with it stems from the poor, or I'd even go as far as to say broken, PhysX implementation. Look at this thread, for example: https://forums.geforce.com/default/...e-fix-borderlands-2-pre-sequel-gtx-970-physx/ People there have Haswell i7 chips and high-end Maxwell cards, so Borderlands 2 should be no problem, yet they have exactly the same issues mentioned previously in this thread.
Crysis 1 is also a bit of a difficult game to use for comparison, given the need to mess around getting the 64-bit version to run, plus extensive configuration. That video doesn't state any kind of configuration: is it using the 32-bit version? Is it using a patched exe, etc.? As such, those results are dubious at best. It shouldn't have that low a result, but I also know from my own personal experience that Crysis behaves rather inconsistently on my hardware.
I do have a possible reason why performance in some older games is poor on Ryzen when paired with modern NV GPUs. Now, this is purely speculative, but I recall that when Ryzen launched, performance paired with nVidia GPUs wasn't very good, and it took several driver releases (in addition to microcode updates, but those were for the latency issues) before we saw normalised performance. What's to say nVidia haven't updated the driver stack to support older titles on Ryzen, leaving those titles to rely on brute force? This would of course need some research to confirm it really is a reason why some older games aren't running well on Ryzen, at least with nVidia GPUs. It would also require tests with AMD GPUs to see whether these extreme cases show the same performance margins.
Whatever the case, those are extreme examples with questionable results. We all know Ryzen performs similarly to Haswell in terms of IPC, so we'd expect Coffee Lake to be up to 20% faster on average in IPC anyway.
(Edit: Looking more closely at that Crysis benchmark video and the RAM usage shown in the overlay, it appears to be the 32-bit version of the game, since RAM usage remains under 4 GB. It could just be the 64-bit fixed version coincidentally using less than 4 GB, but nevertheless, there's a lack of important information.)
They may run better with optimized compiler but we'll never know.
I know for a fact that Crysis 1 does not nerf performance for not having the GenuineIntel flag set.
Crysis 3 shows a performance differential of anywhere from 25-30% as well at 720p.
Also, take Fortnite as an example: Intel is consistently 30-40% faster than the 2700X.
That video shows the Intel CPUs running overclocked at 4.8 GHz on all cores (default all-core turbo for the 8700K and 8600K is 4.3 GHz and 4.1 GHz, respectively) while the AMD CPUs are running at stock. Not exactly a fair comparison.
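Putting rough numbers on that clock gap (the stock all-core turbos are the ones quoted just above):

```python
def clock_advantage_pct(oc_ghz: float, stock_ghz: float) -> float:
    """Percentage clock gain of an overclock over the stock all-core turbo."""
    return (oc_ghz / stock_ghz - 1) * 100

adv_8700k = clock_advantage_pct(4.8, 4.3)  # ~11.6% over the 8700K's stock turbo
adv_8600k = clock_advantage_pct(4.8, 4.1)  # ~17.1% over the 8600K's stock turbo
```

So a meaningful slice of any gap measured against stock Ryzen disappears once clocks are normalised.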