Why do you not use Battlefield 1 as a game benchmark? It works CPU cores to death. Most people I talk to that have a high-end GPU like the 1080 Ti say their CPUs, along with mine, are always at 90-100% usage. Would be fun to see how this rips through BF1.
I assume the question is, why I do not ... BF1 is fairly difficult to use as a reliable test. If I create a 30-second bench run, then in one test the FPS is lower and in another higher, by margins of up to 5%. This is mainly due to random explosions like grenades and artillery hitting close by. Once that happens, the FPS drops significantly, which affects the FPS measurement fairly extensively. So the main problem is that the explosions are random and they can drop the FPS big-time. I am still looking at some other, friendlier scenes to use, but then again, seeking out a smoother scene just for the sake of FPS results defeats the purpose of objective benchmarking. And lastly, EA has a tendency to release new game patches that screw up all results with faster/slower FPS. Graphics cards you can fairly quickly swap out and test; complete test systems with CPUs, however, eat away days of time. All of this makes BF1 very time-consuming and complicated to test.
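To illustrate why that matters: the run-to-run spread described above can be larger than the gap between the CPUs being compared, which is exactly what makes a single run unreliable. Here is a minimal sketch of checking that spread — the FPS numbers are made up for illustration, not real BF1 measurements:

```python
# Hypothetical average-FPS results from five identical 30-second bench runs.
# Real numbers would come from a frametime logger export.
from statistics import mean

runs = [112.4, 117.9, 111.0, 116.2, 109.8]

avg = mean(runs)
spread_pct = (max(runs) - min(runs)) / avg * 100

print(f"mean FPS: {avg:.1f}")
print(f"run-to-run spread: {spread_pct:.1f}%")
# If the spread between identical runs is ~5% or more, a single run
# cannot reliably separate two CPUs that differ by only a few percent.
```

With numbers like these, any two products within roughly 5% of each other are indistinguishable from noise, which is why reviewers either average many runs or pick a more repeatable scene.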
Touché, my friend. I guess I forgot about that game not having a friendly built-in benchmark. I just wonder if BF1 would use all the cores, and at what percentage they would sit.
BF1 does thread well, but it does not utilize 100% of the cores, not by far (from memory). I can do a quick video recording of AB showing that if you like; I guess that would be interesting for many to see. I still have the 12/24-core part installed.
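For anyone who wants to eyeball per-core load themselves without Afterburner, here is a rough sketch that samples Linux's `/proc/stat` twice and prints each core's utilization over a one-second window (the sample interval and output format are just illustrative choices — on Windows you would use a tool like Task Manager or HWiNFO instead):

```python
# Sketch: per-core CPU utilization on Linux by sampling /proc/stat twice.
import time

def read_cpu_times():
    """Return {core_name: (busy_jiffies, total_jiffies)} from /proc/stat."""
    times = {}
    with open("/proc/stat") as f:
        for line in f:
            parts = line.split()
            # Per-core lines are "cpu0", "cpu1", ...; skip the aggregate "cpu" line.
            if parts and parts[0].startswith("cpu") and parts[0] != "cpu":
                vals = list(map(int, parts[1:]))
                idle = vals[3] + vals[4]          # idle + iowait
                total = sum(vals)
                times[parts[0]] = (total - idle, total)
    return times

before = read_cpu_times()
time.sleep(1.0)
after = read_cpu_times()

for core in sorted(before, key=lambda c: int(c[3:])):
    busy = after[core][0] - before[core][0]
    total = after[core][1] - before[core][1]
    pct = 100.0 * busy / total if total else 0.0
    print(f"{core}: {pct:5.1f}%")
```

On a 12c/24t part you would see 24 logical cores listed, and with a game like BF1 running you would expect many of them loaded but few pinned at 100%, matching the observation above.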
If you have the time, that would be interesting to see. Maybe that and another game or two that utilize more cores, or use a crap-load of CPU power. Just for giggles lol.
Well, if I recall correctly, when the first 3- and 4-core CPUs came out, hardly any games utilized more than 2 cores. So when we saw the first games use 4 cores or more, it was pretty cool. And we saw that within a year or two of the first few true quad-cores. We have had 6-10 core CPUs for a while now, and with the mainstream Coffee Lake i7 coming out as a 6-core part, I think we will see many more games using more cores, with lower per-core usage, to make games more fluid. High core usage really starts causing those stutters. That's why I'm interested in how the extra cores actually do in a game like Battlefield compared to, say, a 4-core i7 — even a high-clocked i7 7700K at, say, 5 GHz.
Anyone seen the Epyc score (7252 CB) over at HWBOT for Cinebench R15? Crazy fast score. http://hwbot.org/submission/3620709_blueleader_cinebench___r15_2x_epyc_7601_7252_cb
As for the 7700K, someone might consider that CPU not worth it because its socket/chipset is nearing EOL and because it is already close to 100% utilization in some current games.
Yeah... and that was 6 years ago. Back then, people were right - that wasn't a wise choice for a gaming CPU. By today's standards, it's just adequate (when you consider the IPC is worse than modern CPUs). Sure - nobody is arguing you'll have a bad gaming experience on TR, but getting TR with gaming as the #1 priority is a poor decision unless you have money to burn. I assure you, you could go back to dual channel and not notice the difference. Depending on what framerate and resolution you expect to play at, I bet you wouldn't notice the difference between quad and single channel either. The extra memory channels are really only useful in highly parallel tasks, which games are not. If you want a TR then great - AMD could sure use your business and market share. But I think you'd be better off with a Ryzen 7.
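The channel argument above can be sanity-checked with quick arithmetic: peak theoretical bandwidth is channels × transfer rate × 8 bytes (each DDR channel has a 64-bit bus). The DDR4-3200 speed below is just an illustrative example, not a claim about any specific Threadripper configuration:

```python
# Back-of-the-envelope peak memory bandwidth: channels * MT/s * 8 bytes.
def peak_bandwidth_gbs(channels, mt_per_s):
    """Theoretical peak in GB/s for a given channel count and DDR speed."""
    return channels * mt_per_s * 1e6 * 8 / 1e9

dual = peak_bandwidth_gbs(2, 3200)
quad = peak_bandwidth_gbs(4, 3200)
print(f"dual channel: {dual:.1f} GB/s")   # 51.2 GB/s
print(f"quad channel: {quad:.1f} GB/s")   # 102.4 GB/s
```

Quad channel doubles the theoretical ceiling, but since games rarely saturate even dual-channel bandwidth, the extra headroom mostly benefits bandwidth-bound parallel workloads, which is the point being made above.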
There won't be an 8-core Threadripper. The Threadripper parts are called Ryzen Threadripper... and are basically just higher-core-count versions of the Ryzen CPUs on a different chipset. There is no way AMD would cannibalise the sales of their already-new Ryzen CPUs with an almost identical 8-core Threadripper part.
Yes there will be; that's what the 1900X is. It doesn't cannibalize the AM4 products, for several reasons:
1. If it did, then you could say the same about i9s and Xeons.
2. The CPU + X399 motherboard is significantly more expensive than an 1800X + X370 board. When you consider the price difference of the 1700 + B350 board, that separates them even further.
3. The 1900X will likely operate at a lower voltage and have better thermals, but it will also likely have worse idle wattage.
4. The 1900X has quad-channel memory support; the 1800X only has dual.
5. The 1900X has a much greater L3 cache.
6. The 1900X has 60 on-chip PCIe lanes; the 1800X only has 16.
7. The 1800X can be used in ITX builds; I'm not sure if we're even going to see micro ATX for socket TR4.
Umm, AMD already announced an 8-core part, the 1900X. http://www.guru3d.com/articles-pages/tech-preview-ryzen-threadripper-1900x-1920x-and-1050x,1.html
LOL, even Nvidia seems to be happy that AMD is back! http://www.tweaktown.com/news/58728/nvidia-welcome-back-amd-threadripper-launch/index.html
That is kind of funny, though not too surprising. Nvidia seemed to get along relatively well with AMD's CPU division, albeit things went a little sour once AMD bought ATI (which was the death of the nForce chipsets). AMD's CPUs help Nvidia's sales, so ultimately they're more happy about AMD's success than worried. On a side note, considering they're direct competitors with AMD's graphics division, Nvidia surprisingly doesn't seem to start many feuds, legal or otherwise. Nvidia is a very arrogant and selfish company, but they're not hostile toward AMD. However... I can't say the opposite is true - AMD seems very salty and bitter about Nvidia. Meanwhile, I'm aware Nvidia does not get along that well with Intel, and vice versa. Nvidia created the ARM-based Tegra series because Intel wouldn't allow them to use the x86 license. Many modern Intel boards support CrossFire, but not SLI. Both companies have tried suing each other. Nvidia attempts to steal Intel's server market share, and Intel attempted to steal Nvidia's mobile market share.
Here you go: Battlefield 1 running on a GTX 1080 @ 2560x1440 with a Ryzen Threadripper 1920X 12c/24t to demonstrate the thread distribution, allocation, and utilization. Btw, the fairly low FPS is due to the video encoder recording at 100 Mbps.