Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Aug 28, 2018.
Games usually need more GHz than cores, with a few exceptions.
Well, considering the 7700K is 4 cores at 4.5GHz and my 8700K is 6 cores at 4.7GHz (all-core turbo), how much does the bottleneck shrink due to the extra cores? Has anyone tested this? Now that more and more people are getting 6/8-core systems, it would be interesting to know how much of a jump the extra cores alone make.
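For a rough sense of scale, here's a back-of-envelope sketch (purely illustrative; it assumes perfect linear scaling across cores, which games almost never achieve, so the multi-thread number is an upper bound only):

```python
# Back-of-envelope CPU throughput comparison between the two chips above.
# Assumes every core is fully used and scaling is perfectly linear --
# real games scale far worse, so treat this as a ceiling, not a prediction.

def aggregate_ghz(cores, all_core_turbo_ghz):
    """Total clock cycles available per second (in GHz) across all cores."""
    return cores * all_core_turbo_ghz

i7_7700k = aggregate_ghz(4, 4.5)   # 18.0 GHz aggregate
i7_8700k = aggregate_ghz(6, 4.7)   # 28.2 GHz aggregate

print(f"Single-thread uplift:       {4.7 / 4.5 - 1:.1%}")
print(f"Ideal multi-thread uplift:  {i7_8700k / i7_7700k - 1:.1%}")
```

So a game pinned to one thread sees only a ~4% clock bump, while a hypothetical perfectly threaded game could see up to ~57% more raw throughput; real titles land somewhere in between.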
I think a bit of perspective is needed with these results, assuming they are from the 2080. If they are accurate, then I think Nvidia has done a pretty amazing job to hit that level of performance while only using half of the overall die for shaders. I don't think the current benchmarking suites are the way to judge whether these cards are worth it.
Had they chosen to make one massive set of shader cores, then I'm sure it would have delivered massive benchmark numbers. But then we would lose one of the biggest improvements in graphics fidelity, ray tracing, and, an even bigger deal in my opinion, DLSS. The latter is something I'm not sure AMD will have an answer to, at least not anytime soon.
TL;DR: I wouldn't base a decision on whether or not to purchase the new cards on the older benchmarks. You should buy them if you want improved graphics fidelity and access to DLSS, which I'll reserve judgement on until we see some reviews and benchmarks, but it looks impressive.
Right now, not having ray tracing is a non-issue because there are zero games that use it. And if this tech ever sees adoption, these cards will be long obsolete. Nvidia invested in this, and now they are transferring the cost to potential buyers. I'm not even arguing against that, but this time I don't think it's a worthy upgrade from what I have now and what I play.
Is the game the bottleneck, or Nvidia's driver? Remember, Titan V and Turing both use non-superscalar execution within the SM, and warp threads can now run with independent scheduling; there is a much bigger reliance on thread-level parallelism with these architectures than with previous ones.
Really depends on the games you play. From what I've seen, Ubisoft games like AC: Origins are starting to use more than 4 threads to a great extent.
Going just by the name, I think it's real, but who knows what form the system was in.
This is misleading, no offense.
Those were tested on a 7900X; the "supposed" 2080 benchmark was done on a 7740X, overclocked or not we don't know. Meanwhile:
The highest score with a 7740X overclocked to 5.1GHz (+600MHz) and a 1080 Ti overclocked was 9785
The highest score with a 7900X overclocked past 5GHz and a 1080 Ti overclocked was 14251
Your 1080Ti link with a 7900X overclocked 300MHz and a 1080 Ti overclocked was 11146
Regardless, it's also 4 cores vs the 10 of the 7900X.
To finish, the best score for the 8700K overclocked to 6.9GHz with a 1080 Ti is 13345.
Can't really take the benchmark seriously, even if true, with that CPU.
I tested with my Ti and compared the FPS on tests 1 and 2, and they are the same. And I have a 4770K @ 4.4GHz. We should not compare the overall score, but the GPU score.
A CPU can bottleneck GPU performance regardless of the benchmark calling it a GPU score; 3DMark benchmarks usually have a separate GPU/CPU benchmark and a CPU-only benchmark.
Some single-threaded MMOs like Guild Wars 2 and ESO will not let GPU usage reach 99/100% with the FPS unlocked because the CPU core speed is not enough. If I overclock my 2700X and force the clock, GPU usage goes up and the game performs better; otherwise it remains at 60% usage most of the time.
You don't even need to overclock: just using the Ryzen Power Plan results in lower boost clocks vs the Balanced plan, and with it GPU usage is also lower.
EDIT: Example of GPU scores:
11188 GPU score with 2152MHz core (7740X)
12832 GPU score with 2141MHz core (7900X)
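To quantify the point: even the "GPU score" moves with the CPU here, despite near-identical GPU core clocks in both runs. A quick check on the two scores above (illustrative only, using the figures as posted):

```python
# GPU scores from the edit above, both at roughly 2150MHz core:
gpu_score_7740x = 11188  # on the 4-core 7740X
gpu_score_7900x = 12832  # on the 10-core 7900X

diff = gpu_score_7900x / gpu_score_7740x - 1
print(f"GPU score is {diff:.1%} higher on the 10-core CPU")
```

A ~15% difference in a supposedly GPU-bound sub-score, with the GPU clock held constant, supports the claim that the CPU still contaminates the comparison.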
Completely agree, especially as you're already on a 1080 Ti. Not sure I conveyed my point very well, but that's what I was trying to say. If you only want to play current games without ray tracing (when available) or DLSS, then there is no reason to upgrade. You have to want those two features, or be in the market for a new card because you're using old kit.
So basically, when I buy a GTX 1080 Ti for 499 in my local shop, I can do pretty much the same as with an RTX 2080 for 699.
2025MHz?! The boost clock is 1800MHz on the FE edition, which is overclocked already. Haha, the most ridiculous generation EVER!
Well, technically the 1080 Ti's boost clock is also specified as 1582MHz, you know?
But we all know that those spec boost clocks are nothing like what cards actually hit in daily usage in the average guru's rig.
If those results are true for an RTX 2080, I have just checked my Time Spy submissions, and it is slower than a 1080 Ti at the same clocks, which is kind of disappointing. Take it as you will, but I think the RTX 2080 Ti won't be anything special over a Pascal 1080 Ti: maybe 10%~25% more performance for ~$500 more, which will maybe translate into 10fps gains in today's games, with no ray tracing. Nothing to write home about...
Do we know anything about DX12 performance yet? Pascal did notoriously worse in DX11, as I recall.
DX12 performance is entirely on the developer. Turing features a number of changes that allow for better thread management, but if the developer doesn't target the architecture's advancements, then it doesn't matter.
I don't know what you're talking about when you say Pascal did notoriously worse in DX11.
You have a full OC??? My PC:
Core i7-8700K @ 4.9GHz
GTX 1070 @ 1960MHz core / 8700MHz memory
Crap RTX 2080.
But... but... but... look at the reflections in the people's eyes!! I can TOTALLY see several enemies just looking at one of their eyes!!!