Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Apr 30, 2021.
The PCIe 4.0 implementation could have an effect, though personally I highly doubt it.
Ah, I somehow did not insert them. Added!
I don't know if using Intel or AMD makes any difference, but will your face be red when it turns out it doesn't.
Framebuffers weren't large enough to justify SAM before now, plus AMD is looking for any edge over nvidia. Memory bandwidth is also a factor, now that we are smashing through 1 TB/sec.
@Hilbert Hagedoorn : Did you by chance record power draw data during tests too?
It may be interesting to see the correlation between the change in fps and the change in power draw.
Or at least GPU utilization. Does that big fps difference in AC: Valhalla result in increased power draw/GPU utilization, or is it practically "free" performance?
And one extra thing to consider, since AC: Valhalla shows such a big difference: would it be possible to take just one card from each side, AMD/nV, and test it with PCIe 3.0, SAM OFF/ON?
Could be interesting to see if PCIe 3.0 SAM ON delivers better performance than PCIe 4.0 SAM OFF.
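The "free performance" question above boils down to comparing perf-per-watt with the feature off and on. Here is a minimal sketch of that comparison; all fps and wattage figures are made up purely for illustration, not measured data from the review:

```python
# Hypothetical illustration: deciding whether an fps gain is "free"
# by comparing perf-per-watt before and after enabling SAM/ReBAR.
# All numbers below are fictional, not measurements.

def perf_per_watt(fps: float, watts: float) -> float:
    """Frames per second delivered per watt of board power."""
    return fps / watts

# Example: SAM off vs. SAM on in one title (fictional figures)
off = perf_per_watt(fps=88.0, watts=280.0)
on = perf_per_watt(fps=105.0, watts=285.0)

gain_fps = 105.0 / 88.0 - 1.0   # raw fps gain
gain_eff = on / off - 1.0       # efficiency gain

# If efficiency rises roughly in step with fps, the gain is
# "practically free"; if efficiency stays flat, the extra fps
# cost a proportional amount of power.
print(f"fps gain: {gain_fps:.1%}, perf/W gain: {gain_eff:.1%}")
```

With these made-up numbers, a ~19% fps gain for only 5 W more power works out to a ~17% efficiency gain, i.e. mostly "free".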
I'd be interested to see any change in power draw too, good point @Fox2232.
Valhalla doesn't really seem to cause the GPU to draw that much power, though it'd be interesting to see a comparison, since ReBAR looks to be a bit heavier on the GPU and could still mean a slight increase in power draw. That's mostly for the few games seeing higher-than-average gains, plus whatever is going on with Valhalla, since it's at ~20% while the other big gainers top out around 10%, hah.
Forza 4, Borderlands 3 and I think Red Dead Redemption 2 in addition to Valhalla.
(Guess the whitelist NVIDIA has is a handy way to quickly see which titles scale really well, although then there's Watch_Dogs Legion and its stuttering as a bit of an exception.)
This is an interesting article comparing the 10900K to 5950X with Cyberpunk.
And single-rank vs. dual-rank RAM.
Intel responds better to higher-speed RAM "in this game".
It would be interesting to see the results with other games and with/without ReBAR.
It's possible there is nothing more to give, so it won't make a difference, but what if ...
Intel always responds a little better to frequency, and AMD to timings.
It has nothing to do with what he said, though.
Yeah, his comment was not well phrased; it deserved a rebuttal.
It was the other way around a few generations ago.
Thanks for the interesting article.
For the pages where the NVIDIA and AMD cards have separate graphs, it would be nice if they at least used the same scale, so the cards' performance could be compared visually.
Resizable BAR doesn't make any more difference than e.g. HAGS, bar one game where it's 10-15% in the best-case scenario.
So the best case of the best case is 15%.
It looks like much ado about nothing: you're getting 1 fps when you're below that critical 60 fps barrier, in a couple of whitelisted games where it doesn't reduce performance.
7 games with the 6800 XT at 1440p: 2.9% difference on average.
Without Valhalla it's 1.8% on average.
So much hype for nothing. And you can't use HAGS with SAM enabled, I hear, so you're trading a 1% performance gain for 2-3%.
Now that is a breakthrough.
Rebar has never been about "just the PCIe bus". It's never been a simple matter of setting the BIOS up for Rebar at all; had it been that simple, we'd have been doing it for years. What it does is allow the CPU to access a lot more than 256MB at a time of the onboard GPU RAM, across the PCIe bus (PCIe 4, the faster the better). Not only do the drivers have to be made Rebar-aware, they also have to use Rebar as it is intended. That takes a lot more than simply rejigging the system BIOS settings: the entire GPU hardware, GPU BIOS, and the drivers have to be made to use Rebar, or you get little to nothing from it. It took nVidia ~18 months after AMD to ship its first PCIe 4 GPUs, and unlike AMD's SAM, Rebar was never intended by nVidia as a feature in RTX-3k. Additionally, the purpose of a memory cache is to put the most-used data where the CPU can get to it the quickest, which is why AMD built what it calls the Infinity Cache into its RX-6k hardware; nVidia has no equivalent atm. I hope that nVidia will copy AMD in this regard and that the two companies can agree on a D3D and/or Vulkan API standard, because in the future things could really pop out of this concept.
The people who once said that AMD was simply shooting the breeze because "anybody" could do Rebar just by changing BIOS settings have been proven wrong, again. But there's one more element that will come into play: game engines themselves will need to be written to address more than 256MB of data at a time from the GPU across the PCIe 4 bus (or faster), or from the GPU to the CPU, and then we could very well see much more substantial improvements in performance, imo.
Basically this, plus things like HAGS, means the GPU/CPU subsystem gets rid of a lot of bottlenecks, and both can access memory space better, including NVMe I/O etc. These gains seem insignificant now, but they will certainly matter by the middle of this console generation.
I cannot see, for example, how the MS Velocity architecture could even work properly without HAGS + ReBAR.
I think Guru3D got a bad silicon-lottery sample; mine was faster out of the box and overclocked much higher.
This doesn't explain the difference.
Thanks for this extended test review. Makes me consider reinstalling Assassin's Creed: Valhalla.
Not a huge performance impact, it seems, but in the era of raytracing every little bit of performance counts. Glad we're getting a potential source of bottlenecks out of the way.
Cheers for actually doing a comparable benchmark in 4K. It was annoying seeing benchmarks only in 1080p.