Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Dec 10, 2020.
can we have screenshots of that
Seeing 40% CPU utilization on a 16C/32T CPU. I would like to see 1080p resolution and framerate scaling for:
4C/4T ; 4C/8T ; 6C/6T ; 6C/12T ; 8C/8T ; 8C/16T
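If anyone wants to approximate those configs themselves, one option is pinning the game to a subset of logical CPUs. A minimal sketch with psutil, assuming the Windows process name (affinity masking only approximates a BIOS core/SMT toggle, so treat results as ballpark):

```python
import psutil

def pin_process(name, logical_cpus):
    """Pin every process matching `name` to the first `logical_cpus` logical CPUs."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == name:
            proc.cpu_affinity(list(range(logical_cpus)))
            print(f"pinned {name} (pid {proc.pid}) to CPUs 0-{logical_cpus - 1}")

# e.g. roughly emulate 4C/8T on an SMT-enabled chip
# ("Cyberpunk2077.exe" is my guess at the process name)
pin_process("Cyberpunk2077.exe", 8)
```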
@cucaulay malkin : https://forums.guru3d.com/threads/n...ncl-cyberpunk-2077.435605/page-3#post-5864704
(As an example.) But it is, as always, a pointless discussion. Some see differences; some would swear on the grave of their beloved that there are none.
Seems the game does not benefit much from higher core count CPUs.
Even the top recommended spec is a 4790/3600.
A bit surprised.
Nice one. If I run into trouble further in, I'll take a look at the Cascaded Shadows Resolution.
5800x can't keep a consistent 60
5600x and 3800xt do very poorly
Cheers for this. You see, folks, it's a CPU hog.
it's possible it gets patterns confused
but I thought you were gonna point out some bigger difference
for a 2x performance uplift from pic1 to pic2, I'll take that and won't complain
9900k falls behind too with 2666 ram
I suspect this could be what's keeping 6c/8c Ryzens behind: memory latency
the huge gaming cache will run out at some point too
3600 vs 10400 is a massacre, 30% faster on Intel
Slow != bad. Like with Raised by Wolves: quite a few people here wrote that it was bad, and I had low expectations as a result. I even postponed watching it until there were no good shows around.
It is slow paced, but I enjoyed it from the first moment.
The time to finish the DLSS pass is affected by the card. A 3090 can finish it in a fraction of the time a 3070 takes, due to the number of available tensor cores.
That ending tho.. lol
Thanks for the benchmarks
Who in the world is playing @ 1280x720? Never mind, only an Intel fan would post this crap. Come on, no one is playing at 1280x720 with a high end Intel/AMD CPU; that is just ridiculous.
this is absolutely not true and I don't know where you'd get an idea like that, because it's never been the case in any of the DLSS games I've seen benched.
scroll down to dlss results
the 2070 Super has a bigger perf uplift from DLSS than the 2080 Ti
all Ampere cards have an equal number of tensor cores per SM
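to put numbers on that: Turing runs 8 tensor cores per SM, Ampere 4 beefier ones, so raw counts don't line up with the uplift people expect. a quick tally (SM counts are from the published specs as I recall them, so double-check):

```python
# tensor cores = SMs x tensor cores per SM (8 on Turing, 4 on Ampere)
GPUS = {
    "RTX 2070 Super": (40, 8),  # Turing
    "RTX 2080 Ti":    (68, 8),  # Turing
    "RTX 3070":       (46, 4),  # Ampere
    "RTX 3090":       (82, 4),  # Ampere
}
for gpu, (sms, per_sm) in GPUS.items():
    print(f"{gpu}: {sms * per_sm} tensor cores")
```

note the 2080 Ti still has the biggest raw count, yet the 2070 S shows the bigger DLSS uplift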
Yep, they could have thought of something better. Especially since that thing is actually a copy.
Resolution does not really matter much.
It says, for example, that even at 720p a 6C/6T CPU like the 9600K averages 60fps and dips down to 35fps. Higher resolutions can only get worse from there.
But the 6C/12T 10400F (even while clocked a bit lower) delivers an average of 74fps and dips to 46fps.
So on 6C/6T, around half of the frametimes are worse than 16.67ms (the 60fps target), and the dips are quite bad.
Meanwhile on 6C/12T, most frametimes are better than 16.67ms, and the occasional dips to 46fps may be quite acceptable for many.
But the first really good experience starts with the 5800X/10700K. Those can deliver a solid 60fps and above.
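For reference, the 16.67ms figure is just the per-frame budget at 60fps; a quick conversion of the dips quoted above:

```python
# frametime (ms) = 1000 / fps
def frametime_ms(fps):
    return 1000.0 / fps

for label, fps in [("60fps target", 60), ("9600K dip", 35), ("10400F dip", 46)]:
    print(f"{label}: {frametime_ms(fps):.2f} ms")
# 60fps target: 16.67 ms
# 9600K dip: 28.57 ms
# 10400F dip: 21.74 ms
```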
@cucaulay malkin DLSS has been discussed to death. I have no reason to repeat the same discussions again. You either know or you don't.
I don't do reviews
blame pcgh for testing cpus in cpu limited scenarios
Look at the performance difference between 3200 and 3800 in that pcgh test. Drop 3800 on the 5800X and it will easily hold a consistent 60.
thanks, that's what I suspected too
memory latency on ryzen is having a bigger impact than usual
meanwhile 10700K on 2933 is fine, 9900K on 2666 takes a hit
very cpu intensive game if memory has such a big impact
my 4133 c16 sticks may do a fine job here
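to put rough numbers on the latency angle: first-word CAS latency is cycles over the actual clock (DDR transfers twice per clock). quick sketch, where the CL16 on 2666 is an assumed typical timing, and this ignores the Infinity Fabric hop Ryzen adds on top:

```python
# first-word latency (ns) = CL / (MT/s / 2) * 1000
def cas_latency_ns(mt_s, cl):
    return cl / (mt_s / 2) * 1000

KITS = {
    "DDR4-2666 CL16": (2666, 16),  # CL16 assumed as a typical timing
    "DDR4-3600 CL15": (3600, 15),
    "DDR4-4133 CL16": (4133, 16),
}
for kit, (mt_s, cl) in KITS.items():
    print(f"{kit}: ~{cas_latency_ns(mt_s, cl):.1f} ns")
# DDR4-2666 CL16: ~12.0 ns
# DDR4-3600 CL15: ~8.3 ns
# DDR4-4133 CL16: ~7.7 ns
```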
Memory will be my next upgrade, seeing as I get by on my current card and have no issues with DLSS Balanced; I'll eye some nice 4600MHz sticks in March.
Currently 3600 c15.
3600 c15 is more than fine for intel's ring bus cpus
especially if you oc the ring bus too
Memory is the weak point in my rig, and those sticks will be super future proof. I originally had 4400MHz in this rig.
turning the Cascaded Shadows Range and Resolution down to Low makes a HUGE difference in performance, as some have said
what are cascaded shadows?