Discussion in 'Frontpage news' started by Hilbert Hagedoorn, May 5, 2022.
Enjoy your cinematic experience in the only RT game that actually looks a tiny bit better with it.
I did, very much, even on a 3060 Ti. Wouldn't call 55 fps that cinematic, but I guess it'll have to do on a mid-range card.
Don't confuse "runs at 100 fps" with the quality of RT... those are two different things.
And DLSS... it's a Buitoni pizza compared to a ***** restaurant when you look closely at the textures...
Right now, no green or red GPU with RT capability makes me think: "wow, I need this GPU so bad."
And most of the time, RT in games looks bad (a bit like ray tracing from 10 years ago): too reflective, too shiny, or too dark.
These are just the first steps of RT in games, and there are still more steps to go before it's good enough.
The only sure thing is that one day we will all have really good RT cards... it's just a question of time.
The interesting thing about Cyberpunk, though, is that DLSS is done very well and adds a sizable amount of FPS with RT on. So even DLSS alone is a factor that makes AMD cards less useful. Though they are attempting something similar with FidelityFX, I suppose.
Nah, it looks fantastic. Once you get used to RTGI, you'll see the difference when you go back to traditional lighting.
I find the difference is bigger when you get accustomed to ray-traced GI and AO and then switch them off, rather than just comparing the odd screenshot.
Keep telling yourself DLSS at this point is the same as it was two or three years ago; fine by me, I don't care.
Look at the lows in a game made out of 1 m × 1 m blocks; they're anywhere between 10 and 50 fps.
For a card costing more than €2000, the performance is still pretty bad in my opinion.
Nowhere did he write that.
EDIT: I can see that DLSS in Cyberpunk 1.5 still has problems with grids and fences, which render a bit oddly and sometimes flicker.
Cool, let's get everyone buying 3090 Ti cards so they can enjoy this 'reasonable' RT experience.
There are 10 other options between the 3090 Ti and the 6900 XT for RT.
6900 XT owners nitpicking 3090 RT or DLSS is hilarious when their $1000+ card doesn't offer anything comparable.
The performance hit, in exchange for some perceived improvement in reflections and such, is just atrocious to me, when you can get a far bigger image-quality uplift from so many other things like texture quality, resolution scaling, etc.
Of course, you bring logic and perception to the table for this.
But to the guys defending RT's necessity: could you ask yourselves and give an honest answer?
Isn't all this just a justification for owning a graphics card with a green logo stuck to it?
You do realize you're talking about the fastest gaming graphics card on planet Earth?
For me, the point is not that it's the fastest gaming card on the planet; the point is that when turning ray tracing on, something else needs to be turned off, and DLSS usually has to be used to make up for the high performance hit.
No RT is fine, as long as the card that can't run RT costs a lot less.
I'll believe it when the reviews come out, and even then it's not 100%. I get a 22634 graphics score, 20384 CPU score, and 22265 overall score in Time Spy, and a little over 15200 in Port Royal with my little 12900KS/3090 Ti DDR5 setup, and that isn't my best run; it's just the only one I had handy to compare. So I'm not really impressed with the XT or the X3D.
Sure it can: if the card is limited by memory bandwidth at 4K (which it is, with only a 256-bit bus), then a 20% increase in memory bandwidth would give exactly that.
People have done a bit of memory OC on the 6000 cards, and they never got a linear performance increase with memory clock.
I don't get nearly as big an improvement in performance by OC'ing my memory as the 3090 Ti gets from its higher-clocked stock memory either. Error correction will prevent a linear gain from anything more than a very small bump in frequency... but higher-quality chips that run a lot faster at stock won't have the same issue.
It will likely be a combo of different things giving the 20% performance increase, but a substantial increase in memory bandwidth will surely be a big part of it.
I doubt that memory is the bulk of that 20% performance increase. It's just a 12.5% increase in memory bandwidth.
I have never seen GPU performance scale linearly with memory clocks. We'll be lucky to get a 6% performance increase going from 16 to 18 Gbps.
If those numbers are correct, and there is a 20% performance increase, there has to be something more to these 6x50 cards.
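To put numbers on the bandwidth debate above, here's a minimal sketch. The 256-bit bus and the 16 and 18 Gbps data rates are the figures mentioned in the thread; the rest is plain arithmetic:

```python
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_bits / 8 * gbps_per_pin

old = bandwidth_gbs(256, 16.0)  # 256-bit bus at 16 Gbps -> 512 GB/s
new = bandwidth_gbs(256, 18.0)  # same bus at 18 Gbps   -> 576 GB/s
uplift = (new - old) / old * 100  # 12.5%, matching the figure quoted above
print(f"{old:.0f} GB/s -> {new:.0f} GB/s (+{uplift:.1f}%)")
```

Which is why a 12.5% bandwidth bump alone can't explain a 20% performance gain unless the card was severely bandwidth-starved to begin with.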
55 fps on average is quite low, in my humble opinion, even for a single-player offline game.
55 fps feels the same as 60 with a G-Sync monitor. For a single-player game, that's more than fine, I would say.
Depends on the lows. With 55 fps on average, there's a good chance the most demanding parts of the game don't feel smooth.
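On averages versus lows: average fps hides frame-time spikes. A quick sketch of the conversion, using a made-up frame-time capture purely for illustration:

```python
def frame_time_ms(fps: float) -> float:
    """Milliseconds available per frame at a given fps."""
    return 1000.0 / fps

print(frame_time_ms(55))  # ~18.2 ms per frame on average
print(frame_time_ms(30))  # ~33.3 ms -- a dip to 30 fps reads as a stutter

# A hypothetical capture that still averages roughly 55 fps:
frame_times = [13, 14, 15, 35, 14]  # ms per frame
avg_fps = 1000 * len(frame_times) / sum(frame_times)
print(round(avg_fps, 1))  # ~55 fps average, yet the 35 ms frame is a visible hitch
```

So two runs with the same average can feel very different; it's the longest frames, not the mean, that determine perceived smoothness.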