Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jun 12, 2019.
Value purchase then?
Absolutely! They are still 10-15% faster than a 2080 Ti and cost the same as a 2080 Ti does now, which is also the reason I couldn't in any way justify buying a 2080 Ti. But of course it sucks that more and more games don't support SLI. Luckily there is still a dedicated bunch working on unofficial SLI support, which keeps SLI working in most games.
I would totally buy it if they made a new GPU that had a huge improvement in traditional rasterization performance and no RT cores.
Will this have X64 tessellation hair again?
Because 2x 2080 are about 40% faster in 4K than a 2080 Ti, for practically the same price.
True, but as a current and many-year SLI user, I know that SLI is anything but a smooth ride at this point.
As I wrote further up, many games don't support SLI now, so you will have to spend quite some time messing with custom SLI profiles in order to make it work. And when you do make it work, it usually has quirks, such as particles and weather effects not working with SLI enabled in BF5.
I had sworn to stick with SLI, but Nvidia has worn me down by completely dropping official SLI support (undoubtedly their intention, in order to force SLI users onto overpriced single GPUs such as the 2080 Ti)... so I am changing to the fastest single Ampere GPU when it launches at some point.
There's nothing wrong with that, I just felt like the 2080 Ti was a bit of a ripoff at their pricing and as a new dad I'm trying to be responsible and live on a budget.
As far as SLI support goes, it seems like the community has done a really good job of keeping up the manual profiles, and given the popularity of CP2077, I'm hoping people will do a good job figuring out the compatibility bits. I got my first 2080 for about $700 and I see them on deal as low as $650 right now; maybe by the time the game comes out I can grab a 2nd for like $550. That's not bad compared to the current 2080 Ti pricing of like $1200 to $1300.
If nobody can figure it out, I guess I will sell my 2080 and buy a 2080 Ti....
Can't disagree about that... the price increase from the 1080 Ti to the 2080 Ti was bigger than the performance increase!
Yeah, there is still a fairly large SLI community who depend on the custom SLI profiles... but sadly they only work with DX11... so if they go DX12-only with Cyberpunk (which they might, to appease Nvidia with their DXR crap), then SLI is off the table.
I hope that Ampere will be out by the time Cyberpunk is released... and gives a massive performance increase over Turing! (Not counting DXR... they can cut that crap off the chip for all I care.)
Didn't realize that, bummer!! Well I guess I will just cross my fingers.
I think the market for "GTX" is huge.
However, would you still get the GTX 3080 over an RTX 3080 if the RTX version were only, say, $200 more? I'm going to assume that they'd also increase the tensor cores enough to make RTX work at 1440p 60fps+.
For me personally, in this scenario I would have to go with the RTX 3080, as it looks like devs like RTX and I foresee many more games supporting it. RTX isn't going to be a fad; it's going to become part of the "standard".
It's sad that people actually want to have a company remove technology and stunt the growth of visual improvements.
Hopefully the next generation has no GPUs in it that do not have RTX, so this idea that we should stop trying to improve is gone.
If you want a GPU without RTX features, go buy a Navi or Vega 2. Get the same rasterization performance for the same dollar amount, and fewer features.
Anyone who plays online competitive games (aka the vast majority of users) only wants/needs the best possible performance they can get... raytracing drastically reduces performance, so anyone who wishes to remain competitive ought to have raytracing turned off... meaning that the RT cores become completely useless and do nothing but add to the chip size and complexity, thus increasing the price of the chip significantly. Not to mention they make the chip draw more power and run hotter.
So the vast majority of gamers would be better off with a smaller, simpler, cheaper GPU without RT cores etc. that excels in traditional rasterization performance.
I would get the GTX version in any scenario, as I would never use raytracing in the first place... huge waste of performance, and better performance would be the reason I wanted to upgrade.
That was a Witcher-only thing.
There is only one issue: the 2070 vs. the 1660 Ti.
The 2070 has 64% more transistors and is in general some 37% faster. You can compare the SMs of both GPUs to find out where they differ and what kind of transistor investment went into those differences.
Or you can just take this: Turing that is enabled to do RT has some 16% worse performance per transistor than Turing that is not enabled to do RT.
Now, imagine that there were no GTX 1660 (Ti) or 1650 (Ti), but instead RTX variants with the same transistor count and the same price as a result... all 16% slower.
(Too weak at RT and slower at older techniques than needed.)
Apparently, if nVidia manages to double RT capability without drastically increasing the transistor count of the SM, then even a GPU that can do like 75~80 fps on average before enabling RT could be usable for RT. (Even though "average" is the horrid keyword here, as fps dips would be into the 40s...)
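The perf-per-transistor arithmetic above can be checked with a quick sketch. It only uses the two figures quoted in the post (64% more transistors, 37% faster) as assumed inputs; these are the poster's numbers, not independent measurements:

```python
# Sanity check of the "16% worse performance per transistor" claim,
# using the figures quoted in the post (assumptions, not measurements):
#   RTX 2070 has ~64% more transistors than the GTX 1660 Ti
#   RTX 2070 is ~37% faster on average

transistor_ratio = 1.64   # 2070 transistors relative to 1660 Ti
performance_ratio = 1.37  # 2070 performance relative to 1660 Ti

# Performance delivered per transistor, relative to the non-RT chip
perf_per_transistor = performance_ratio / transistor_ratio

deficit = 1 - perf_per_transistor
print(f"perf/transistor ratio: {perf_per_transistor:.3f} "
      f"(~{deficit:.0%} worse than non-RT Turing)")
```

With these inputs the ratio comes out to about 0.835, i.e. roughly 16% worse performance per transistor, matching the figure in the post.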
Dude, I so know what you're talking about. I suffered through just this kind of motion back then, and I instantly got reminded reading those lines.
One thing I can say for sure: since going with only a single GPU, I have not had issues with drivers in the last three to four years. Not a single driver really messed things up; if one was doing weird things, I simply switched back a gen and waited for the next. Resolving an issue is a matter of 5 minutes this way.
I doubt it will do this if you're not actually running RTX features, so no, this scenario doesn't happen.
If that were true, then a chip would consume 0 watts at idle... which it doesn't, so it isn't.
Actually, what you're assuming is that the tensor cores will still contribute to more power usage and heat when not being used. I don't think so. At least not in the way you made it out to be. In absolute terms, yes, there will be some insignificant power-draw.
In practical terms, we expect a card running/using RTX to use more power and produce more heat vs RTX off.
The 16xx series will be a one-off.
Um, no. The framebuffer is still projecting an image to the screen; you demonstrate a classic case of not knowing how any of it works.
All modern GPUs employ tech to selectively turn off parts of the core, but you cannot get to absolute zero power usage, ever.
Have a read.