How do people still not realise that you don't buy a Titan for gaming (unless you really want to)? So much discussion about something that isn't even worth it...
Some of you guys seem to be blind to reality when it comes to something you obviously can't afford, because if this card were around $1k, I know many of you would be buying it. And I know it seems to be in fashion to badmouth corporations, capitalism, and of course the wealthy... but the fact remains that the price of video cards has not increased at all since the year 2000 when you adjust it according to the US Bureau of Labor Statistics Consumer Price Index. In fact, the most expensive (non-Titan) graphics card since 2000 was the 8800 Ultra, which in today's dollars would be around $1,000 USD.
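The CPI adjustment being described is just a ratio of index values. Here's a minimal sketch; the CPI figures below are rough approximations (not official BLS data), and $829 is the commonly cited 8800 Ultra launch price, so treat the numbers as illustrative:

```python
# Rough sketch of the CPI adjustment described above.
# CPI values are approximate assumptions; look up real figures at bls.gov.
def adjust_for_inflation(price, cpi_then, cpi_now):
    """Convert a historical price into today's dollars."""
    return price * cpi_now / cpi_then

# 8800 Ultra: $829 at launch in 2007 (commonly cited MSRP);
# CPI ~207 in 2007 vs ~247 in late 2017 (approximate annual averages).
print(round(adjust_for_inflation(829, 207, 247)))  # -> 989, i.e. roughly $1,000
```

Which lines up with the "around $1,000 in today's dollars" figure above.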
People forget, or rather purposely ignore, this information way too often. How many times have I heard "Well, you can't take inflation into consideration, because $1 is still $1"...

To be fair, the die size on this thing is drastically larger than anything we've had the opportunity to buy. And it's not for gaming, though it can be used for gaming, and it's a drastically better price than the other products this card is intended to compete with:

Titan V - 815mm2
Titan Xp - 471mm2
Titan X (Maxwell) - 601mm2

The only thing that sorta matches it would be the GTX Titan Z, which isn't the same die size, but it's 561mm2 x2 as a dual-GPU card. And that card was $2999, the same as this one.

As to Nvidia using a midrange GPU in the x80 series cards, honestly that's down to individual perception, because there's really no way to prove it. For instance, I really can't imagine a GTX 1070 as a GTX 1060, or a GTX 1060 as a GTX 1050, and so on, which is how it would be if everything were shifted down the list. What can be said is that their die sizes have stayed mostly the same, which from a cost perspective is exactly how a company is going to structure its product line. Yes, it fluctuates a lot, but that's normal considering fabrication shrinks and refreshes. And I'll admit, the last couple of generations before the 680 were pretty big.
GTX 680 and after:
GTX 1080 - 314mm2 @ 16nm
GTX 980 - 398mm2 @ 28nm
GTX 780 - 561mm2 @ 28nm
GTX 680 - 294mm2 @ 28nm
Average die size: 391.75mm2

Before the GTX 680:
GTX 580 - 520mm2 @ 40nm
GTX 480 - 529mm2 @ 40nm
GTX 285 - 470mm2 @ 55nm
GTX 280 - 576mm2 @ 65nm
9800 GTX+ - 260mm2 @ 55nm
9800 GTX - 324mm2 @ 65nm
8800 GTX - 484mm2 @ 90nm
7800 GTX - 333mm2 @ 90nm
6800 Ultra - 287mm2 @ 110nm
Average die size: 420.33mm2

Compared to the Titans (non-dual):
TITAN V - 815mm2 @ 12nm
TITAN Xp - 471mm2 @ 16nm
TITAN X (Pascal) - 471mm2 @ 16nm
TITAN X (Maxwell) - 601mm2 @ 28nm
TITAN Black - 561mm2 @ 28nm
TITAN - 561mm2 @ 28nm
Average die size: 580mm2

I mean, if there's another way to judge what "should" be an x80 series card, I'm all ears, but I can only come up with the manufacturing costs as a good reason for cards to be more expensive.
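For anyone who wants to check the arithmetic, the averages above are straight means of the listed die sizes (values copied from the lists, in mm2):

```python
# Recomputing the die-size averages quoted above (values in mm2, from the post).
post_680 = [314, 398, 561, 294]                        # GTX 1080/980/780/680
pre_680  = [520, 529, 470, 576, 260, 324, 484, 333, 287]
titans   = [815, 471, 471, 601, 561, 561]              # non-dual Titans

avg = lambda xs: sum(xs) / len(xs)
print(avg(post_680))            # 391.75
print(round(avg(pre_680), 2))   # 420.33
print(avg(titans))              # 580.0
```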
Thanks to Brodda Thep for providing some crypto benchmarks on his Titan V ...

Code:
Algorithm     Titan V    1080 Ti    mult
LBRY          687        460        1.49
SKUNK         57.8       47.5       1.22
BitCore       36.3       23.1       1.57
poly          47.1       31.4       1.50
Groestl       74.9       58         1.29
xevan         crash
keccak        1.63       1.18       1.38
Equihash      792        685        1.16
x17           25.85      18.3       1.41
x11evo        29.6       17.2       1.72
veltor        78.6       54.4       1.44
phi           44.4       29.4       1.51
skein         1260       842        1.50
sib           29.3       20.8       1.41
lyra2z        6.12       2.77       2.21
myr-gr        144.7      112.9      1.28
lyra2RE2      48.1       66.8       0.72
hsr           28.4       19.5       1.46
timetravel    54.3       39.9       1.36
c11           38.4       27.7       1.39
tribus        130.3      88.4       1.47
blakecoin     9.54       7.6        1.26
blake2s       8.6        6.26       1.37
ethash        67.3       30.5       2.21

Average mult: 1.53
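The "mult" column is just the Titan V rate divided by the 1080 Ti rate. A quick spot-check on a few rows from the table (rates copied as-is; "xevan" is skipped since it crashed):

```python
# Sanity-checking the "mult" column: Titan V rate / 1080 Ti rate.
# Values copied from a few rows of the benchmark table above.
rates = {  # algo: (titan_v, gtx_1080_ti)
    "LBRY":     (687, 460),
    "lyra2z":   (6.12, 2.77),
    "lyra2RE2": (48.1, 66.8),   # the one regression in the table
    "ethash":   (67.3, 30.5),
}
for algo, (tv, ti) in rates.items():
    print(algo, round(tv / ti, 2))
# LBRY 1.49, lyra2z 2.21, lyra2RE2 0.72, ethash 2.21
```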
TimeSpy, overclocked ... 40% quicker: Titan V @ 1.8GHz (air) vs 1080 Ti @ 2.1GHz. https://www.3dmark.com/compare/spy/2920134/spy/2920319
You can't really sort by branding because the product stacks have changed over the years. IMO, using the code name for the GPU is probably the better way, since the big chip is always chip 0. So it would be more like this:

G80 - 90nm - 484mm2
GT200 - 65nm - 470mm2
GF100 - 40nm - 529mm2
GK110 - 28nm - 561mm2
GM200 - 28nm - 601mm2
GP100 - 16nm - 610mm2
GV100 - 12nm - 815mm2 (12nm has similar density to 16nm, not really a node shrink)

Chip 2 is the next step down in size, chip 4 smaller, etc. As node shrinks have become less and less frequent, die size has increased considerably, and after 28nm the price per mm2 has increased rather than decreased like in the past.

As you said, GV100 is not an ordinary chip; it was funded in part by the US government for use in a couple of supercomputers, and it's really, really huge. It's the type of chip you would normally wait for a node shrink for, but here it is on 12nm (which is really just 16nm with some tweaks), so it's not very comparable to older chips. The price is justified by its compute performance, the fact that it's the fastest GPU, and the manufacturing cost. The yields are probably terrible: from a standard 300mm wafer you're only going to get a maximum of ~86 chips, which is very, very low. For comparison, you would get 150 chips from the same wafer if fabbing GP102s, and the number of defects per chip would be lower as well. GV100 could be the most expensive mass-produced chip ever created; it's using one of the newest processes available and it's at the max reticle size, so you practically can't make a bigger or more expensive chip. I don't really see what there is to complain about price-wise, considering what it is. The accelerator versions of these are $15k+; the Titan V is practically a bargain by comparison.
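The ~86 and 150 chip figures work out as a simple area division: wafer area over die area, ignoring edge loss, scribe lines, and defects (all of which would push real yields lower). A minimal sketch of that estimate:

```python
import math

# Upper-bound dies per wafer: wafer area / die area.
# Ignores edge loss, scribe lines, and defects, so real yields are lower still.
def max_dies(wafer_diameter_mm, die_area_mm2):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

print(max_dies(300, 815))  # GV100 on a 300mm wafer: 86
print(max_dies(300, 471))  # GP102 on the same wafer: 150
```

Since rectangular dies can't tile a round wafer, the usable count is lower than this bound, which only strengthens the yield point above.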
Peanuts, really. A total of $258 million in funding over a three-year period to HPE, Cray, AMD, Intel, IBM, and Nvidia, put toward developing exascale computers. Over the same three-year period, Nvidia alone will invest something like $10 billion USD in R&D. "To make one chip work, per 12 inch wafer, I would characterize it as unlikely." 53m 35s mark:
I was more alluding to the purpose of such a chip than the total R&D contribution. I doubt Nvidia would have built the chip if they couldn't sell tens of thousands of units to the US government at $20k+ a piece (just a shot in the dark, really; it's probably much higher than that. I've seen claims it could be as high as $100k a piece, since, you know, "government efficiency"). It's basically designed to slot into a custom IBM POWER9 system with NVLink for the Sierra and Summit supercomputer projects; that is its purpose, for use in large computing clusters.

LOL, that's crazy. No wonder.
Any link or source for those benchmarks? I know that hashrates will improve with optimized miners, but nothing there looks like it's even close to being worth buying with mining in mind.
Thankfully, yes. It means people who want to buy gaming cards later in the release cycle will probably still be able to get some.
NVIDIA's TITAN V Volta GPU Ethereum Mining Beast Rips 77MH/s While Overclocked https://hothardware.com/news/nvidias-titan-v-volta-gpu-ethereum-mining-beast-77mhs-overclocked
Titan V seems to do quite well in Luxmark 3.0 benchmark https://www.hardwareluxx.de/index.p...volta-architektur-im-gaming-test.html?start=6
Thanks for the link; looks like the first official Titan V review. Edit: er, what?! In actual games it SUCKS really badly. Literally a custom-cooled 1080 Ti 99% of the time, except in the new Wolfenstein 2, where there's a small boost. Fail.
Drivers probably play a role, but I think the majority of the issue is clock speed and core count. It was expected that this would happen to Nvidia, similar to AMD's architectures as their core counts increased. It's essentially the reason why AMD started pushing for low-level APIs and features like async compute: to keep all the extra shader hardware in their cores utilized. Gnex sums it up well here: It also explains why several Titan V owners are experiencing lower-than-expected utilization rates in relatively demanding games. The clock rate is too low in some games; the GTX variants will remedy this.
I just happened to notice that the Titan V benchmarks in the Hardwareluxx review above used an Intel Core i7-3960X, which probably accounts for much of the low gaming benchmark results vs other cards.
NVIDIA Titan V Ethereum Mining Blows Past 82MH/s While Overclocked On Our Test Bench https://hothardware.com/news/nvidia-titan-v-volta-ethereum