Well, it's changed over time. Teslas are 5x the cost but not 5x the performance. That's why Nvidia started removing features from the Titan line and adding features like NVLink to the Tesla line - to differentiate the two. Up until Pascal, the Titans were basically identical to the Teslas in terms of hardware performance for deep learning, except they cost way less. Back then the Titans were more of a gaming/workstation blend (full-rate FP64). With Maxwell that started shifting towards a gaming/deep-learning blend, but they were still a fifth of the price for nearly identical performance - just fewer features.

Now, with the giant boom in DL, Nvidia changed it up: the Titan X (Pascal) makes a good inference card, since its GP102 chip has accelerated INT8 (the DP4A instruction), but it lacks the packed math (2x FP16) that GP100 has for training. Regardless, it's still a tenth of the price of a P100 SXM2 for half the training FLOPS, so roughly 5x the FLOPS per dollar (I'm sure memory bandwidth and NVLink blur that "half" figure), but the point remains that it's significantly more cost effective for small/mid-sized companies.
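To make that INT8-vs-packed-FP16 distinction concrete, here's a minimal CUDA sketch of the two fast paths I mean. The kernel names and launch setup are just for illustration, not anything from Nvidia's samples; the intrinsics themselves are real (`__dp4a` needs compute capability 6.1, so compile with `nvcc -arch=sm_61`).

```cuda
#include <cuda_fp16.h>

// INT8 path (GP102/GP104, cc 6.1 - e.g. Titan X Pascal): each __dp4a does
// four 8-bit multiply-adds into a 32-bit accumulator, which is why these
// chips make decent inference cards.
__global__ void dot_int8(const int* a, const int* b, int* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __dp4a(a[i], b[i], 0);  // a[i] and b[i] each pack four int8 values
}

// Packed-FP16 path (GP100, cc 6.0): __hfma2 does two half-precision FMAs per
// instruction - the "packed math" GP100 has for training. GP102 still executes
// this, but only at a small fraction of its FP32 rate.
__global__ void fma_half2(const __half2* a, const __half2* b, __half2* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = __hfma2(a[i], b[i], c[i]);  // two FP16 multiply-adds per call
}
```

Same instruction count per thread in both kernels, but the first retires four INT8 ops and the second two FP16 ops, and which one runs at full rate depends on whether you're on GP102 or GP100.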