EETimes: Nvidia Going All Robot, All the Time 9/14/2018 https://www.eetimes.com/document.asp?doc_id=1333734&_mc=RSS_EET_EDT
AMD’s Radeon MI60 AI Resnet Benchmark Had NVIDIA Tesla V100 GPU Operating Without Tensor Enabled WCCFTech contacted NVIDIA to get their latest ResNet-50 performance using Tensor Cores, and NVIDIA sent the latest results for Turing (T4) and Volta (V100). Turns out the V100 was operating at 1/3 of its performance in AMD's "staged" benchmark ... AMD is one deceitful company. https://wccftech.com/amd-radeon-mi60-resnet-benchmarks-v100-tensor-not-used/
AMD is no less deceitful than Nvidia when it suits their purposes. That's the level of trickery all companies show in their marketing. The only exception is Intel, which could be led by the master of downstairs himself, as it's 10 times more deceitful than AMD and Nvidia combined.
I await the outrage from all the same people who attack Nvidia and Intel at the drop of a hat. I suspect I'll be waiting for a long time.
I just thought AMD left this kind of false advertising behind with Polaris, or at least reserved it for the "gaming crowd". It's astonishing they'd try this on the professional crowd, who usually require performance validation tests before purchasing. I'm curious to see the price of the card.
It's not really that cut and dried. The article says tensor mode is mixed precision, which is a different test than FP32, and FP32 is what AMD claims is typical for the workload (Face ID). The comparison was definitely chosen to show AMD in the best light, but the argument can be made that FP32 precision makes a difference in this particular test. It's not really "deceitful" so much as playing up their strengths and downplaying their weaknesses, which every company does. Nothing to get worked up about; anyone who would care about that particular benchmark would probably know what they are looking for (high precision vs. performance) anyway.
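To see why FP32 vs. mixed precision isn't an apples-to-apples comparison, here's a minimal pure-Python sketch of the accuracy side of the trade-off. It emulates FP16 rounding via `struct`'s half-precision format (an illustration only, not how tensor cores actually operate): a long chain of small FP16 additions can silently stall while an FP32 accumulator does not.

```python
import struct

def to_fp16(x: float) -> float:
    """Round a Python float to the nearest IEEE 754 half-precision value."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Accumulate a small term 10,000 times onto 1.0, in FP16 and in FP32.
term = 0.0001
acc_fp16 = to_fp16(1.0)
acc_fp32 = 1.0
for _ in range(10_000):
    # FP16 spacing near 1.0 is 2**-10 ~= 0.000977, so 1.0 + 0.0001
    # rounds straight back to 1.0: the FP16 sum never moves.
    acc_fp16 = to_fp16(acc_fp16 + term)
    acc_fp32 += term

print(acc_fp16)  # 1.0  (all 10,000 additions lost to rounding)
print(acc_fp32)  # ~2.0 (the FP32 accumulator keeps them)
```

This is exactly why mixed-precision schemes typically multiply in FP16 but accumulate in FP32, and why a raw FP32-vs-mixed throughput number deserves an accuracy caveat next to it.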
Oh do me a favor, if Intel or Nvidia had done something like this, the usual suspects would have been all over it, screaming and yelling about how 'evil' Intel is or how greedy Nvidia is. This is the problem with the AMD camp, they are only interested in calling out other companies and never say a word when AMD does something questionable. They have this kooky belief that AMD is some kind of company 'for the people' and AMD is more than happy to exploit that. It's a shame that we can't all just enjoy technology, no matter who happens to manufacture it.
We of the AMD camp also critique AMD when they bone up. Go read the Radeon Forum if you don't believe. No Step On Snek
This particular one seems to have escaped notice in the AMD section while being defended here...just sayin...
Intel benched Epyc's Linpack performance the way AMD recommends it, the way Linpack should be run (HT/SMT disabled), and all hell broke loose. The accuracy argument is a poor excuse. It's like saying AMD's Rapid Packed Math (FP16) reduces image quality instead of properly computing what's required. Potentially... it sure does. Then again, if you do stupid things, FP64 won't help you.
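For context, Rapid Packed Math gets its speedup by packing two FP16 operands into a single 32-bit register lane so each instruction does double the work. A rough pure-Python illustration of just the packing idea (the byte layout here is simply `struct`'s little-endian convention, not a claim about AMD's actual register format):

```python
import struct

def pack2xfp16(a: float, b: float) -> int:
    """Pack two half-precision floats into a single 32-bit word."""
    return struct.unpack('<I', struct.pack('<2e', a, b))[0]

def unpack2xfp16(word: int) -> tuple:
    """Recover the two half-precision floats from a 32-bit word."""
    return struct.unpack('<2e', struct.pack('<I', word))

# Both values are exactly representable in FP16, so the round trip is lossless.
word = pack2xfp16(1.5, -2.0)
lo, hi = unpack2xfp16(word)
print(hex(word), lo, hi)
```

The trade-off the post alludes to is visible here: each lane only has FP16's ~11 bits of mantissa, so the doubled throughput comes at the cost of precision, fine for many shading workloads, not for everything.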
Thing is, the MI60 can do mixed precision as well. I don't think either Nvidia or AMD would want to compare one card running in FP32 mode to a card running mixed precision; it would not be a fair test. The wccftech article is basically clickbait and doesn't really conclude anything. Asking AMD for some mixed-precision workload data would have been the way to go, since that information might actually be useful for comparison. As for staying on topic, I'd imagine Nvidia is accelerating their own chip design using AI. Only a matter of time before they completely automate that process. When Nvidia elects an AI to the board of directors, that's when I will worry.
"AI is one of the most important inventions in the history of humanity. Its potential to bring joy, productivity is surely unquestionable. We at Nvidia believe that the best way to keep the tech in good hands is to democratise it. That's why Nvidia's GPU technology, and CUDA, are open. It's in every single cloud, it's in every single computer and we make it available to anybody who wants to use it. The benefit is that there are more good-hearted people than there are less good-hearted people, and if you simply give them access, they will keep it out of harm's way. The collective good nature of humanity will keep it out of harm's way." A well-designed AI intellect could not have argued it better. The collective will protect us.
Don't worry, while I believe just about anything is possible, I also believe there are things that are not possible...the moon landings were not faked, and Tillamook Mint Chocolate Chip ice cream is GD delicious.
November 14, 2018 https://www.pcworld.com/article/3321144/components-graphics/battlefield-v-dxr-rtx-ray-tracing.html
Justice – Enhanced with NVIDIA RTX Ray Tracing and DLSS

Justice is simultaneously utilizing both ray tracing and DLSS to deliver a superior experience for players. In addition, Justice is the first game in the world to feature real-time ray-traced caustic effects. As demonstrated by the interactive comparison below, caustics are light rays re-focused or scattered after hitting reflective or refractive surfaces, which in turn create a new light source that can illuminate the surroundings and cast shadows. This ensures that caustic effects react appropriately to objects, changing scenery and lighting conditions, and even wakes of boats.

And finally, to maximize performance, Deep Learning Super Sampling (DLSS) is employed at 4K and 25x14. It boosts performance by a significant degree, can improve image quality, and its anti-aliasing has better temporal stability and image clarity compared to commonly-used Temporal Anti-Aliasing (TAA) techniques. For further detail, check out our comprehensive write-up.

In Justice, DLSS accelerates performance by up to 40%, giving GeForce RTX GPUs up to 90% faster performance compared to previous-generation GeForce GPUs. And in terms of image quality, it can make detail clearer and sharper. https://www.nvidia.com/en-us/geforce/news/justice-online-geforce-rtx-ray-tracing-dlss/
Not familiar with Face ID's requirements, but I found this paper interesting: NVIDIA's proprietary automotive networks train with mixed precision matching FP32 baseline accuracy. Doc creation date: 4/3/2018. EDIT: Ah crap, @Noisiv, you beat me to it. My bad. I'm leaning towards your side. With NVIDIA having published results for years, this was bad optics for AMD. Their response is damage control.
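The reason mixed precision can match an FP32 baseline at all is usually loss scaling: small gradients underflow to zero in FP16, so they get scaled up before the backward pass and scaled back down for the FP32 weight update. A minimal pure-Python sketch of the underflow problem and the fix, again emulating FP16 via `struct` (the scale factor 1024 is an arbitrary illustrative choice):

```python
import struct

def to_fp16(x: float) -> float:
    """Round a Python float to the nearest IEEE 754 half-precision value."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

grad = 1e-8                       # a tiny gradient, common late in training
naive = to_fp16(grad)             # below FP16's smallest subnormal (~6e-8)
print(naive)                      # 0.0 -> the gradient vanishes entirely

scale = 1024.0                    # loss scaling: shift values into FP16 range
scaled = to_fp16(grad * scale)    # now representable (as an FP16 subnormal)
recovered = scaled / scale        # unscale in FP32 before the weight update
print(recovered)                  # ~1e-8, within ~1% of the true gradient
```

Without the scaling step, the update is lost outright; with it, the gradient survives the FP16 round trip with only a small rounding error, which is the mechanism behind "mixed precision matching FP32 baseline accuracy."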