AMD Polaris 11 in CompuBench shows 1024 shader processors

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Apr 11, 2016.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,544
    Likes Received:
    18,856
    GPU:
    AMD | NVIDIA
  2. evilkiller650

    evilkiller650 Ancient Guru

    Messages:
    4,802
    Likes Received:
    0
    GPU:
    Zotac 980ti AMP! Extreme
    Interesting!
    So close to the unveiling of the next-generation cards! :)
     
  3. Kaarme

    Kaarme Ancient Guru

    Messages:
    3,518
    Likes Received:
    2,361
    GPU:
    Nvidia 4070 FE
    Aren't those clock speeds too low? Nvidia has long been able to beat or compete with AMD's GPUs despite using fewer transistors (that is, cheaper chips) by having higher clocks. It seems strange to me that AMD wouldn't use this opportunity to get some performance for free by doing the same. Unless I remember completely wrong, I seem to recall reading that shifting to 14/16 nm would allow increasing the clock speed.
     
  4. Ryu5uzaku

    Ryu5uzaku Ancient Guru

    Messages:
    7,552
    Likes Received:
    609
    GPU:
    6800 XT
    Well they might be going for lower clocks. Who knows how high these might overclock if true tho.
     

  5. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,759
    Likes Received:
    9,652
    GPU:
    4090@H2O
    I wonder if 16GB for a single GPU is true (I know they can do it, but do they really need it?)
     
  6. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    It has been benchmarked with some especially old driver... It states CL_Driver_Version: 1956.4 (well, at least it matches the time of the test).
    We have 2004.6.

    Then it has 16 compute units and only one tenth of Fiji's performance (64 CUs).
    It basically has the parameters of an HD 7850, yet it is still 3 times slower? Why would someone test it with an old driver? Likely it is a modded driver/vBIOS on an HD 7850, downclocked.

    If I did the same thing with mine, I could call it a Vega engineering sample. I could set the default vBIOS max clock to 600MHz but OC it to the standard 1050 or above, and the world would believe that a 600MHz Vega performs as well as a 1050MHz Fiji.

    The internet will flip over it at least twice.
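    For reference, the CU-to-shader mapping behind that comparison, as a minimal sketch: 64 stream processors per CU is the standard GCN layout, and the card names are the ones cited in the post.

    ```python
    # GCN groups 64 stream processors (shaders) per compute unit (CU),
    # so a CU count in a CompuBench entry maps directly to a shader count.
    SP_PER_CU = 64  # standard GCN figure

    for name, cus in [("CompuBench entry / HD 7850", 16), ("Fiji (Fury X)", 64)]:
        print(f"{name}: {cus} CUs -> {cus * SP_PER_CU} stream processors")
    # CompuBench entry / HD 7850: 16 CUs -> 1024 stream processors
    # Fiji (Fury X): 64 CUs -> 4096 stream processors
    ```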
     
    Last edited: Apr 11, 2016
  7. Undying

    Undying Ancient Guru

    Messages:
    25,480
    Likes Received:
    12,885
    GPU:
    XFX RX6800XT 16GB
    I thought 8GB of HBM2 would be enough. It's up to 16GB, so maybe we'll see 8GB and 16GB versions.
     
    Last edited: Apr 11, 2016
  8. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,759
    Likes Received:
    9,652
    GPU:
    4090@H2O
    Yes I was wondering the same. I know they can, but tbh it feels like a bit of overkill. Wouldn't expect that from AMD, but maybe they have too many of those HBM2 modules? :D
     
  9. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    I do not think Polaris 10/11 will have HBM. No reason for higher complexity if you can make do with a 256-bit bus.

    But I hope AMD really brings the low-end parts first this time around. They sell more, and it will let AMD learn 14/16nm better. Plus each passing month means TSMC/GloFo improve those manufacturing processes a bit.

    On the other hand, releasing the top-dog cards first means developers will not know the performance of the low end, and many more games will run badly.
     
  10. xIcarus

    xIcarus Guest

    Messages:
    990
    Likes Received:
    142
    GPU:
    RTX 4080 Gamerock
    If CFX/SLI with stackable VRAM under DX12/Vulkan actually scales well, I honestly don't see them doing this, seeing as 4GB seems just about enough at the moment and 8GB sounds future-proof.

    Although 8-16GB of VRAM as standard would encourage devs to use amazing textures. That's pretty much the main reason I've been so annoyed with what was going on in game graphics until recently. Think back to 2012: I couldn't see a single bloody reason to have all those shiny and sh!tty shaders instead of better texture quality. Don't even get me started on the games with a ridiculous amount of post-processing effects aimed at hiding the crappy texture quality. It would probably be easier to just make better textures.
    The best examples would be the high-res texture mods for Skyrim or Mass Effect. Those things look absolutely beautiful.

    Sometimes I think the GPU industry is run by morons who don't actually know what they're doing.
    My opinion as a hobbyist in 3D modeling and rendering: texture quality and global illumination should be the main focus of improving graphics. We are finally nailing texture quality. Global illumination and other ray tracing effects must follow.
     
    Last edited: Apr 11, 2016

  11. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,759
    Likes Received:
    9,652
    GPU:
    4090@H2O
    I have given up hope on anything CFX / SLI related improving...

    Yet again, I don't see why 2016's top release from AMD would need 16GB of HBM2 VRAM. It simply wouldn't benefit from it, I think, not even at 4K. Also, since HBM2 supply won't be that abundant (at least I think it won't be), it wouldn't make much sense to put more than enough on a single card. It also makes the card more costly, etc.

    Yeah, Skyrim was the first time I seriously saw what mod textures can improve over stock. I'm not even sure every game is shipped with textures that fit 4K resolutions today...
     
  12. chispy

    chispy Ancient Guru

    Messages:
    9,988
    Likes Received:
    2,715
    GPU:
    RTX 4090
    Same here; I promised myself never ever again to go SLI or CFX, as they are going the way of the dodo...


    I'm still waiting for a single VGA that can handle 4K at 60fps so I can upgrade my display and VGA at once, but sadly I don't see that happening anytime soon :/ . Nevertheless, let's see what these new cards bring to the table.
     
  13. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,020
    Likes Received:
    4,398
    GPU:
    Asrock 7700XT
    Same here, for both of those things. Even if multi-GPU ends up improving, I still probably won't go for it until there is decent Linux support too. I also don't intend to upgrade my current GPU or display until I can play something at 4k smoothly.
     
  14. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    I somehow can't believe that any of those specs are true. Either the shader processors are too few, or the clock speeds too low, or both. If they are true, AMD is in deep trouble, but they seem very calmly confident about the whole thing, which leads me to believe that all the rumored specs are probably wrong.
     
  15. xIcarus

    xIcarus Guest

    Messages:
    990
    Likes Received:
    142
    GPU:
    RTX 4080 Gamerock
    Totally agree. And to think I was so close to pulling the trigger on another 970. Boy, would I have been disappointed.

    That was exactly my first reaction when I saw the specs. 1024 stream processors and a 128-bit memory bus is very low. It sounds like an entry-level card, not midrange.
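    For context on why a 128-bit bus reads as entry level, a rough bandwidth sketch; the 7 Gbps effective GDDR5 data rate is an assumption for illustration, since the leak does not list memory clocks.

    ```python
    # GDDR5 bandwidth in GB/s = bus width in bits / 8 * effective data rate in Gbps.
    def bandwidth_gbs(bus_bits: int, data_rate_gbps: float) -> float:
        """Peak memory bandwidth in GB/s."""
        return bus_bits / 8 * data_rate_gbps

    print(bandwidth_gbs(128, 7.0))  # 128-bit @ 7 Gbps -> 112 GB/s (entry level)
    print(bandwidth_gbs(256, 7.0))  # 256-bit @ 7 Gbps -> 224 GB/s (typical midrange)
    ```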
     

  16. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    The clocks are probably wrong, which makes the TFLOPS output wrong.

    In order to get Hitman @ 60fps @ QHD they need at least 390X levels of performance, which is about 6 TFLOPS.

    We know that GP100 is clocked at ~1300MHz, so I would imagine AMD can hit around that as well. At 1300MHz, the Polaris 10 listed there would hit 6 TFLOPS. I think it's the clocks, because if it had many more shaders the chip would be too big for Vega to even reasonably exist. Plus it just seemed low to me when I first saw the leak.
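    To make the arithmetic behind that estimate explicit, a minimal sketch: GCN shaders do 2 FLOPs per clock (one FMA), and the 2304-shader count for Polaris 10 here is an assumption inferred from the 6 TFLOPS @ ~1300MHz figure above, not a confirmed spec.

    ```python
    # Peak FP32 throughput: 2 FLOPs (one FMA) per shader per clock.
    def peak_tflops(shaders: int, clock_mhz: float) -> float:
        """Peak single-precision throughput in TFLOPS."""
        return 2 * shaders * clock_mhz * 1e6 / 1e12

    print(peak_tflops(4096, 1050))  # Fiji (Fury X):                    ~8.6 TFLOPS
    print(peak_tflops(1024, 1000))  # Polaris 11 at the rumored ~1GHz:  ~2.0 TFLOPS
    print(peak_tflops(2304, 1300))  # assumed Polaris 10 at 1300MHz:    ~6.0 TFLOPS
    ```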
     
  17. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,020
    Likes Received:
    4,398
    GPU:
    Asrock 7700XT
    Are we living in the late 90s again? You should know very well that frequency has very little to do with overall performance, especially in GPUs. Think of it like this:
    * There are more GFLOPS compared to Fiji (suggesting there is an improvement)
    * According to other sources, Vega will have nearly double the transistor count of Fiji.
    * We know nothing about the pipelines.
    * There's a die shrink.
    * Better memory will be used.
    * Etc

    I am a little suspicious of the shader processors, but maybe AMD found a way to improve performance without needing as many.
     
    Last edited: Apr 11, 2016
  18. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    Of course frequency matters. You can have the most efficient machine ever, but if the RPM is low enough, everything else will surpass it. Unless GCN has changed radically, these "shaders" are comparable to Fiji's. Having 40% fewer of them and running them at a slower speed than Fiji will result in lower overall performance.
    And GFLOPS are not the only thing that matters. What about graphics operations: how well will those 64 ROPs work at 1GHz compared to 2GHz? The specs also show GDDR5. The major benefits of die shrinks come from a combination of cramming in more hardware and clocking it higher, and with these specs neither of the two happens.
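    The proportional argument, as a quick sketch; the "40% fewer shaders" is taken from the post above, and the ~1.0GHz vs 1.05GHz clocks are illustrative assumptions based on the rumored specs and Fiji's stock clock.

    ```python
    # If the shader design is comparable to Fiji, peak throughput scales
    # roughly with shader count x clock. Numbers here are illustrative.
    fiji_shaders, fiji_clock_ghz = 4096, 1.05  # Fury X stock
    rumored_shaders = int(fiji_shaders * 0.6)  # "40% fewer" per the post
    rumored_clock_ghz = 1.0                    # assumed from the rumors

    relative_peak = (rumored_shaders / fiji_shaders) * (rumored_clock_ghz / fiji_clock_ghz)
    print(f"relative peak throughput vs Fiji: {relative_peak:.2f}x")  # ~0.57x
    ```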
     
  19. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,020
    Likes Received:
    4,398
    GPU:
    Asrock 7700XT
    Yes, frequency matters, but not as much as you're making it seem. As far as I'm concerned, GCN has changed radically. Fiji was 8.9 billion transistors and Vega is supposed to be 18 billion; that is very significant. We don't know where those transistors are going, but maybe they have something to do with the shaders (with the result that you don't need as many of them, or don't need to run them at a higher speed). If the pipelines are long enough, who knows, maybe there's a form of hyper-threading involved that we just aren't being told about.
    In the case of a GPU, shouldn't there be a direct correlation between GFLOPS and graphics operations? Maybe I'm wrong, but think about it: you're concerned about something that, from AMD's perspective, makes no sense. Why would they intentionally make something worse? How likely is it to be a problem if everything else is increased or improved? Anyway, I never said or implied that any one of those hardware changes is enough to make up for a lower frequency. But collectively, they do matter.

    The point is, just because something seems suspiciously small, that doesn't mean it's worse. And sure, maybe there's a typo in these numbers, but personally, I don't find the specs all that unreasonable.
     
    Last edited: Apr 11, 2016
  20. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,759
    Likes Received:
    9,652
    GPU:
    4090@H2O
    I can understand such a feeling; it's probably even a bit more disappointing with the 970 than with my base 980, because of the performance I get with one card. Tbh, I'm not that disappointed. It works as well as it gets in the main game I play, BF4, and will do so once BF5 launches, and everything else I play also works fine for now.

    Yet I have to admit I was hoping for more: more SLI support in engines, more chances to see stacked/combined resources with DX12, more performance than under DX11 in general. I was a bit overambitious, I have to admit, as even now with DX11 EVERYTHING related to SLI (and CFX I guess) ends up in the devs' hands, and they aren't exactly doing a good job overall. And I don't even want to think about SLI optimizations, when many times I wish even basic optimization were more the rule than the exception.

    It feels like most engines out there simply don't support SLI (and CFX, I guess). That alone should tell a buyer not to invest money in it; I just wasn't aware of it. So effectively an SLI system (to some extent CFX too, I guess) loses performance at twice the rate: the two cards' performance decreases against upcoming games over the following years, and doubly so since SLI support seems to be on the losing end.

    After all I can just say I misjudged the whole situation before I made my purchase. I built my rig to run 144Hz with my current display in BF4 on ultra, and it does exactly that, GPUs not overclocked. With a little headroom I hope to carry that over to BF5, so in the end the rig works as intended. I am just disappointed that the chance to really use all of your system's resources at their most efficient, which is what I hoped for with DX12, does not seem to be coming around at all. At the moment we see features our systems have (like Freesync/G-Sync, for instance) being bugged under DX12, games being FPS-locked, no support for overlays... it seems that with the glorious DX12 we get less and not more.

    Except if you run AMD hardware, that is. And after 2 ATI rigs and now 2 Nvidia rigs (the current one being the 2nd), the cards are shuffled anew. Quite interesting, I'd say; my money's already waiting to be spent in 2017.
     
