
Sadly I knew it.

Discussion in 'Videocards - AMD Radeon' started by Krogtheclown, Jun 24, 2015.

  1. The Mac

    The Mac Ancient Guru

    Messages:
    4,409
    Likes Received:
    0
    GPU:
    Sapphire R9-290 Vapor-X
    It's just the reality of DX10/DX11.

    No matter what you do, there is only a single thread feeding the GPU.

    In order to get more efficiency, you have to do "tricks" to get that thread to submit faster, or feed it faster when it's idle.

    So-called "multi-threaded" rendering is just multiple threads feeding the single submission thread.

    In the case of Nvidia's increased efficiency, there are two ways to look at it:

    1. They found a way to push/feed the submission thread faster.

    or

    2. They found an inefficiency in their code that was slowing down submission and corrected it.

    Take your pick.

    A couple of years ago AMD did some research and found that tiled resources didn't have any benefit whatsoever on their hardware, so they didn't bother implementing it.

    This leads me to believe it's more likely choice number 2, and AMD's code is already highly efficient.
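    The pattern described above can be sketched as a toy producer/consumer model. This is not real D3D11 code; it's a hypothetical Python sketch of the idea that worker threads only *record* command lists, while a single submission thread is the only one that ever "talks" to the GPU.

```python
import queue
import threading

# Toy model of the DX11 pattern: several "deferred context" worker
# threads record command lists in parallel, but only one immediate-
# context thread ever submits to the (simulated) GPU.

NUM_WORKERS = 4
CALLS_PER_WORKER = 100

submission_queue = queue.Queue()
submitted = []  # filled only by the single submission thread

def record_commands(worker_id):
    # Each worker builds its own command list in parallel...
    command_list = [f"draw:{worker_id}:{i}" for i in range(CALLS_PER_WORKER)]
    # ...but can only hand it off; it never submits to the GPU itself.
    submission_queue.put(command_list)

def submission_thread():
    # The single thread that actually feeds the GPU. Its throughput
    # bounds the whole pipeline, which is why per-call driver
    # efficiency matters so much under DX11.
    for _ in range(NUM_WORKERS):
        command_list = submission_queue.get()
        submitted.extend(command_list)  # stands in for "ExecuteCommandList"

workers = [threading.Thread(target=record_commands, args=(w,))
           for w in range(NUM_WORKERS)]
submitter = threading.Thread(target=submission_thread)
for t in workers:
    t.start()
submitter.start()
for t in workers:
    t.join()
submitter.join()

print(len(submitted))  # all 400 commands funneled through one thread
```

    No matter how many recording threads you add, total throughput is still capped by that one consumer, which is the bottleneck both "tricks" in the post are trying to widen.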
     
    Last edited: Jul 5, 2015
  2. sammarbella

    sammarbella Ancient Guru

    Messages:
    3,931
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    Nvidia uses "dirty" tricks like Gameworks and "clean" tricks like the ridiculous advantage in DX11 performance its drivers have over AMD's (100% more efficient).

    Both kind of tricks have something in common:

    They work.
     
  3. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,053
    Likes Received:
    40
    GPU:
    RX 580 8GB
    Not quite.
     
  4. The Mac

    The Mac Ancient Guru

    Messages:
    4,409
    Likes Received:
    0
    GPU:
    Sapphire R9-290 Vapor-X
    Wow, I give a comprehensive analysis, and that's all I get?

    You guys suck.

    Whatever..

    There is a reason no one that matters takes this site seriously...

    And sammarbella, you are on ignore; unless someone quotes you, I don't see it...
     
    Last edited: Jul 5, 2015

  5. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,053
    Likes Received:
    40
    GPU:
    RX 580 8GB
    You gave your opinion on the matter and that's fine, but at the end of it all you said there's nothing wrong with AMD's code, when clearly there is. In some cases NVIDIA has 100% more draw calls/second, and you call that "at the limit of efficiency", so you nullify everything before it.
    On an i3 with a 750 Ti and a 280, the 750 Ti performs best of the two with the lower-powered CPU. That's not high efficiency, that's poor coding.
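    The "draw calls/second" numbers being argued about come from API-overhead tests, which measure how many calls the CPU-side driver path can process per second. A hypothetical Python sketch of what such a test does, with a stub function standing in for the real driver call:

```python
import time

# Toy sketch of an API-overhead test: count how many times per second
# the CPU can push a (stubbed) draw call through the driver path.
# The stub is a stand-in; a real test issues actual D3D draw calls.

def stub_draw_call(state):
    # Simulate per-call driver bookkeeping (validation, state tracking).
    state["calls"] += 1

def draw_calls_per_second(duration=0.2):
    state = {"calls": 0}
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        stub_draw_call(state)
    return state["calls"] / duration

rate = draw_calls_per_second()
# A driver whose per-call path is twice as cheap reports roughly twice
# this rate; that is what "100% more draw calls/second" means here.
print(f"{rate:,.0f} stub draw calls/second")
```

    Because the test is CPU-bound, a weaker CPU (the i3 above) makes per-call driver cost dominate, which is why the same GPU pairing can flip depending on driver overhead.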
     
    Last edited: Jul 5, 2015
  6. mR Yellow

    mR Yellow Ancient Guru

    Messages:
    1,935
    Likes Received:
    0
    GPU:
    Sapphire R9 Fury
    The extra draw-call gains you speak of for Nvidia: are they also there on the 780, or just on Maxwell?
     
    Last edited: Jul 5, 2015
  7. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,053
    Likes Received:
    40
    GPU:
    RX 580 8GB
    Both Kepler and Maxwell.
     
  8. mR Yellow

    mR Yellow Ancient Guru

    Messages:
    1,935
    Likes Received:
    0
    GPU:
    Sapphire R9 Fury
    I'm using the 1040 branch of drivers. They are great but can improve.
    I'm looking forward to W10 and the promise of what DX12 brings to the table.
    I will make my final judgement about AMD drivers then.
     
  9. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,053
    Likes Received:
    40
    GPU:
    RX 580 8GB
    I asked AMDJoe about DX11 overhead, as there hasn't been any explanation for the poor overhead from AMD.

    DX12 is entirely different, and we won't have to worry about overhead anymore.
     
  10. sammarbella

    sammarbella Ancient Guru

    Messages:
    3,931
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    I don't care which list you include me in or not, or whether you reply to me or not.
    I know that being on your list of ignored users is not enough to be spared your replies or your insults.

    Maybe one of the reasons this forum is losing its credit as a serious forum (your words, not mine) is that some users like you seem to be immune to mod action when they insult other users at will.

    You have a very long list of insults you fall back on when you run out of arguments to discuss like an educated human being, which sadly occurs frequently.

    I'm free to reply to you when you have arguments, or even when you insult.

    As always, and for newcomers who don't know your background and mine: don't expect an insult from me.

    Have a nice day (or evening? or night?)

    :)
     
    Last edited: Jul 5, 2015

  11. Bleib

    Bleib Master Guru

    Messages:
    371
    Likes Received:
    1
    GPU:
    MSI RX 480 8GB
    It's a tremendous leap in efficiency and performance on the AMD side.

    A bigger problem is the cooler; it's inexcusable not to make them quieter by default.

    I might get a Nano myself, but only if I can swap it to a better cooler, and if it doesn't cost an arm and a leg.
     
  12. sammarbella

    sammarbella Ancient Guru

    Messages:
    3,931
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    AIO closed WC solutions like the CoolerMaster unit used in the Fury X are not the best choice if you expect silence.

    The size of the fan (small) and the pump design (not the best around) don't allow it.

    I own a CoolerMaster Nepton 140XL, an AIO closed WC solution for the CPU:

    http://www.coolermaster.com/cooling/nepton/nepton-140xl/

    It worked fine, but the noise it produced when I needed to force the fan speed up to lower temps was HIGH despite its not-so-small fan (140 mm). Now it's back in its box, because I use a custom loop with CPU and GPU blocks.

    No wonder EK already developed a custom full WB for the Fury X for people using custom WC loops; there is a market for them:


    http://www.tomshardware.com/news/ekwb-fury-x-water-block,29449.html

    EK also launched a custom full WB for the 295X2.

    Did I say I love EK?

    I have two EK 290X Lightning custom full WBs and an EK CPU WB in my custom WC loop!

    :)
     
    Last edited: Jul 5, 2015
  13. flopper

    flopper Member

    Messages:
    28
    Likes Received:
    0
    GPU:
    xfx 6950 2gb

    That WC block and the Fury look awesome.
     
  14. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    6,972
    Likes Received:
    120
    GPU:
    Sapphire 7970 Quadrobake
    In the long term, it is the 4GB that worries me most about the Fury X. I believe they will find a way with the driver eventually (they have made great strides already, but it is apparent that their driver still can't "feed" GPUs fast enough), and that the card will end up being much faster than Maxwell, at around Pascal time.
     
  15. vase

    vase Ancient Guru

    Messages:
    1,653
    Likes Received:
    1
    GPU:
    -
    In the long term (which means DX12 implementation and WDDM 2.0) you buy 2 x Fury for, let's say, ~1200, and you have 8 GB which are usable and speeds that are unbeatable...
    And that's not even long term... that's mid term... it will start this year.
     

  16. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,053
    Likes Received:
    40
    GPU:
    RX 580 8GB
    Games have to support VRAM stacking.
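    "VRAM stacking" here means DX12 explicit multi-adapter, where the game addresses each GPU's memory separately instead of the driver mirroring everything. A toy accounting sketch with hypothetical numbers, to show why 2 x 4 GB is not automatically 8 GB:

```python
# Hypothetical figures only: two 4 GB GPUs running a game.

GPU_VRAM_GB = 4
NUM_GPUS = 2

def usable_afr():
    # Classic AFR (DX11 CFX/SLI): the driver mirrors every resource on
    # both GPUs, so the per-GPU capacity is the effective limit.
    return GPU_VRAM_GB

def usable_explicit():
    # DX12 explicit multi-adapter: the application may place unique
    # resources on each GPU, so in the best case usable capacity
    # approaches the sum -- but only if the game does that work.
    return GPU_VRAM_GB * NUM_GPUS

print(usable_afr(), usable_explicit())  # 4 vs. 8 (best case)
```

    This is why the post above says games have to support it: the stacking comes from the application's resource placement, not from the driver.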
     
  17. AMDJoe

    AMDJoe AMD rep

    Messages:
    115
    Likes Received:
    0
    GPU:
    AMD
    Only time will tell I'm afraid. I'm really excited to see how the first DX12 games perform on both AMD and Nvidia hardware. It's a really exciting year for hardware and PC gaming.
     
  18. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    6,972
    Likes Received:
    120
    GPU:
    Sapphire 7970 Quadrobake
    If it were NVIDIA's code that was subpar, then with the DX11 driver released shortly after Mantle they should have reached parity with AMD. What happened is that they got 3x the efficiency on top of AMD's DX11 driver. Since I can't believe that NVIDIA's hardware is that much better (I am of the completely opposite opinion, actually: tremendous software, mediocre hardware), AMD's code is not even close to being that efficient.

    To be fair, I would rather deal with engineers from you and NVIDIA any day, compared to 99% of game developers. DX12 is awesome for you, since it will take a tremendous weight off your hands (eventually). What I hope is that developers and middleware/engines have enough automatic behavior that games are at least decent, and that the extra manpower you might gain goes into improving the DX11 driver, which will stay very relevant for at least a decade.
     
  19. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,053
    Likes Received:
    40
    GPU:
    RX 580 8GB
    How can you say that? AMD CrossFire support is below acceptable and has been for a while. It can't get much worse for multi-GPU users.
    The benefits of DX12 are in the hands of the developers, as they were with DX11. Overall workload will probably have to increase with DX12, as it's harder to develop against low-level APIs, but multi-GPU support should be easier to optimize since developers have more direct control of the hardware, which makes debugging easier too.
    If they are lazy and don't spend any time/money on it, then it will be the same as it is now (not very good).
     
    Last edited: Jul 6, 2015
  20. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    6,972
    Likes Received:
    120
    GPU:
    Sapphire 7970 Quadrobake
    All modern engines are deferred renderers. They are not designed with multi-GPU in mind. Every time a game like that ships, NVIDIA and AMD engineers literally have to change how the driver and the cards interpret the calls from a game that someone with less knowledge than them has ****ed up.
    Given the competitive environment at this point I don't excuse any of them (NVIDIA's world is far from perfect too), but I would prefer all this manpower to move from SLI/CFX profile nonsense to actual driver and feature improvements. And yes, I would trust an experienced AMD/NVIDIA driver engineer 1000% more than any developer, including Carmack and Tim Sweeney.
     
