Radeon Fury X Beats GeForce GTX Titan X and GTX 980 Ti: 3DMark Bench

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jun 17, 2015.

  1. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    Not really. Even with the tessellation factor dropped, it's still a major performance hit with all the updates. Far Cry had the same problem with the fur and all that stuff -- it kills fps for a marginal visual improvement. It wouldn't be too big of a deal, but TressFX does nearly the same thing and performs significantly better. (I personally think the fur in Far Cry looks like sh*t)

    Pretty much everything Nvidia shows with GameWorks is meh. Take the flame thing they've shown at every conference for the last two years or so: "wooooooooo realistic physics smoke particles weeeeowooo". It's like, yeah, I get that a physics simulation of a flame is cool, complex, and hard to calculate, but when you're demoing a single flame on a black background with a Titan X and it still looks like it's running sub-30 fps, I think you're doing something wrong -- no one wants that **** in a game.
     
  2. waltc3

    waltc3 Maha Guru

    Messages:
    1,445
    Likes Received:
    562
    GPU:
    AMD 50th Ann 5700XT
    I ran two of them in CrossFire for a couple of years... Heck, I can count the number of driver problems I've had with the Catalysts on one hand -- since 2002, when I began using nothing but ATi GPUs after 3dfx went belly up. It's been my experience that people who say that about AMD drivers are just repeating something they heard from someone else, who heard it from someone else, etc. It's certain they have no real experience with the products and drivers -- or if they did, they had trouble because they didn't know what they were doing. If the drivers were systematically bad we'd certainly be aware of it, don't you think?... ;)
     
  3. TheSeekingOne

    TheSeekingOne Guest

    Messages:
    10
    Likes Received:
    0
    GPU:
    8GB
    Bull! I don't recall having any graphics-related issues with my 1650XT, which I bought back in 2007 and still use in my old Athlon X2 rig.
     
  4. meth curd

    meth curd Active Member

    Messages:
    94
    Likes Received:
    2
    GPU:
    2080TI FE
    Anecdotal evidence goes both ways. Two years of struggling with a 7990 and its CrossFire and general driver issues made me simply not give a **** about this release and just go for whatever Nvidia was offering. AMD releasing a card that can challenge the TX/980 Ti is a good thing because we need the competition (or we're all going bankrupt), but I'm personally done with AMD for the foreseeable future. We'll see if their driver release cycle changes with Win 10.
     

  5. Crazy Serb

    Crazy Serb Master Guru

    Messages:
    270
    Likes Received:
    69
    GPU:
    270X Hawk 1200-1302
    I had to make an account once I read about dead cards and how in every thread there's an AMD driver issue. Once upon a time there was the nVidia 196.75 WHQL driver (I think I nailed the version number -- you just can't forget things like that). The driver increased performance, but there was one issue: the auto fan control didn't work, and the fan would remain at its idle starting speed. After two BSODs in Mass Effect, I turned on the AB OSD and saw the fan speed stuck at 30%. But it was too late; the card was damaged and had random BSODs after that, more frequent as time passed. Baking the card helped a couple of times until it completely died.
    That's what a bad driver is, and it had a WHQL sticker on it! I personally have never had driver issues (except that one) with either nV or AMD/ATi.

    As for PhysX, I never used it, since you literally need(ed) a dedicated card for it to get smooth fps. Borderlands 2 is an example where even a GTX 580 was weak. Furthermore, Gearbox confirmed that PhysX on CPUs is limited; otherwise I think i7s would do a better job for sure...

    Also, from what I've seen in Witcher 3 benchmarks, the 7xx series takes a far larger fps hit than AMD cards, but it seems the 7xx is being treated like it's 10+ years old, so who cares?! An nVidia title, with a day-0 driver, and all I can see is that the 270X outperforms the 760 by 20% and is still 40 euros cheaper in my country (which is ~20% on some 200 euros). One of my friends is really disappointed with his 770...
     
  6. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    I am more than happy with AMD's driver release cycle. There is no rational reason to release a WHQL driver unless it has some new feature at 1.0 quality (read: good enough and stable to be for everyone).
    AMD releases about one beta driver per week on average for W10, and there are some side leaks as a bonus.

    I actually think it may confuse a lot of people. The only thing AMD has to do at the driver level is deliver all the promised features and DX12 at the W10 launch.
     
  7. blkspade

    blkspade Master Guru

    Messages:
    646
    Likes Received:
    33
    GPU:
    Leadtek Nvidia Geforce 6800 GT 256MB
    Well, you are dealing with both the 970's 3.5GB issue and the fact that Nvidia tends to consume more VRAM than AMD at similar settings.
     
  8. davido6

    davido6 Maha Guru

    Messages:
    1,441
    Likes Received:
    19
    GPU:
    Rx5700xt
    so we have a new king wooop wooop
     
  9. meth curd

    meth curd Active Member

    Messages:
    94
    Likes Received:
    2
    GPU:
    2080TI FE
    I haven't said anything about WHQL certification or support for unreleased operating systems, but it's good that you're happy.

    This might help with having a more informed opinion, though:

    http://support.amd.com/en-us/download/desktop/previous?os=Windows 8.1 - 64

    e: I guess you can add Omega and beta 15.5 to the list (which are half a year apart)
     
  10. MADOGRE

    MADOGRE Guest

    Messages:
    11
    Likes Received:
    0
    GPU:
    Gigabyte 980ti G1
    Have you looked at the tests of how much memory 3DMark uses? At 1440p it's like 1.5GB, so memory has little to do with this test.
     

  11. Stukov

    Stukov Ancient Guru

    Messages:
    4,899
    Likes Received:
    0
    GPU:
    6970/4870X2 (both dead)
    Don't know what the deal is with all the new people making accounts just to complain and make inaccurate statements about the Fury X, but a large memory pool is only needed when there is insufficient bandwidth to swap the necessary data in and out of the pool.

    If you had a billion billion GB/s of memory bandwidth, then so long as no single asset was too big for the pool in its totality, it wouldn't matter how big the pool is.

    As for whether 4GB with 640GB/s of memory bandwidth is enough, we will just have to wait and see from the benchmarks, and stop making ignorant statements until then.
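    To put rough numbers on that trade-off, here is a minimal Python sketch of how much of a pool could be refreshed per frame at various transfer rates. The figures are illustrative round numbers I've assumed, not measured specs, and in practice streaming new assets into VRAM is bounded by the PCIe link rather than the VRAM bus itself:

    Code:
    # Rough sketch of the trade-off above: how much data can be swapped
    # into a memory pool during one frame at 60 fps. With enough bandwidth
    # a small pool can be refilled on the fly; with too little, assets
    # must stay resident, so the pool has to be bigger.
    def gb_swappable_per_frame(bandwidth_gb_s, fps=60):
        """Data (in GB) that can cross the bus in one frame interval."""
        return bandwidth_gb_s / fps

    # Assumed round figures: PCIe 3.0 x16, 256-bit GDDR5, first-gen HBM.
    for label, bw in (("PCIe 3.0 x16", 16.0), ("GDDR5 256-bit", 224.0), ("HBM1", 512.0)):
        print(f"{label:>14}: {gb_swappable_per_frame(bw):.2f} GB per frame")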
     
  12. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    Why would the bandwidth have an impact on how much needs to be stored? The game pulls that data in from the HDD and stores it in memory for the GPU to access. If I have 4GB of textures that I need for the scene I'm rendering, they have to be in memory, regardless of how fast the memory bandwidth is. Now, how you store that data -- whether it's compressed, or you're doing driver tricks to store less and evict it sooner when it's no longer needed -- that's a different story. But the memory bandwidth itself shouldn't affect the total. For the most part I think 4GB is sufficient, especially when 99.9% of people aren't even running 4K monitors, which is probably the only time 4GB+ gets utilized.
     
  13. Lane

    Lane Guest

    Messages:
    6,361
    Likes Received:
    3
    GPU:
    2x HD7970 - EK Waterblock
    You are completely right, but HBM2 will only come next year with Pascal (Nvidia) and the next AMD GPU series, so my comment was more about the actual production of HBM.

    And when I read the other comments, I see there's a lot of misunderstanding around caching, pinned textures, etc.

    As I said, memory in GPU compute or graphics is a really complex thing...

    In Blender I can put 500x more triangles in a scene than in the most polished games, with a level of graphical features absolutely unbelievable in a game (let's skip the details), and even with this extremely complex scene, raytraced with thousands of 16K textures, I only use 2GB of RAM at most...
     
    Last edited: Jun 17, 2015
  14. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    Idk, my thought process is this:

    Take a single frame in a game -- an outdoor scene. You have grass, mountains, sky, clouds, etc. Those are all textures; let's say it's 4GB of textures. In order to render that frame, those textures need to be in memory. It doesn't matter how fast they go from memory to the GPU, they need to be there. Now say you have 60 frames and you need to draw all 60 in a second -- that's where memory bandwidth comes in: you need to pull 4GB of textures into the GPU in 1/60th of a second. In reality games aren't like this, because the game keeps textures cached for future sequences, so the bandwidth requirement is lower than in the example -- it's essentially keeping data in memory in case it's needed, and it may not necessarily be needed. But the individual frame still has a minimum VRAM requirement, and that requirement is a fixed number sitting in the GDDR/HBM regardless of bandwidth.

    Now, if AMD is using the 285's compression system, suddenly that 4GB is reduced to, say, 2.8GB, and the bandwidth required to render each frame drops with it.
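    As a quick sanity check on that arithmetic, a minimal Python sketch -- the 4GB working set and the 4GB-to-2.8GB compression figure are the numbers from the post above, not published specs:

    Code:
    # Worst-case bandwidth if every frame's full texture set had to be
    # re-read from VRAM (real games cache across frames, so this is an
    # upper bound, per the caveat above).
    frame_textures_gb = 4.0   # per-frame texture working set from the example
    fps = 60

    uncompressed = frame_textures_gb * fps             # GB/s needed
    print(f"Uncompressed: {uncompressed:.0f} GB/s")    # 240 GB/s

    # Lossless delta color compression, using the 4GB -> 2.8GB ratio above.
    ratio = 2.8 / 4.0
    print(f"Compressed:   {uncompressed * ratio:.0f} GB/s")   # 168 GB/s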
     
    Last edited: Jun 17, 2015
  15. Lane

    Lane Guest

    Messages:
    6,361
    Likes Received:
    3
    GPU:
    2x HD7970 - EK Waterblock
    .,.....
     
    Last edited: Jun 17, 2015

  16. Robbo9999

    Robbo9999 Ancient Guru

    Messages:
    1,855
    Likes Received:
    442
    GPU:
    RTX 3080
    Thanks for correcting me. Well, in that case I take back what I said; it is pretty impressive that AMD is beating NVidia's 28nm offering considering AMD is indeed on 28nm too. I found a link to substantiate what you told me as well:
    https://www.techpowerup.com/213517/...-products-based-on-the-fiji-silicon.html?cp=2

    (And thanks to the other couple of guys who corrected me too... couldn't get multi-quote to work!)
     
    Last edited: Jun 17, 2015
  17. kakarot

    kakarot Maha Guru

    Messages:
    1,134
    Likes Received:
    20
    GPU:
    pny rtx 4090

    You can go over 4GB without issue, but at what AA setting? GTA 5 will not run out of VRAM even at 1600p maxed out with 2xAA and 8x reflection AA. It uses 8,056MB according to the in-game counter, and there are zero issues with that.
     
  18. alanm

    alanm Ancient Guru

    Messages:
    12,234
    Likes Received:
    4,436
    GPU:
    RTX 4080
    I hope these cards kick ass. I'm not comfortable with Nvidia always being in the lead. More importantly, I hope they sell well and create intensified competition that drives prices down on both sides.
     
  19. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    Well yeah, and therein lies the issue with GDDR. We are at the point now where games are rendering images where a single frame requires 4GB of RAM. Now try feeding 4GB across a 256-bit bus at 224GB/s -- it's terrible. Which is why the 290X starts catching up to the 980 at 4K: at 4K, the bandwidth required to move textures to the GPU increases.

    If there were ever an image that required 12GB of textures to render, the Titan X would sh*t itself. It couldn't possibly move 12GB of textures over 336.5GB/s fast enough for the GPU to render at a high rate, not to mention that the card wouldn't be able to process that much data fast enough.

    HBM solves the first half: it can feed the GPU data quickly enough. Hopefully Fiji can handle the second half and process that data and put it on screen fast enough.
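    Turning those figures into a crude upper bound on frame rate (bandwidth divided by per-frame data; this ignores caching and compute limits, so it is only a ceiling, and the 512GB/s entry assumes the HBM spec AMD announced for Fiji):

    Code:
    # Crude fps ceiling: total bus bandwidth / data that must move per frame.
    def fps_ceiling(bandwidth_gb_s, frame_data_gb):
        return bandwidth_gb_s / frame_data_gb

    print(fps_ceiling(224.0, 4.0))    # 256-bit GDDR5, 4GB frame:  56.0
    print(fps_ceiling(336.5, 12.0))   # Titan X bus, 12GB frame:   ~28.0
    print(fps_ceiling(512.0, 4.0))    # HBM1 (Fiji), 4GB frame:    128.0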
     
  20. Robbo9999

    Robbo9999 Ancient Guru

    Messages:
    1,855
    Likes Received:
    442
    GPU:
    RTX 3080
    I'm a fan of HBAO+, though -- you can set HBAO+ to "Performance" mode in the NVidia Control Panel and the fps hit is not very large; it makes games look good in my opinion, with better depth to the lighting. In Far Cry 4, the GameWorks God Rays were not much of a performance hit either -- less so than some of the lesser options for that particular setting. Some of the stuff is a resource hog, though!
     
