Radeon Fury X Beats GeForce GTX Titan X and Fury to GTX 980 Ti: 3DMark Bench

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jun 17, 2015.

  1. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
    Even with +100mV? What's your model?

    I have a reference XFX. I can do 1200/1500 benchmark suicide runs. Elpida :bang:
     
  2. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super

    Of course this +100MHz OC can turn out to be a weird way of AMD saying: Yes, you can OC Fury.
    You never know with these marketing guys...
     
  3. ---TK---

    ---TK--- Guest

    Messages:
    22,104
    Likes Received:
    3
    GPU:
    2x 980Ti Gaming 1430/7296
    That guy must have never heard of multi-GPU, tbh.
     
  4. ScoobyDooby

    ScoobyDooby Guest

    Messages:
    7,112
    Likes Received:
    88
    GPU:
    1080Ti & Acer X34
    Bingo. It's easy to tell who has experience in this thread and who doesn't. Thank you for posting this.
    Anyone who's done their research on the Korean monitors will know this; otherwise they risk buying the wrong type. I know I almost did, before I switched my order after I'd placed it.
    On a side note, mine will hit 100Hz, 110 if I push it, but that's it, and that's plenty good enough for me.
    But, point being, DVI on its own is fine. Saying it needs to die is stupid.

    :wanker: well played

    Yes, apparently 120 and 144Hz monitors are being sold because nobody plays above 60fps. There's some logic for ya.
     

  5. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,677
    Likes Received:
    287
    GPU:
    RX 580 8GB
    Are you the type of person who, when people talk about bugs in games or other software, tells them to live with the bugs or write the code themselves?
    That's the same as complaining about poor ports, or about a display connection that the majority of monitors still use suddenly being removed in a new generation of graphics cards. If we don't complain, nothing will change. Criticism only looks like bitching to those who can't handle criticism.

    HDMI is 13 years old; should we kill that off too and all migrate to DP?
     
  6. Chillin

    Chillin Ancient Guru

    Messages:
    6,814
    Likes Received:
    1
    GPU:
    -
    HDMI was updated as recently as two months ago; can you say the same for DVI?

    And your analogy is poor. A better analogy would be people crying that their Windows XP system can't play some new games even though they just bought the system.
     
  7. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,677
    Likes Received:
    287
    GPU:
    RX 580 8GB
    Never mind. It's silly arguing over DVI :)
     
    Last edited: Jun 19, 2015
  8. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    You're still on about ports? C'mon guys...
    Half of you sell your used HW for half the price of new; the other half, like me, just give it away to someone nearby.
    If I didn't have DP on my monitor and went for a Fury X, that same day some family member would be very happy to get a 120Hz 1080p monitor in perfect condition.
    When the Fury X arrives, I'm giving my Accelero HD7970 to my cousin's wife's gaming PC.

    Look at it this way: how many monitors can you connect to a Fury X over DP in SST mode? How many can you connect to its DP in MST mode?

    AMD was aiming at 4K and triple-screen users with this.
    With 2x Fury X / 2x 980 Ti you get enough performance to drive 3x 1440p screens and play games at a decent frame rate.
    So why not have them all connected via the same connection type, which uses the same colour transfer standard, so all shades are the same?
     
  9. Stukov

    Stukov Ancient Guru

    Messages:
    4,899
    Likes Received:
    0
    GPU:
    6970/4870X2 (both dead)
    The buffer loads and swaps between system memory and GPU memory across the PCI-E bus. Technically, the GPU only needs whatever is being drawn on screen at a given time. The reason you need large pools of memory is if the data set is so large that swapping incurs a heavy penalty, or the bandwidth is so limited that you have to wait cycles for the allocation to swap from system RAM to GPU RAM.

    Sufficient bandwidth, without large files, won't incur cycle penalties.

    There are likely few games with so much data that over 4GB of memory is actively required every second while frames are drawn. The reason to go over 4GB is if you have more data to swap than the bandwidth will allow for.

    Think of it this way: imagine you have a bucket that holds 4 gallons (or litres for the rest of the world) of different kinds of liquid. Let's say the bucket preloads with 3g. Suddenly you need 2g more blue liquid, but 1g of green is no longer needed. If the hole in your bucket can swap 3g per turn, you can remove the 1g of green and pour in the 2g of blue without having to stop and wait.

    If you have an 8g bucket loaded to 3g, and need to remove 1g of green and add 2g of blue, but your hole only allows 1g at a time, you still need two turns to pour in the 2g of blue, but since you have 8g total, going up to 5g before draining the green doesn't cost you another hit. If you only had 4g, you would have to first remove the 1g of green, then add 1g of blue, then the other 1g of blue, which is three turns.

    Make sense?
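
    A minimal sketch of that bucket idea in Python, if it helps; the function and every number in it are purely illustrative (not real GPU figures). It just counts how many transfer turns pass before the newly needed data is usable, with and without headroom:

        import math

        def turns_until_new_data_ready(capacity, resident, new_data, evictable, rate_per_turn):
            # Free room available right now.
            free = capacity - resident
            if new_data <= free:
                # Enough headroom: eviction stays off the critical path entirely.
                critical = new_data
            else:
                # Must evict enough to make room before the last of the new data can land.
                critical = new_data + min(evictable, new_data - free)
            return math.ceil(critical / rate_per_turn)

        # Small bucket: 4g capacity, 3g resident, need 2g new, 1g can be evicted, 1g moves per turn.
        print(turns_until_new_data_ready(4, 3, 2, 1, 1))   # -> 3 turns
        # Big bucket: 8g capacity, same job; the eviction no longer blocks the new data.
        print(turns_until_new_data_ready(8, 3, 2, 1, 1))   # -> 2 turns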
     
  10. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
    You don't get to double dip on huge bandwidth and then say, oh, I don't need a big memory pool, I'm too fast for y'all.
    Yes, you are fast, not by choice but out of necessity, and you need all that bandwidth.

    Huge bandwidth is necessary to properly support huge GPU geometry, not to cut down on memory pool size.
     

  11. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    But how is memory bandwidth going to increase the swap speed? You're limited by PCI-E/DDR speed when swapping stuff in.

    If you have a 4-gallon bucket, 2 gallons are dedicated to what you need immediately, while the other 2 are dedicated to what you may need. When the GPU renders out a frame, those 2 gallons are being read but not drained; they are still in the bucket. If you suddenly need something that isn't cached, gallons 3/4 need to be removed and new data swapped in. But that speed isn't governed by the memory bandwidth, it's governed by the PCI-E bandwidth. And generally the GPU is preloaded with enough stuff that it's never relevant, unless you're actually limited on total memory. Where HBM comes into play is when you have a 4-gallon bucket and the GPU needs all 4, every 1/60th of a second. In that case GDDR5 can only read 3 every 1/60th, so the GPU waits for the other gallon to be read.

    The 970 fiasco is a perfect example of this. Nvidia has a heuristics system built into the driver that keeps the most frequently used textures in the fast memory. If it has to, less-used stuff is swapped into the 0.5GB partition. If the game is rendering a frame that requires more memory than the 3.5GB and 0.5GB partitions combined, it pulls it from system memory, which is significantly slower than both.

    The rate at which the memory is read by the GPU doesn't increase the rate at which it's swapped in/out from the system.

    That being said, the data swapped into memory can be compressed (which is what Maxwell/Fiji/Tonga do better in their respective cards). Also, AMD claims that in the driver they can improve the way data is stored/pulled from memory and cached. So in our 4-gallon example, AMD is saying the 2 gallons that are needed immediately get compressed down to 1 gallon, so 3 gallons can be used as cache. And if the game suddenly requires 3 gallons, the card can be smarter about the cached gallon.
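
    To put rough numbers on that (these are back-of-the-envelope approximations, not benchmarks: ~337GB/s for Titan X-class GDDR5, ~512GB/s for Fury X's HBM, ~16GB/s for PCI-E 3.0 x16), the point is that even a small cache miss over PCI-E costs more than the whole frame budget, no matter how fast the VRAM itself is:

        FRAME_BUDGET_MS = 1000.0 / 60            # ~16.7 ms per frame at 60 fps

        def read_time_ms(gigabytes, bandwidth_gb_per_s):
            # Time to move this much data at the given bandwidth, in milliseconds.
            return gigabytes / bandwidth_gb_per_s * 1000.0

        print(FRAME_BUDGET_MS)         # ~16.7 ms available per frame
        print(read_time_ms(4, 337))    # ~11.9 ms to read a 4GB working set from GDDR5
        print(read_time_ms(4, 512))    # ~7.8 ms to read the same 4GB from HBM
        print(read_time_ms(0.5, 16))   # ~31 ms to pull just 0.5GB over PCI-E 3.0 x16,
                                       # already roughly double the frame budget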
     
  12. Evildead666

    Evildead666 Guest

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    The memory itself isn't being compressed any more than it was before (DXTC and all that).
    They ARE majorly improving the way it is stored/fetched; that is the Tonga/Fiji upgrade. They only mentioned compressed ways to access/store the data, not compressing all the data itself.
     
  13. Rich_Guy

    Rich_Guy Ancient Guru

    Messages:
    13,138
    Likes Received:
    1,091
    GPU:
    MSI 2070S X-Trio
    12K Gaming With One AMD Radeon R9 Fury X Graphics Card

    Source :- http://www.legitreviews.com/12k-gaming-with-one-amd-radeon-r9-fury-x-graphics-card_166585
     
  14. Stukov

    Stukov Ancient Guru

    Messages:
    4,899
    Likes Received:
    0
    GPU:
    6970/4870X2 (both dead)
    That is the $649 question.

    To be honest, we won't know until we get the benchmarks and run some tests. Part of me wants to say I highly doubt AMD would debut a flagship without having some data that shows it won't be an issue. The other part of me has seen AMD **** some things up with over-optimistic pre-launch marketing.

    If I had to guess, at full x16 PCI-E 2.0 speed it will most likely be fine, though lower speeds might suffer at higher resolutions. The other guess I would make is that it will cause the CPU/RAM and the whole memory subsystem to be more taxed (or used, if you prefer).

    It will probably be fine on high-end systems, but suffer penalties on crappier or older systems. However, if you are running 4K, 8K, or 12K, you probably have a top-of-the-line system. At 1080p/1440p it probably won't have an impact.
     
  15. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    I can write it a thousand times and it will still be ignored.
    Take people with SLI 970s/980s or CF 290Xs; those are 4GB cards (give the 970 that small benefit here).
    If one card (Fury X) is going to be broken at 4K because it has to constantly load data from RAM through PCIe, then those SLI/CF setups will be broken twice as badly, because on normal systems you don't have 2x PCIe 3.0 x16 available (you're limited by the total number of lanes).

    I don't see threads all over stating: my CF isn't working at 4K, my SLI is performing poorly at 4K (or heavy stutter, because that's what you get when you really run out of VRAM).

    So either there are absolutely no people running those cards in SLI/CF with a 4K screen, or it simply works.
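
    The rough lane math behind that (approximate figures: PCI-E 3.0 moves about 0.985GB/s per lane after encoding overhead, and most consumer boards split one x16 link into x8/x8 when two cards are installed):

        PCIE3_GB_PER_S_PER_LANE = 0.985

        def slot_bandwidth_gb_per_s(lanes):
            # Usable one-direction bandwidth for a slot with this many PCI-E 3.0 lanes.
            return lanes * PCIE3_GB_PER_S_PER_LANE

        print(slot_bandwidth_gb_per_s(16))  # ~15.8 GB/s for a single card in a full x16 slot
        print(slot_bandwidth_gb_per_s(8))   # ~7.9 GB/s per card once SLI/CF drops the slots to x8/x8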
     

  16. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
    Fury X is AMD's first 600mm² GPU; it's AMD's super-high-end. Fury X is NOT(!) competing against the 980 or 290X. It's competing against the 980 Ti and its 50% larger VRAM pool.

    I am sure it will be one fine card. But if you subscribe to GameWorks conspiracy theories(?), then Nvidia and their minions :) will be able to sink it at will by raising VRAM requirements.

    The Fury X has not even been released, while the 980 and 290X have been with us for quite some time.
    Are you seriously equating future VRAM requirements and expectations with those of one- or two-year-old cards? The 390/390X are already coming with 8GB, and the competition carries 6/12GB :3eyes:

    Again, I'm 100% sure it will be a great card; just don't try to persuade us there is absolutely nothing wrong with 4GB, just because older cards also have 4GB and just because AMD decided to "throw a few engineers at the problem".
     
  17. Stukov

    Stukov Ancient Guru

    Messages:
    4,899
    Likes Received:
    0
    GPU:
    6970/4870X2 (both dead)
    You seem to be ignoring why a large pool of memory is necessary. In theory, these cards could use DDR3 memory instead of GDDR5 and have 100GB of GPU RAM. If they did, and someone came out with a 50GB GDDR5 version, would we say it is suddenly inferior?

    These are stats on a spec sheet, not end results. Having a limited memory pool might be terrible, or it might be fine because of the massive bandwidth; we don't know. I don't know where you get that we are trying to persuade people there is nothing wrong with 4GB, because it's an unknown until tests are done. There will certainly be testing eventually that looks at the 4GB and, given how the HBM is implemented, whether it causes issues at higher resolutions.
     
  18. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
    Then you must have missed all those posts saying HBM = super-high bandwidth, so 4GB is a non-issue because 4GB of HBM is the equivalent of 6/8GB of GDDR5,

    or those posts saying that AMD using texture compression on GCN 1.2+ means the available pool should be multiplied by 1.5x.

    Combine those two and it follows that 4GB HBM = 9/12GB GDDR5. Right? :D
     
  19. moab600

    moab600 Ancient Guru

    Messages:
    6,658
    Likes Received:
    557
    GPU:
    PNY 4090 XLR8 24GB
    The more I hear about the Fury, the less I'm impressed; I don't think it's going to be impressive at all.

    Yes, it will probably slightly beat the 980 Ti or equal it, but the real question, outside of price, will come with AIB boards; as we know, Maxwell has no competition when it comes to overclocking.

    At least the Fury could force Nvidia's hand to lower prices.
     
  20. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    I am doing one thing only: attacking the "4GB is not enough" statement.
    Right now you have considerably stronger multi-GPU setups which have the same 4GB of VRAM. If those run out of VRAM, they'll be affected much more than one Fury X.
    And you don't see it happening. nVidia, and from time to time even AMD, puts out those ridiculous cards with inferior memory bandwidth paired with huge amounts of VRAM.
    Do you really think the Titan X can use those 12GB of VRAM? At 337GB/s you can read those 12GB roughly 28 times per second.
    Even if you read just 6GB worth of textures per frame, you top out around 56fps. That would be for Wolfenstein: The New Order, because it has every texture unique per object.
    In normal games, textures are reused, so having 6GB of textures as the working set for a given frame can mean some of them get read 10 times, and the total amount of data that has to be processed each frame may jump to 10~20GB.
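
    Spelling out that arithmetic (337GB/s is the Titan X's quoted memory bandwidth; the per-frame sizes are the hypothetical working sets from this post):

        TITAN_X_BANDWIDTH_GB_PER_S = 337

        def fps_ceiling(gigabytes_read_per_frame):
            # Upper bound on frame rate if this much data had to be read from VRAM every frame.
            return TITAN_X_BANDWIDTH_GB_PER_S / gigabytes_read_per_frame

        print(fps_ceiling(12))   # ~28 fps if all 12GB were touched every frame
        print(fps_ceiling(6))    # ~56 fps for a 6GB unique working set
        print(fps_ceiling(2.5))  # ~135 fps for the 2-3GB per frame that games actually use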

    But luckily for the Titan X, while games may allocate even 8GB, they still aren't using more than 2~3GB per frame. And that is why those 4GB cards are not crushed. That is why my 3GB card is not crushed. That is why my friend's 2GB GTX 680 is not crushed.

    But to each his own: you can go and use a mod which improves texture fidelity by 5% and doubles the amount of VRAM used per frame, and crush those 2/3/12GB cards at will.
    Most game studios are not that stupid; for stupidity there are modders who do not understand the technology they are messing with.
    - - -
    And you may notice a pattern in my posting: I am not writing that the Fury X kills the Titan X or 980 Ti. I do not know any such thing. I only keep attacking that 4GB thing.
    In 90% of games I barely touch 1.5GB of VRAM utilization, even at 1440p with AA. But I mostly don't use either, as I love higher fps.
     
    Last edited: Jun 20, 2015
