AMD Releases Radeon R9 Fury X details + exclusive photos

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jun 18, 2015.

  1. Athlonite

    Athlonite Maha Guru

    Messages:
    1,358
    Likes Received:
    52
    GPU:
    Pulse RX5700 8GB
    Low clock speed but high-volume bandwidth at that, compared to GDDR5
     
  2. davido6

    davido6 Maha Guru

    Messages:
    1,441
    Likes Received:
    19
    GPU:
    Rx5700xt
    Wonder what drivers it will get benched on. Let's hope not the AMD Catalyst 15.x (15.200.0.0, May 5)?
     
  3. pharma

    pharma Ancient Guru

    Messages:
    2,496
    Likes Received:
    1,197
    GPU:
    Asus Strix GTX 1080
  4. Redemption80

    Redemption80 Guest

    Messages:
    18,491
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    I hope that isn't true, as it shows Fury X overclock performance but it loses to a 980 Ti every time.

    It might be both cards overclocked though.
     

  5. Ourasi

    Ourasi Guest

    Messages:
    294
    Likes Received:
    7
    GPU:
    MSI RX 480 Gaming X 8GB
    It's an error: the first column is Fury X stock, the second column is Fury X with a 100 MHz clock bump to show how it reacts to overclocking. A couple of the tests use Mantle, which does not work on any green card. So it's Fury X vs. Fury X, not the Ti, officially corrected by AMD.
     
  6. pharma

    pharma Ancient Guru

    Messages:
    2,496
    Likes Received:
    1,197
    GPU:
    Asus Strix GTX 1080
    Do you have an "official" link?
     
  7. Ourasi

    Ourasi Guest

    Messages:
    294
    Likes Received:
    7
    GPU:
    MSI RX 480 Gaming X 8GB
    Last edited: Jun 19, 2015
  8. pharma

    pharma Ancient Guru

    Messages:
    2,496
    Likes Received:
    1,197
    GPU:
    Asus Strix GTX 1080
    Thank you ...
     
  9. gerardfraser

    gerardfraser Guest

    Messages:
    3,343
    Likes Received:
    764
    GPU:
    R9 290 Crossfire
    Man, I would not like to be in Hilbert Hagedoorn's shoes doing this review.
    The reason I say this is that it is going to take forever to do, and with the added Frame Rate Target Control it may get confusing for a lot of people if Hilbert goes into detail. And of course there is all the new stuff on the Fury X card.

    Anyway day one buy for me and hope it overclocks like mad.
     
  10. Darren Hodgson

    Darren Hodgson Ancient Guru

    Messages:
    17,222
    Likes Received:
    1,540
    GPU:
    NVIDIA RTX 4080 FE
    I can still see jaggies when downsampling from 4K on a 24" 1920x1200 screen so I can only imagine that the issue is compounded on a 32"+ native 4K display even with an extended viewing distance.

    The reality is that it doesn't matter what resolution you use you will always need some kind of anti-aliasing due to the pixel-based nature of displays. As soon as you have anything other than a straight horizontal/vertical edge then you get stair-shaped jagged lines.

    I think these new cards will undoubtedly be great, drivers permitting of course, but what puzzles me is why AMD would release flagship cards with 'only' 4 GB of VRAM. Even on my GTX 980 with 4 GB of VRAM I was encountering stuttering in games like Middle-earth: Shadow of Mordor when using Ultra textures at 1920x1200, and it is only now on my 6 GB GTX 980 Ti that I can play the game 100% smoothly on maxed-out settings. And games like GTA V and Watch_Dogs can easily hit the 4 GB limit at 1920x1200 too, so it is going to be even higher at 4K. :3eyes:

    It's one of the reasons I upgraded from a 4 GB to a 6 GB graphics card and I only play at 1920x1200 (using the extra RAM and performance for 4K downsampling too in older less demanding games).
     
    Last edited: Jun 19, 2015

  11. geogan

    geogan Maha Guru

    Messages:
    1,271
    Likes Received:
    472
    GPU:
    4080 Gaming OC
    Solution to all this: we need a new company/manufacturer, and most importantly, all its designs and logos must be BLUE.

    Then it's up to the consumer to choose their RGB colour preference!...
     
  12. snip3r_3

    snip3r_3 Guest

    Messages:
    2,981
    Likes Received:
    0
    GPU:
    1070
    The only current possibility is Intel, but they might eventually be setting themselves up for an antitrust case if they accidentally wipe out AMD and Nvidia (it isn't hard to point fingers at price fixing when your competitors are non-existent or not really competing). No one else really has the know-how or funds to start this late in the GPU field without some seriously disruptive technology/invention/discovery (it can happen, it'll just be difficult, not to mention extremely expensive).

    Intel's logo/designs are also mostly blue, so they must be it. ;)

    PowerVR is also a potential candidate since they are making great designs, though they don't seem awfully interested in reentering the higher power envelope markets (and no one can really blame them).
     
  13. geogan

    geogan Maha Guru

    Messages:
    1,271
    Likes Received:
    472
    GPU:
    4080 Gaming OC
    Ha ha. Yes, you're right, Intel would be the only company with the power, money, experience and the extremely expensive wafer fabrication plants (the most advanced in the world, just 3 km from my house right now) already up and running to build these GPUs! And already blue too :banana:
     
  14. Chillin

    Chillin Ancient Guru

    Messages:
    6,814
    Likes Received:
    1
    GPU:
    -
    AMD doesn't really have anything Intel wants/needs besides patents.

    Intel's Iris Pro is already faster and more efficient than AMD's iGPU solutions. Intel already has HPC cards (Xeon Phi). What exactly does AMD have to offer it besides an unprofitable company with an either obsolete or redundant product catalog (patents aside)?
     
  15. Frances

    Frances Active Member

    Messages:
    71
    Likes Received:
    5
    GPU:
    EVGA 980Ti
    The only gripe I see is the lack of HDMI 2.0. What do you think would happen if AMD waited at least another year to release a GPU on the new node? They would go bankrupt. You sound like a troll or a paid spokesman for the jolly green giant. The 980 Ti is rated at 250 W and may not be as fast as the Fury X, so why do you mention power at all? Do you really think AMD just threw this together right after the announcement or launch of the 980 Ti? AMD can already compete with the 980 Ti with slightly slower GPUs that have better prices.
     

  16. Frances

    Frances Active Member

    Messages:
    71
    Likes Received:
    5
    GPU:
    EVGA 980Ti
    The reason they most likely went with "only" 4 GB of memory is probably the cost of HBM. The increased bandwidth may mitigate the possibility of running out of frame buffer. We won't know until the reviews, but I am betting it will be okay, as AMD's GPUs have historically had an overkill amount of memory.
     
  17. snip3r_3

    snip3r_3 Guest

    Messages:
    2,981
    Likes Received:
    0
    GPU:
    1070
    No, the reason is a limitation of HBM 1.0: 4 stacks = 4 GB.
    Memory speed is also not a substitute for quantity.
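The stack limit and the bandwidth it buys can be sketched with back-of-envelope arithmetic, using the published HBM1 figures (1 GB and a 1024-bit bus per stack, 500 MHz DDR giving an effective 1 Gb/s per pin):

```python
# Back-of-envelope HBM1 arithmetic for Fiji: four stacks on the
# interposer, 1 GB and a 1024-bit bus per stack, 1 Gb/s per pin.
STACKS = 4
GB_PER_STACK = 1           # HBM1 limit, hence the 4 GB cap
BUS_BITS_PER_STACK = 1024
PIN_GBITS_PER_S = 1.0      # 500 MHz DDR

capacity_gb = STACKS * GB_PER_STACK
bandwidth_gb_s = STACKS * BUS_BITS_PER_STACK * PIN_GBITS_PER_S / 8

print(capacity_gb)     # 4  -> the HBM1 ceiling the post refers to
print(bandwidth_gb_s)  # 512.0 GB/s, vs ~336 GB/s on a 980 Ti
```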
     
  18. Humanoid_1

    Humanoid_1 Guest

    Messages:
    959
    Likes Received:
    66
    GPU:
    MSI RTX 2080 X Trio
    Well, actually it is, in some situations.
    Part of the reason there was a race to add more GDDR5 was to increase memory bandwidth, something HBM has in spades without needing excessive quantities of itself to achieve it.

    This allows a single Fury X to run say Sniper Elite III in a 3x 4K screen setup at a near steady 60 fps :)


    Apart from that, AMD has also made enhancements to how efficiently these HBM cards use the memory they are equipped with, reportedly making them equal to a 6 GB card. Real reviews here on the 24th will tell the truth on this part, of course.
     
  19. Kaerar

    Kaerar Guest

    Messages:
    365
    Likes Received:
    48
    GPU:
    5700XT
    That reminds me of the line "there's no replacement for displacement", something turbos have been disputing since their invention; then along came the Wankel rotary and blew the claim out of the water.

    At the end of the day it's about reducing bottlenecks. The massive bandwidth of HBM speeds things up immensely, combined with the compression, which, while claimed to be equal to a 6 GB card, probably makes it closer to 5 GB of GDDR5 in reality. Plus, even a proper 4K setup will only use about 2-3 GB of VRAM if the game is written correctly. Lazy devs notwithstanding, it should mean the 4 GB of HBM will be sufficient for the 4K addicts, at least until 120 Hz becomes the minimum target; then we may need the next-gen Fury and Pascal.

    8K on the horizon also brings a nasty need for larger VRAM, but it won't be mainstream until 2018 at the earliest.
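Purely to illustrate the numbers being tossed around in this thread (the ratios below are hypothetical, taken from the posts here, not from AMD), the "equal to 6 GB" claim versus the "closer to 5 GB" guess work out to:

```python
# Hypothetical effective-capacity arithmetic; the ratios are
# illustrative, based on claims in this thread, not AMD figures.
physical_gb = 4.0

implied_ratio = 6.0 / physical_gb   # the "equal to a 6 GB card" claim
modest_ratio = 1.25                 # the "closer to 5 GB" guess

print(implied_ratio)                # 1.5x compression implied
print(physical_gb * modest_ratio)   # 5.0 GB effective
```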
     
  20. snip3r_3

    snip3r_3 Guest

    Messages:
    2,981
    Likes Received:
    0
    GPU:
    1070
    I absolutely agree that it is extremely fast. However, it is wrong to assume it has more than 4 GB of actual capacity. Maxwell also has compression techniques; that doesn't mean either of these cards has "more" memory than it is equipped with. The main issue here is the same as with system RAM: the goal is ultimately to store the things you will want to use somewhere as close to and as fast for the processor as possible. I don't care if you have quad-channel DDR4; if an application uses more memory than you physically have, you will have to fall back on the pagefile (VMs, large CAD/rendering projects, or lots of Adobe products running at once). Even with PCI-E SSDs, that is still much slower than good old dual-channel DDR3. A graphics card has to tap into system RAM to swap, which is still many times slower even with quad-channel DDR4. Streaming textures is one way to solve this, but previous implementations haven't exactly been stellar, and it also means the number of concurrent unique textures can't be as high as with more VRAM in the first place.

    As for whether 4 GB is enough or not, I think it is extremely close to being insufficient within just a year or two. Look at the new E3 showcases and the games launched this year: developers are gobbling up large amounts of VRAM now because the consoles have a unified memory architecture (= sometimes wasteful usage). If I were to invest in a relatively expensive card, I'd like it to have more VRAM. 6 GB seems to be the safe point, as that is around where the cap on consoles is. However, most people buying Fiji cards will probably have a higher-resolution monitor setup, either 4K or surround, meaning the VRAM requirement will also increase. Fiji's direct competitors both have more VRAM, something that will come back to haunt it as games get progressively larger and more advanced.

    The difference is that with the engines, both approaches achieve the same result: power (through acceleration).

    With VRAM/RAM, the quantity caps the number of concurrent items that can be loaded (which will become an issue, even if it isn't right now). Speed is completely unrelated to this aspect, because GDDR5/HBM are both way faster than what system RAM (or PCI-E 3.0 x16) can supply. Once you hit the point of VRAM saturation, the bottleneck is no longer on the graphics card at all.

    A more accurate picture would be equipping the fastest engine in the world with a smaller gas tank: regardless of how fast you go at the start, you will have to stop to refuel more often, so depending on the length of the race, you might lose. (I'm bad with analogies)
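The bandwidth ladder that post describes can be put into rough numbers (published peak figures for these parts, not measurements; the spill path once VRAM overflows is VRAM -> PCIe -> system RAM):

```python
# Rough peak-bandwidth ladder for the swap scenario described above.
# Figures are the parts' published peaks, not measured throughput.
tiers_gb_s = {
    "HBM1 (Fury X)":           512.0,
    "GDDR5 (980 Ti, 384-bit)": 336.5,
    "DDR3-1600 dual channel":   25.6,
    "PCIe 3.0 x16":             15.75,
}

# Once VRAM overflows, texture fetches crawl over the PCIe link:
slowdown = tiers_gb_s["HBM1 (Fury X)"] / tiers_gb_s["PCIe 3.0 x16"]
print(round(slowdown))  # ~33x bandwidth drop past the 4 GB mark
```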
     
    Last edited: Jun 21, 2015
