Computex 2015 Exclusive: AMD Fiji GPU Die Photo

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jun 3, 2015.

  1. Undying

    Undying Ancient Guru

    Messages:
    24,381
    Likes Received:
    11,796
    GPU:
    XFX RX6800XT 16GB
What is there to optimize, lol? Brute force, ridiculously high bandwidth will help when maxing out 4K with filters. Nvidia should get Pascal out as soon as possible or they will be left behind.
     
  2. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
Your statements keep changing. While I do not know Fiji's performance, this is AMD's official statement:
    "A new era of PC Gaming. Coming 06.16.15."
    Source: amdradeon - twitter

So, once more: put your statements and sources into proper context. Even HH has nothing solid to work from in this case. AMD keeps everyone in the dark.
     
  3. Fender178

    Fender178 Ancient Guru

    Messages:
    4,196
    Likes Received:
    213
    GPU:
    GTX 1070 | GTX 1060
Another thing AMD can do to help themselves is get the custom-cooled 3xx-series cards out ASAP, and not wait like they did with the 290/290X, considering the new Fiji card might not live up to the hype it is getting.
     
  4. S_IV

    S_IV Member Guru

    Messages:
    161
    Likes Received:
    1
    GPU:
    GTX Titan X (Maxwell)
I just hope it won't turn into another Bulldozer; I still remember the rumors about Bulldozer versus the final product. We will see.
     

  5. Denial

    Denial Ancient Guru

    Messages:
    14,177
    Likes Received:
    4,066
    GPU:
    EVGA RTX 3080
    The problem is that Fiji is already disappointing.

AMD doesn't need a card that competes with Nvidia, they need a card that one-ups Nvidia's offerings. Fiji was expected to do that with HBM. There were rumors of it being 20% faster than a Titan X back in January and everyone was getting excited. Now there are rumors that it's slower than a 980 Ti. With only 4GB of HBM it's even less impressive. Even if that's fine for the majority of gamers, there are going to be people who look at that and say no way. You're saying it's so much faster, but is that even necessary? There are already some titles that exceed 4GB of memory, and even in ones that don't, are we really that strapped for memory bandwidth, to the point where doubling it is going to make that big of a difference?

Idk, I don't know where AMD goes from here. They obviously don't have the R&D budget to compete with Nvidia anymore. I was hoping that by taking a risk with HBM they'd be able to top Nvidia, but the card is coming way too late. Nvidia will most likely have 16nmFF+/HBM2/Pascal cards out by March, so anything AMD stands to gain here will just be lost then. And honestly, if the performance rumors are true, what does AMD stand to gain anyway? Most people will probably just buy the 980 Ti, which is already out. The Fiji card is rumored to have production issues due to the HBM regardless, so supply is going to be short for a while.

It's kinda sad, because AMD is paving the way for its own destruction. They're basically doing all the legwork for HBM, and Nvidia is going to be the one reaping the benefits with v2.

Yeah ok, the game companies that optimize so well for PC in the first place are now going to optimize their games for an extremely specific memory bandwidth scenario? That affects what, the ten people who run 4K? Please.

There are going to be maybe one or two titles even developed with HBM in mind, probably all from DICE, since Johan has a hard-on for AMD. Aside from that, even if a title starts development now, it won't be out till mid 2016, and Pascal is slated for Q1/Q2.

HBM will most likely give some 4K benefits, but AMD doesn't need to win some games. They need to win in every game and scenario they possibly can.
     
    Last edited: Jun 3, 2015
  6. Mineria

    Mineria Ancient Guru

    Messages:
    5,541
    Likes Received:
    701
    GPU:
    Asus RTX 3080 Ti
Not denying that HBM is fast, but at the same time 4GB most probably requires some good compression. The question is whether the GPU can handle that at 4K and Ultra settings while providing 60+ fps, since the workload has to be moved around to compensate when the texture load etc. exceeds the onboard memory.
A single card that can handle that kind of rendering is more than welcome; I highly doubt we'll see any such card in the near future though.
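To put the 4GB-at-4K question in perspective, here is some rough framebuffer arithmetic; the figures are generic illustrative assumptions, not numbers from this thread:

```python
# Rough framebuffer math for the 4GB-at-4K question (illustrative only).
width, height = 3840, 2160      # 4K resolution
bytes_per_pixel = 4             # 32-bit RGBA color buffer

color_buffer = width * height * bytes_per_pixel
print(color_buffer / 2**20)     # roughly 31.6 MB for a single color buffer

# Even with several render targets plus a depth buffer, the buffers themselves
# are a small slice of 4GB; it's textures and geometry that blow the budget
# at Ultra settings, which is where compression and spill-over matter.
```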
     
    Last edited: Jun 3, 2015
  7. Evildead666

    Evildead666 Maha Guru

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
The benefits of HBM are that it's insanely fast and has lower power consumption than GDDR5.
That's about it.
The underlying GCN cores are probably the same as Tonga, just doubled up and with different memory controllers.
Nvidia and AMD will roll out their next-gen cards at roughly the same time. 16/14nm will come to fruition at about the same time for both of their GPUs; I'm just not sure who's using which process. Likely Nvidia will be at TSMC and AMD at GloFo/Samsung.
This looks to me more like a small run of new R&D tech that they're actually releasing to the public. A sneak preview of what's to come, as it were.
    I would expect Fiji to last about a year, if that.

A Fiji x2 with Win10 RAM stacking will be able to survive longer (2x4=8GB), so I would expect most of the chips to end up in watercooled duallie setups (probably for VR).

edit: And no one is going to optimise games for HBM. HBM may be optimal for certain AA solutions, though.
     
  8. Deathchild

    Deathchild Ancient Guru

    Messages:
    3,970
    Likes Received:
    2
    GPU:
    -
Couldn't we use an adapter? Not sure that would work, but... DP to dual-link DVI.
     
  9. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
16/14nm is not magical. It may make smaller chips, and they may draw 20% less power.
What does that allow? More transistors within the same TDP and die size.

But sadly, price per transistor is not any better there yet. So redoing a 28nm GPU on 16/14nm will make it draw less power, but it will not cost less.

Therefore, unless the architecture improves the way Kepler did into Maxwell, chips will cost more to deliver more.
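The argument above can be sketched with made-up, normalized numbers (all hypothetical): if price per transistor stays flat across the shrink, a bigger transistor budget costs proportionally more.

```python
# Illustrative, normalized figures; the flat cost-per-transistor is the
# assumption being argued in the post above, not a measured fact.
cost_per_transistor = 1.0        # assumed equal on 28nm and 16/14nm
transistors_28nm = 1.0           # normalized 28nm transistor budget
transistors_16nm = 1.5           # extra transistors fit in the same TDP/area

cost_28nm = transistors_28nm * cost_per_transistor
cost_16nm = transistors_16nm * cost_per_transistor

print(cost_16nm / cost_28nm)     # 1.5: the chip delivers more, but costs more too
```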
     
  10. Evildead666

    Evildead666 Maha Guru

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
I see the single card holding its own up to 4K.
At 4K, I would fully expect the Fiji x2 to be the one to beat.
Also, think about the size of the GPU for a minute.
Why so big? With that much memory bandwidth, they probably had to make it that large just to use it all up. It's that big at 28nm, and they seem to need high GPU clocks to make use of it. Obviously not that efficiently atm, hence the watercooling.
I would expect they were hoping to do this on 16/14nm, and they will in the not-so-far-off future.
     

  11. Evildead666

    Evildead666 Maha Guru

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    16/14nm will be mainly very good for the mid-low end, where chips generally can get 2x performance, in the same previous thermal/electrical envelope.

For the power users in the mainstream and high end, it means more transistors, which should translate to more shaders, much more in this case, since the transition from 28nm to 16/14nm is quite huge, especially with respect to previous generational shrinks of GPUs over the years.
The GPUs will still be quite large, just not as large as today.

    With so many more shaders to feed, you need the memory bandwidth.
With GDDR5, we would have had to go beyond 512-bit to 640- or 768-bit.
That's really awkward, and would mean huge, complex PCBs.

That's where HBM comes in.

TDPs are likely to stay at ~300W for the high end, mainly because they can use it (if they want/need to).
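For context on the bus-width argument above: peak bandwidth is just bus width times per-pin data rate. A quick sketch, using commonly cited configurations (the clocks and widths are assumptions, not figures from this thread):

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps_per_pin):
    """Peak memory bandwidth in GB/s: pin count * per-pin rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# 512-bit GDDR5 at 5 Gbps per pin (290X-class card, assumed figures):
print(peak_bandwidth_gbs(512, 5))    # 320.0 GB/s

# 4096-bit HBM1 (4 stacks x 1024-bit) at 1 Gbps per pin (Fiji-class, assumed):
print(peak_bandwidth_gbs(4096, 1))   # 512.0 GB/s

# The same 512 GB/s via GDDR5 would need a 640+-bit bus at high clocks,
# which is the PCB-complexity problem described above.
```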
     
  12. Denial

    Denial Ancient Guru

    Messages:
    14,177
    Likes Received:
    4,066
    GPU:
    EVGA RTX 3080
    Idk, I think that's a pointless waste of AMD's resources if that's the case.

In my mind the problem is that AMD is on the verge of losing more and more market share. And it's not like they haven't been competing; they have, they just haven't been winning anything. The whole 30%-70% AMD-Nvidia market share split didn't happen overnight. Nvidia has been slowly gaining since the R600 days. And everyone can talk about how great all of AMD's altruistic practices with their technology are, but the bottom line is that AMD is the company analysts are predicting to fail by 2020, not Nvidia. So obviously Nvidia is the one being managed correctly.

As for HBM and Fiji: 6-7 months ago there were dozens of posts on here talking about how Fiji/HBM was going to be AMD's return to form, how it would give AMD a one-year advantage over Nvidia and let them regain some of their lost market. And now it's turned into a 'proof of concept' type thing. Even if it does beat the Ti/Titan X at 4K, the vast majority of the market won't care, because they aren't running 4K yet. And even if it does capture the high-end market, that isn't where the money is.

I imagine AMD has spent a great sum of money investing in HBM and making it work, and it seems like it's all for naught in terms of ROI for them.
     
  13. Evildead666

    Evildead666 Maha Guru

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    Who's to say other chips couldn't be put on the interposer as well ?

In the old Xbox chip there was some eDRAM plus logic to offload virtually all of the AA and Z-buffer work, IIRC.

    I wonder if some stuff could be externalised to a small chip, like the UVD/VCE or something.
     
  14. Toss3

    Toss3 Master Guru

    Messages:
    202
    Likes Received:
    17
    GPU:
    WC Inno3D GTX 980 TI
    Not sure if CF/SLI is suitable for VR setups considering the added frame latency. :3eyes:
     
  15. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
You can do that easily. But originally the idea was for the CPU to be made on a different manufacturing process than the GPU, and then to glue them together with an interposer.

Moving UVD/VCE out would be a bad idea: wasted power and added complexity.

And btw, my previous post was meant to explain that one should not expect 16/14nm to deliver more performance at the same price than 28nm does.
Unless there is another architectural improvement, those GPUs will cost more to deliver more.

    And you were right that it will be good for mobile in general.
     

  16. Evildead666

    Evildead666 Maha Guru

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    AMD won all the consoles so far, there's that, and it's long-term.
    You've got to give them that at least.

Analysts are mainly ANALysts, and should probably not exist.
They offer their own point of view on what's going to happen.
It's up to the user's own brain to make a judgement based on that.
I would only trust analysts if I knew almost everything about all of them.
Nvidia is a GPU/graphics company.
AMD is a lot more, and has a lot more on its plate atm.

You're absolutely right that Fiji was supposed to be AMD's return to form, be it architecturally or for the performance crown.
We haven't got anything but "rumours" yet. It could be bull****. If I were AMD, I would be the first to spread word that it was still being polished, and that we were seeing what we could do against the 980 Ti... x2 or x3 the perf? (Joke :))

What's most important is that the performance is there when people can actually get the cards and put their money down for them.
Until then, every rumour is just that. Better to be seen climbing up from dire predictions than failing to live up to the hype; overperforming rather than underperforming.

You're also right that 4K is pointless to optimize for atm.
Only a handful of people have 4K, but quite a few DO have tri-monitor setups and would very much like to be able to turn up the candy, maybe even on a single card. Not sure whether 4GB would be limiting for them, but it's all they've had until now...

AMD has invested a lot in HBM, yes. It will also be making an appearance on the next Opteron APUs and server parts, and on many graphics cards to come, as HBM2, HBM3... I don't know if they would be able to collect royalties or anything, though, or whether helping develop it will benefit them financially.
     
    Last edited: Jun 3, 2015
  17. Mineria

    Mineria Ancient Guru

    Messages:
    5,541
    Likes Received:
    701
    GPU:
    Asus RTX 3080 Ti
x2 would mean dual GPU, which, like SLI and CF, has its ups and downs; it often comes down to whether game developers support it properly.
If they really want to bring a winner, it has to be one GPU that rocks them all at 4K.
I want to buy either a Fiji or a 980 Ti soon to max out at 1440p.
Another GTX 970 might be the cheapest choice, but again, that's SLI, and it builds up more heat on top of that.
Like most people, I prefer the simpler setup that works with every game: a single GPU.
     
  18. Evildead666

    Evildead666 Maha Guru

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    They seem to be touting one-GPU-per-eye rendering.
    One for left, one for right, and syncing them up.
    Should reduce any tearing, and make it a lot easier to balance the load.
     
  19. Denial

    Denial Ancient Guru

    Messages:
    14,177
    Likes Received:
    4,066
    GPU:
    EVGA RTX 3080
    It doesn't do AFR, it renders to each eye separately. Nvidia already does this with VR SLI, I'm sure AMD has something similar.
     
  20. zer0_c0ol

    zer0_c0ol Ancient Guru

    Messages:
    2,976
    Likes Received:
    0
    GPU:
    FuryX cf
http://wccftech.com/amd-hbm-fury-x-fastest-world/

If true, it will be awesome.
     
