New Upcoming ATI/AMD GPUs Thread: Leaks, Hopes & Aftermarket GPUs

Discussion in 'Videocards - AMD Radeon' started by OnnA, Jul 9, 2016.

  1. OnnA

    OnnA Ancient Guru

    Messages:
    17,963
    Likes Received:
    6,827
    GPU:
    TiTan RTX Ampere UV
    AMD Radeon PRO WX 8200

    The WX 8200 is the Pro-series counterpart of the RX Vega 56:
    WX9100 (14nm Vega uArch 4096 SPs)
    WX8200 (14nm Vega uArch 3584 SPs)
    WX7100 (14nm Polaris uArch 2304 SPs)

     
  2. OnnA

    OnnA Ancient Guru

    Messages:
    17,963
    Likes Received:
    6,827
    GPU:
    TiTan RTX Ampere UV
    AMD Introduces Radeon™ Pro WX 8200 at SIGGRAPH 2018: Delivers World’s Best Workstation Graphics Performance for Under $1,000

    — AMD advances the field of VFX with Vancouver Film School collaboration and unveils powerful new workstation technologies for creative professionals, including new plugin support for Radeon ProRender —

    VANCOUVER, British Columbia, Aug. 12, 2018 (GLOBE NEWSWIRE) — SIGGRAPH — AMD (NASDAQ: AMD) today announced a high-performance addition to the Radeon™ Pro WX workstation graphics lineup with the AMD Radeon™ Pro WX 8200 graphics card, delivering the world’s best workstation graphics performance for under $1,000 for real-time visualization, virtual reality (VR) and photorealistic rendering. AMD also unveiled major updates to Radeon™ ProRender and a new alliance with the Vancouver Film School, enabling the next generation of creators to realize their VFX visions through the power of Radeon™ Pro graphics.

    The new turbocharged AMD Radeon™ Pro WX 8200 graphics card allows professionals to effortlessly accelerate design and rendering. It is the ideal graphics card for design and manufacturing, media and entertainment, and architecture, engineering and construction (AEC) workloads at all stages of product development.

    “Professionals can fully unleash their creativity with the ‘Vega’ architecture at the heart of the Radeon™ Pro WX 8200 graphics card,” said Ogi Brkic, General Manager of Radeon Pro, AMD. “This powerful new workstation graphics card empowers creators to improve collaboration among remote teams with VR, create exciting new cinematic experiences and visualize their creations with ease, all at an incredible price point.”

    Based on the advanced “Vega” GPU architecture with the 14nm FinFET process, the Radeon™ Pro WX 8200 graphics card offers the performance required to drive increasingly large and complex models through the entire design visualization pipeline. With planned certifications for many of today’s most popular applications – including Adobe® CC, Dassault Systemes® SOLIDWORKS®, Autodesk® 3ds Max®, Revit®, among others – the Radeon™ Pro WX 8200 graphics card is ideal for workloads such as real-time visualization, physically-based rendering and VR.

    Advanced Feature Set

    The Radeon™ Pro WX 8200 graphics card is equipped with advanced features and technologies geared towards professionals, including:

    • High Bandwidth Cache Controller (HBCC): The Radeon™ Pro WX 8200 graphics card’s state-of-the-art memory system removes the capacity limitations of traditional GPU memory, letting creators and designers work with much larger, more detailed models and assets in real time.
    • Enhanced Pixel Engine: The “Vega” GPU architecture’s enhanced pixel engine lets creators build more complex worlds without worrying about GPU limitations, increasing efficiency by batching related work into the GPU’s local cache to process them simultaneously. New “shade once” technology ensures only pixels visible in the final scene are shaded.
    • Error Correcting Code (ECC) Memory: Helps guarantee the accuracy of computations by correcting any single or double-bit error resulting from naturally occurring background radiation.
    The Radeon™ Pro WX 8200 graphics card also features a dedicated AMD Secure Processor, which carves out a virtual “secure world” in the GPU. IP-sensitive tasks are run on the AMD Secure Processor, protecting the processing and storage of sensitive data and trusted applications. It also secures the integrity and confidentiality of key resources, such as the user interface and service provider assets.

    The Radeon™ Pro WX 8200 graphics card will be available for pre-order at Newegg on August 13, with on-shelf availability expected in early September and an SEP of $999 USD. Radeon Pro WX Series graphics cards come equipped with the Radeon™ Pro Software for Enterprise Driver – according to QA Consultants, the “most stable driver in the industry” – as well as a three-year limited warranty and optional seven-year limited warranty on retail versions. For more information on the latest Radeon™ Pro Software for Enterprise 18.Q3, please visit here.

    Radeon ProRender
    AMD also introduced new plug-ins for Radeon™ ProRender, AMD’s high-performance physically-based rendering engine that enables CAD designers and 3D artists to create renders quickly and easily. Users now have free access to new plug-ins for:

    • PTC Creo: Enables designers and engineers to quickly and easily create incredible rendered visualizations of their products, and is available now in beta.
    • Pixar USD viewport: For developers building a USD Hydra viewport for their application, the new USD plug-in available on GitHub adds path-traced rendering for accurate viewport previews.
    New features and updates have also been added to existing plug-ins, including support for Autodesk® 3ds Max 2019, camera motion blur and many more.

    Supporting the next generation of creators at Vancouver Film School
    AMD also announced a new alliance with The Vancouver Film School (VFS) to open a brand-new tech innovation lab and hub for Vancouver’s professional VFX community. Powered by Radeon™ Pro and Ryzen™ technologies, the AMD Creators Lab will inspire the creative tech community and advance the field of VFX, video game design, and virtual and augmented reality development.

    Built in the heart of the VFS downtown campus and adjacent to the city’s digital production and developer hub, the lab will offer an open working space for students, artists and computer graphics professionals to discover and create using the latest industry-leading technology. The AMD Creators Lab features powerful AMD-based workstations, delivering outstanding performance to shorten load and rendering times, empowering students and professionals to pursue their wildest visions and create without technological restraints.

    Showcasing the future of graphics technologies at SIGGRAPH
    Along with today’s Radeon™ Pro announcements, at SIGGRAPH AMD will also highlight the 2nd gen AMD Ryzen™ Threadripper™ desktop processors, designed for professional content creators, developers, gamers and hardware enthusiasts. AMD will showcase the 2nd gen AMD Ryzen™ Threadripper™ desktop processors alongside a range of advanced technology demonstrations on the SIGGRAPH show floor at Booth #1101, including:

    • AI Rendering: Machine learning with AMD’s ROCm and Radeon™ Pro WX Series GPUs can slash rendering times without sacrificing quality.
    • Real-time, Viewport Raytracing: Next-generation application viewport technology brings real-time ray-tracing quality directly into the editing windows of DCC and CAD applications.
    • Cloud ProRender: AMD Radeon™ ProRender users can expand their rendering capacity and horsepower by rendering in the cloud.
    • PIX on Windows from Microsoft®: PIX is a performance tuning and debugging tool for developers for analyzing DirectX® 12 games on Windows.
    In addition, Blackmagic Design will showcase its new high-performance eGPU at Booth #1417, featuring a built-in AMD Radeon™ Pro 580 graphics card.
    Designed in collaboration with Apple and made for the Apple® MacBook Pro®, Blackmagic eGPU is optimized for professional video and graphics, such as those used in DaVinci Resolve software, 3D gaming and VR packages.

    -> https://videocardz.com/press-release/amd-introduces-radeoon-pro-wx-8200-for-999-usd
     
  3. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    17,564
    Likes Received:
    2,961
    GPU:
    XFX 7900XTX M'310
    Gen 2 HBM2 memory too, via Hynix it seems.

    https://www.reddit.com/r/Amd/comments/96qa4k/amd_announces_radeon_pro_wx_8200_pro_vega_for/

    One of the comments in that thread.
    Not bad, though this is a workstation card rather than a desktop card, and the GPU is still on 14nm, so it's probably not too different from current Vega GPUs. Still, it's a nice little improvement. :)
     
  4. OnnA

    OnnA Ancient Guru

    Messages:
    17,963
    Likes Received:
    6,827
    GPU:
    TiTan RTX Ampere UV
    Last edited: Aug 13, 2018

  5. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    17,564
    Likes Received:
    2,961
    GPU:
    XFX 7900XTX M'310
    Unfortunately, while the initial batch of Sapphire Vega 56 Pulse GPUs used Samsung memory, later shipments have been a mix of Hynix and Samsung, now mostly Hynix, with Samsung prioritized for the Vega 64s as I recall.

    BIOS flashing isn't recommended for the Hynix cards, since the BIOS you flash from (for the Pulse that's usually the Vega 64 Nitro+ non-LE version) carries data for the Samsung chips, and these don't match properly. So there's no easy way to increase voltage, and even with higher voltage the maximum is going to be around 900-950 MHz.

    Samsung HBM2 memory, by contrast, tops out around 1100-1150 MHz (it varies a bit), and the GPU scales up to at least 1100 MHz as long as the core clocks keep up, around 1600-1650 MHz or so, from my reading and what I've been looking into so far on how Vega handles clock speeds and how well the core and memory scale at higher speeds. :)

    So I'm a bit capped in overall performance gains, but in turn I can lower the voltage even further and keep clock speeds around 1450-1500 MHz at 1050 mV or possibly even lower, for a nice reduction mainly in heat, though the reduced power draw isn't bad either.
    The performance difference in benchmarks is up to around 3-4% give or take depending on the test, but most games will see 2% or less, so cutting the default voltage and power draw down to 200 W or even lower is a pretty good win.


    Vega 64, either the Nitro or the stock model with water cooling, is of course going to excel, and the binned GPUs for the water-cooled model in particular also scale nicely at lowered voltage while retaining the higher core speeds without throttling, allowing for the best of both; they're going to be around 7-10% faster or so I believe, depending on the game. Above 1100 MHz on HBM2 the gains start dropping off a bit, though, so if the core clocks are similar at around 1600 MHz, or possibly lower on an air-cooled GPU with a blower fan, it might not benefit as much from higher-clocked RAM. It will depend on the workload, though: memory could still be really important for a few programs, and a couple of games could also benefit even if core clocks aren't as high as they could be.

    Still have more to learn about how all this works, though. Overall GPU wattage should be around 150-180 W, for 200-220 W total draw I think, though depending on boost behaviour, clock speeds and GPU load this will fluctuate a bit. It's still better than stock, with a minimal performance impact, and easier to cool effectively. Hitting around 60-70 degrees Celsius now depending on the game, which isn't bad; could be better, but still good results for the GPU temperature. :)

    I do keep GPU voltage a bit higher than probably needed just in case it spikes, to keep things stable if anything happens. Memory is mainly kept at 850 MHz for now, which is still a good gain but doesn't stress it as much as going for the cap at around 900 MHz on this BIOS and this particular GPU, while still allowing a small boost above the default 410 GB/s bandwidth (480 GB/s for Vega 64).
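
    For anyone who wants to sanity-check those bandwidth figures, here's a minimal sketch of the arithmetic, assuming Vega's 2048-bit HBM2 bus and double-data-rate signalling (the results line up with the stock ~410 GB/s figure above):

    ```python
    # Minimal sketch: peak HBM2 bandwidth on Vega, assuming a 2048-bit bus
    # (two 1024-bit stacks) and double data rate (2 transfers per clock).

    BUS_WIDTH_BITS = 2048    # Vega 56/64: two HBM2 stacks at 1024 bits each
    TRANSFERS_PER_CLOCK = 2  # DDR signalling

    def hbm2_bandwidth_gbs(mem_clock_mhz: float) -> float:
        """Peak bandwidth in GB/s for a given HBM2 memory clock in MHz."""
        bytes_per_clock = BUS_WIDTH_BITS / 8 * TRANSFERS_PER_CLOCK  # 512 bytes
        return mem_clock_mhz * 1e6 * bytes_per_clock / 1e9

    for clock in (800, 850, 945, 1100):
        print(f"{clock} MHz -> {hbm2_bandwidth_gbs(clock):.0f} GB/s")
    # 800 -> 410 (stock Vega 56), 850 -> 435, 945 -> 484 (Vega 64), 1100 -> 563
    ```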

    The performance gain from memory alone, going from 800 to 1050 MHz, seems to land around 6% or so, though that was tested on a Vega 64 GPU (1650 MHz core clock). It means memory isn't the de facto bottleneck on this GPU, but it's still a nice boost if it can be maintained.

    Meaning memory is unlikely to be the performance stopper, since the resulting gains weren't that high relative to how large the memory overclock was. At a 2% gain for every 10% of memory overclock the returns aren't the best, but it's also free performance, since the voltage remains the same, capped at 1.35 V, as long as it stays stable without throttling due to thermals or timings; see the sketch below.
    (Which is also where the water-cooled Vega models can really help, ensuring all components remain sufficiently cooled to avoid anything like that.)
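
    As a rough sanity check on that rule of thumb, a tiny sketch, purely illustrative and assuming the ~2% per 10% scaling holds linearly over this range:

    ```python
    # Minimal sketch: estimated performance gain from a memory overclock,
    # assuming the rough "2% per 10% memory clock" scaling mentioned above.

    def estimated_gain(base_mhz: float, oc_mhz: float,
                       gain_per_10pct: float = 0.02) -> float:
        """Fractional performance gain under linear scaling."""
        oc_fraction = (oc_mhz - base_mhz) / base_mhz
        return oc_fraction / 0.10 * gain_per_10pct

    # 800 -> 1050 MHz is a ~31% memory OC; at 2% per 10% that's ~6%,
    # matching the ~6% measured on a Vega 64 at a 1650 MHz core clock.
    print(f"{estimated_gain(800, 1050) * 100:.1f}% estimated gain")
    ```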



    EDIT: "Front loaded" I believe is what it was called: the shader cores or clusters not being fed properly, or not being fully utilized, depending on the workload. And while improved, the card still has some geometry performance issues, though it's not as bad with ROPs and texture units and such as was assumed when the card launched. :)


    And little by little the drivers have kept bringing performance increases, 1-2% here and there, which adds up a bit over time. Unless the game was totally borked, don't expect too much from the driver alone, but together with keeping the GPU clocks high it's a nice overall boost in performance.


    EDIT: I still have more to understand and learn about how the GPU works, what its limits are and where it runs into problems. I doubt I'll understand everything, but it's an interesting card so far and a nice piece of hardware.

    Will be interesting to see what Navi can do from here, if they can get it launched in the first half of 2019 or so, maybe to compete a bit with NVIDIA. But NVIDIA might still be faster with the 1180, or whatever the next series will be called, and that's going to give them a few months as having the undisputed fastest cards on the market.
    (And if they slash Pascal prices a bit, and if there's an excess of GPU stock, that could also limit AMD a bit, with the 1070 and up giving Vega competition.)


    But for now it's all just speculation.
     
    Last edited: Aug 13, 2018
    Dekaohtoura and OnnA like this.
  6. OnnA

    OnnA Ancient Guru

    Messages:
    17,963
    Likes Received:
    6,827
    GPU:
    TiTan RTX Ampere UV
    You also need to remember that with UV you can lower the voltage spikes ;)
    Also, when undervolted (it needs to be done), the Vega 64 can end up as the most power-efficient GPU in the world (with 1440p Ultra gaming in mind).
    Take Forza Horizon 3 as an example: I get ~64-69 W total together with the CPU, lol (played #Forzathon yesterday, tested my new RAM settings with 1T + GD CL14-15-16-15 35-52).
    All Ultra + MSAA x4 & FXAA at a constant 70 FPS (not a single dip :D) @ 1717/1150 [1.087 V & Infinity Fabric at 0.962 V], though HWiNFO shows ~1550 MHz top.

    But when you look at tests on the web it can be somewhat confusing
    -> you see 250-340 W total!!! lol, which is simply not true at all :D
    But then again, we are PC enthusiasts, not console players (I mean plug & play),
    so we can and will tweak our hardware :rolleyes: and with the Vega uArch it is strongly recommended to give it a proper UV; see the sketch below for why.
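
    The reason UV pays off so well is that dynamic power scales roughly with frequency times voltage squared. A minimal sketch of that relation; the 1630 MHz / 1.20 V stock point is just an illustrative assumption, actual stock clocks and voltages vary per card:

    ```python
    # Minimal sketch: relative dynamic GPU power via the classic P ~ f * V^2
    # approximation. The 1630 MHz / 1.20 V "stock" point is an assumption
    # for illustration only; real stock clocks and voltages vary per card.

    def relative_power(freq_mhz: float, volts: float,
                       ref_freq_mhz: float = 1630.0,
                       ref_volts: float = 1.20) -> float:
        """Dynamic power relative to a reference operating point."""
        return (freq_mhz / ref_freq_mhz) * (volts / ref_volts) ** 2

    # The 1717 MHz @ 1.087 V point above vs the assumed stock point:
    print(f"{relative_power(1717, 1.087):.2f}x stock dynamic power")
    # ~0.86x despite the higher clock; the squared voltage term dominates.
    ```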

    Edit.
    Also played some Diablo III today (Season 14), all maxed, AA + ReShade 3.4.
    HWiNFO shows 1220 MHz! -> Yes, that's all it takes to enjoy D3 @ 1440p Ultra (don't ask me about the wattage 'cause it's too low to mention ;) )

    Also, when you look at FreeSync monitor sales at Caseking or Mindfactory, you'll see that many people have a Vega 64/56 and are happy with it; along with FreeSync it makes the perfect gaming combo of the decade.
     
    Last edited: Aug 13, 2018
  7. OnnA

    OnnA Ancient Guru

    Messages:
    17,963
    Likes Received:
    6,827
    GPU:
    TiTan RTX Ampere UV
    Hmm, I'm just wondering how many watts you need when 8 GB or 16 GB of GDDR6 is used.
    According to some Samsung specs it's around 4 W per GB, up to even 10 W (for the faster parts).
    So if lower-end Navi comes with GDDR6 (I hope it will get HBM), it will consume an additional 48 W, or 96 W for a 16 GB model (taking an average of 6 W per GB); see the sketch below.
    IMO HBM2 is the only future....

    PS.
    You need to remember that GDDR6 chips on GPUs are high-voltage parts, ready for OC ;)
    So those numbers can get even higher.

    8 GB of HBM2 takes ONLY ~18-24 W when OCed to a brutal 1200 MHz, ~630 GB/s :eek:
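
    A quick sketch of that back-of-the-envelope estimate, using the per-GB figures quoted above (the 4-10 W/GB Samsung range and the HBM2 numbers are as cited here, not measured values):

    ```python
    # Minimal sketch: rough GDDR6 memory power estimate from per-GB figures.
    # The 4-10 W/GB range (6 W/GB average) and the ~18-24 W for 8 GB of HBM2
    # are the figures quoted above, not measurements.

    def gddr6_power_w(capacity_gb: int, watts_per_gb: float = 6.0) -> float:
        """Estimated GDDR6 power draw at an average watts-per-GB figure."""
        return capacity_gb * watts_per_gb

    print(f"8 GB GDDR6:  ~{gddr6_power_w(8):.0f} W")   # ~48 W
    print(f"16 GB GDDR6: ~{gddr6_power_w(16):.0f} W")  # ~96 W
    print("8 GB HBM2:   ~18-24 W even OCed to ~1200 MHz")
    ```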
     
  8. OnnA

    OnnA Ancient Guru

    Messages:
    17,963
    Likes Received:
    6,827
    GPU:
    TiTan RTX Ampere UV
  9. OnnA

    OnnA Ancient Guru

    Messages:
    17,963
    Likes Received:
    6,827
    GPU:
    TiTan RTX Ampere UV
    Side note: what is Radeon Rays?
    Yes, ATI was first to innovate this tech.

    AMD had this technology on its Radeon Pro cards before every other company, and it was known as "AMD FireRays".
    It's now called "Radeon Rays".
    The $999 Radeon Pro WX 8200 has this technology, and it's the most affordable Vega Pro on the market.

    Radeon™ Rays (formerly AMD FireRays) is high-efficiency, high-performance GPU-accelerated ray tracing software. By tracing the paths of light rays moving through a movie or game scene, Radeon Rays simulates the effects of light rays reflecting and refracting through an environment and interacting with virtual objects, for stunningly photorealistic 3D images.

    -> https://pro.radeon.com/en/software/radeon-rays/
     
  10. OnnA

    OnnA Ancient Guru

    Messages:
    17,963
    Likes Received:
    6,827
    GPU:
    TiTan RTX Ampere UV

  11. OnnA

    OnnA Ancient Guru

    Messages:
    17,963
    Likes Received:
    6,827
    GPU:
    TiTan RTX Ampere UV
    Editorial about ray-traced gaming :D

    'All' my art is made in 3ds Max + Photoshop (with a grain of Illustrator & some Quark prints for friends),
    so I know ray tracing first hand....

    1. How many traced paths will it introduce?
    a. 1 path is meh (we already have pre-baked 1-path lighting in games) [see GstRender.PostProcessQuality 3 in BF1 and look at the water pools]
    b. 2 paths? Nah, it will be better but it's (visually) not enough IMO
    c. So we have 3 paths, yay! Yes, 3 paths is OK for all games [as an example: in a scene containing a guy and 2 mirrors, you can see the guy in the 2nd mirror; his silhouette is hidden and only visible through the mirror reflection]

    2. What GPU will be needed for such a task? (I mean real ray tracing, not some imitation gimmick)
    a. At least a Vega 64 with async OCL ray-trace calculation (we can get ~60 FPS at 1440p with perfect optimisation in DX12 or VLK -> yup :cool:)
    b. DX11 needs to go when the big boys play with ray tracing.

    Yes, we have radiosity, plus refractions + reflections, which add complication to the scene (almost on a geometric scale).
    IMO ray-traced gaming is not here yet for sure. We need to look at DX12.1 & VLK to see what kind of ray tracing it will be (99% sure it is not what we have in 3ds Max or similar 3D software).

    --
    UPD.

    -> https://raytracey.blogspot.com/2010/04/comparing-path-tracing-image-quality.html

    Real ray tracing:
    4K = 8,294,400 pixels.
    If we target a minimum of 30 frames per second, that's
    248,832,000 pixels per second to illuminate, so a 6 Gigaray/s budget = ~24 samples/pixel; see the sketch below.
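
    The arithmetic behind that estimate as a small sketch; the 6 Gigarays/s budget is the hypothetical figure assumed above:

    ```python
    # Minimal sketch: samples-per-pixel budget for path tracing at 4K / 30 fps,
    # given a hypothetical 6 Gigarays/s ray budget (the figure assumed above).

    PIXELS_4K = 3840 * 2160   # 8,294,400 pixels
    FPS = 30
    RAYS_PER_SECOND = 6e9     # assumed 6 Gigarays/s budget

    pixels_per_second = PIXELS_4K * FPS               # 248,832,000
    samples_per_pixel = RAYS_PER_SECOND / pixels_per_second

    print(f"{pixels_per_second:,} pixels/s -> {samples_per_pixel:.0f} samples/pixel")
    # ~24 samples/pixel, far below the hundreds-to-thousands used in film rendering.
    ```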

    --
    Wikipedia:
    “CGI for films is usually rendered at about 1.4–6 megapixels. Toy Story, for example, was rendered at 1536 × 922. The time to render one frame is typically around 2–3 hours, with ten times that for the most complex scenes. This time hasn't changed much in the last decade, as image quality progressed at the same rate as improvements in hardware.”
     
    Last edited: Aug 21, 2018
  12. OnnA

    OnnA Ancient Guru

    Messages:
    17,963
    Likes Received:
    6,827
    GPU:
    TiTan RTX Ampere UV
    Last edited: Aug 21, 2018
  13. Truder

    Truder Ancient Guru

    Messages:
    2,400
    Likes Received:
    1,430
    GPU:
    RX 6700XT Nitro+
    I think the ray tracing we'll be seeing implemented soon is a "simplified" solution: a low number of ray samples cast at half precision, then a noise filter to clean up the image; see the sketch below.
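
    A toy illustration of that idea, as a minimal sketch: estimate each pixel from a handful of noisy samples, then run a cheap smoothing filter over the result. Real-time denoisers (including NVIDIA's AI-trained ones) are far more sophisticated; this only shows the principle:

    ```python
    # Toy sketch of "few samples + denoise": estimate each pixel's brightness
    # with a handful of noisy samples, then smooth with a 3x3 box filter.
    # Real denoisers are far more sophisticated; this shows the principle.
    import numpy as np

    rng = np.random.default_rng(0)
    H, W, SPP = 64, 64, 4  # tiny image, 4 samples per pixel

    # Ground-truth "scene": a smooth horizontal brightness gradient.
    truth = np.linspace(0.2, 0.8, W)[None, :].repeat(H, axis=0)

    # Low-sample Monte Carlo estimate: truth plus per-sample noise, averaged.
    samples = truth[..., None] + rng.normal(0.0, 0.3, size=(H, W, SPP))
    noisy = samples.mean(axis=-1)

    # Crude denoiser: 3x3 box filter via edge padding and neighbour averaging.
    padded = np.pad(noisy, 1, mode="edge")
    denoised = np.mean(
        [padded[dy:dy + H, dx:dx + W] for dy in range(3) for dx in range(3)],
        axis=0,
    )

    for name, img in (("noisy", noisy), ("denoised", denoised)):
        print(f"{name}: mean abs error = {np.abs(img - truth).mean():.3f}")
    # The filtered image lands much closer to the truth: noise traded for blur.
    ```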

    With this being the case, I can see GCN already being capable of it using asynchronous compute, and certainly more so with Vega having double-rate FP16. Also remember, GCN supports TensorFlow, so similar calculations can be performed on AMD hardware (how effective it is I don't know, but I've seen reports that the RX 480 has the same output as a 980 Ti in tensor workflows).

    The big news from NVIDIA about their RTX hardware is mostly the software package they've gained from using AI development to refine denoising software, which in their terms will be a hardware + GameWorks software package that will obfuscate and segment the market, as we've seen in the past with PhysX and other GameWorks libraries.

    Personally, I reckon NVIDIA is making a power play with Turing to try and seize control of ray-traced augmented rendering standards.
     
  14. OnnA

    OnnA Ancient Guru

    Messages:
    17,963
    Likes Received:
    6,827
    GPU:
    TiTan RTX Ampere UV
    I have one word :D
    2005's F.E.A.R. (I had an ATI X1950 Pro 512MB back then)
    The volumetric lighting & shadows + caustics were phenomenal.

    As you can see, we had good tech.
    Now load up a game from today and show me a similar effect? :p
    Where is it? Why isn't it here any more? Questions...

     
    Last edited: Aug 21, 2018
  15. z8373767

    z8373767 Master Guru

    Messages:
    491
    Likes Received:
    253
    GPU:
    6900XT/8650G+7970M
    These effects are quite common in modern titles:
    BF1, RE7, The Evil Within, Hunt: Showdown, The Witcher 3 (remember the Magic Lantern in Towerful of Mice?)
    A better question is: "where are F.E.A.R.'s physics and AI?"
    I mean, when you fought an enemy it was very effective: paper flying off desks, hot steam from bullet holes in walls (Max Payne 3 brought it to the next level), and enemies that "think" on the battlefield.
    I had a silly situation once, in F.E.A.R. of course. I hid in a small room with one exit, so the enemies rushed the entrance and threw a grenade at me. I tried to escape and they just killed me xD

    We have 6-12 threads in our CPUs. Devs, use them for physics and AI, not Denuvo or VMProtect :/
     
    OnnA likes this.

  16. OnnA

    OnnA Ancient Guru

    Messages:
    17,963
    Likes Received:
    6,827
    GPU:
    TiTan RTX Ampere UV
    -> at 4:09 you'll see the 2x Vega Pro DaVinci edit :D

     
  17. metagamer

    metagamer Ancient Guru

    Messages:
    2,596
    Likes Received:
    1,165
    GPU:
    Asus Dual 4070 OC
    You're joking, right? BTW, "2005 F.E.A.R." is not one word

    Also, about shadows like that: are you telling me you haven't seen them in any other game?
     
  18. OnnA

    OnnA Ancient Guru

    Messages:
    17,963
    Likes Received:
    6,827
    GPU:
    TiTan RTX Ampere UV
    What I meant is that there's been no real progress in gaming for a long time,
    on both the GFX & AI sides of the story.

    It's been 5-6 years now and games still look similar to Crysis 3 ;)
    You catch my drift :oops:
     
    Last edited: Aug 27, 2018
  19. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    The sad answer is that F.E.A.R. used Havok, which proved to be very successful. Intel, seeing that, bought them, but then canned it. In 2015 M$ got it instead. Not sure how much they are using it or what remains of it, but they are actively developing it.
    By today's standards, F.E.A.R. only lacks texture resolution to look reasonably good. And story-wise... that is the way FPS games are meant to be.
     
  20. densou

    densou Member Guru

    Messages:
    160
    Likes Received:
    18
    GPU:
    XLR8 4060ti 16GB
    C'mon guys, F.E.A.R. had something WAY more important (and nearly forgotten by devs nowadays, perhaps because of mean/evil publishers' control over them :p): terrifying [by ALL means] A-U-D-I-O
     
    z8373767 likes this.
