
New Upcoming ATI/AMD GPU's Thread: Leaks, Hopes & Aftermarket GPU's

Discussion in 'Videocards - AMD Radeon' started by OnnA, Jul 9, 2016.

Thread Status:
Not open for further replies.
  1. OnnA

    OnnA Ancient Guru

    Messages:
    9,482
    Likes Received:
    1,938
    GPU:
    Vega 64 XTX LiQuiD
    AMD's new semi-custom chip: Ryzen/Vega tech for PC/consoles

    We already know that AMD is heavily committed to the semi-custom market, with Sony working hand-in-hand with RTG on the development of Navi for its next-gen PlayStation 5 console, but now it's China's turn in the spotlight with a Ryzen/Vega SoC in development.
    A new semi-custom system-on-chip (SoC) was announced today that packs a quad-core Ryzen CPU and a Vega GPU with 24 compute units, all paired with 8GB of GDDR5 memory.

    AMD is working with Zhongshan Subor on the new chip, which will power a "new gaming PC and upcoming console" in China.
    Inside, the new SoC packs a 4C/8T Ryzen CPU at 3GHz and 24 Vega CUs at 1.3GHz, joined by the 8GB of GDDR5.

    We should expect Radeon RX 580 level performance, which is fine for 1080p and 1440p gaming at 60FPS depending on the game and detail levels used.
    The more interesting side here is that the Vega GPU technology is using GDDR5 memory and not HBM2, which is what AMD originally specified when it announced the Vega GPU architecture.
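    As a rough sanity check on that claim, here's the peak-throughput arithmetic as a minimal Python sketch, assuming the usual GCN/Vega layout of 64 stream processors per CU and 2 FLOPs per SP per clock (FMA). Raw TFLOPS don't translate directly into game performance across architectures, so treat this as ballpark only.

    ```python
    # Peak FP32 throughput: CUs x 64 SPs x 2 FLOPs (FMA) x clock.
    # Assumes the standard GCN/Vega configuration of 64 SPs per CU.
    def fp32_tflops(cus: int, clock_ghz: float, sps_per_cu: int = 64) -> float:
        return cus * sps_per_cu * 2 * clock_ghz / 1000

    print(fp32_tflops(24, 1.30))  # Subor SoC: ~4.0 TFLOPS
    print(fp32_tflops(36, 1.34))  # RX 580 (36 CUs, ~1340MHz boost): ~6.2 TFLOPS
    ```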

    AMD explains: "Designing a semi-custom gaming SoC for Subor represents an exciting opportunity for AMD to make our high-performance technologies even more accessible to gamers in China.
    The new SoC is also a great example of our semi-custom strategy, where we take our differentiated IP and tailor it to meet the specific needs of a customer to create a product only AMD can deliver".

    At the time, AMD said that Vega was designed for use with HBM2, and now we're seeing full SoC designs powered by GDDR5. This raises the question of why we aren't seeing GDDR5-based Radeon RX Vega graphics cards, but we know the answer to that.

    When AMD hit rough roads with Radeon RX Vega, they pretty much killed it off, carving up the Vega CUs and giving them to Intel for their Hades Canyon NUC device.
    But Intel's faster NUC, powered by Radeon RX Vega graphics, was still using HBM2 technology... not GDDR5.
    AMD CEO Lisa Su pushed the company towards a full semi-custom powerhouse, and now we're seeing the continued fruits of that labor.
    Vega with GDDR5 is another interesting turn in the eventful life of RTG and AMD as a whole.

     
  2. OnnA

    OnnA Ancient Guru

    Messages:
    9,482
    Likes Received:
    1,938
    GPU:
    Vega 64 XTX LiQuiD
    BenQ EX3203R 32-inch FreeSync 2 Curved HDR monitor releases this week

    BenQ has confirmed that their new EX3203R FreeSync 2 display will release on August 7th. It offers a large 32-inch screen at a 1440p resolution, a great balance between size and pixel density, together with a gamer-oriented 144Hz maximum refresh rate.

    As a FreeSync 2 monitor, the EX3203R offers support for HDR and FreeSync with Low Framerate Compensation, giving it an effective variable refresh rate window of 30-144Hz. HDR-wise, the EX3203R has a VA display panel with a peak brightness of 400 cd/m^2 while offering 90% coverage of the DCI-P3 colour space.
    BenQ's specifications make no specific mention of Local Dimming, giving this display limited HDR support, though the maximum brightness is over 33% higher than the 300 cd/m^2 offered by most non-HDR screens.
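    For readers unfamiliar with how Low Framerate Compensation extends that window, here's a minimal sketch of the idea, assuming a hypothetical 48-144Hz native range (illustrative only; the real driver logic is more involved):

    ```python
    # LFC sketch: below the panel's native minimum, each frame is repeated
    # an integer number of times so the refresh rate stays in range.
    PANEL_MIN_HZ = 48   # assumed native window for illustration
    PANEL_MAX_HZ = 144

    def lfc_refresh(fps: float) -> float:
        if fps >= PANEL_MIN_HZ:
            return min(fps, PANEL_MAX_HZ)   # native VRR: refresh tracks fps
        multiplier = 2
        while fps * multiplier < PANEL_MIN_HZ:
            multiplier += 1                 # repeat frames until in range
        return fps * multiplier

    for fps in (25, 40, 60, 160):
        print(fps, "->", lfc_refresh(fps), "Hz")
    # 25 -> 50 Hz (each frame shown twice), 40 -> 80 Hz, 60 -> 60 Hz, 160 -> 144 Hz
    ```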

    While the display is certified as a DisplayHDR 400 monitor, AMD has confirmed that their FreeSync 2 specification has a "higher bar" for colour gamut, maximum screen brightness and contrast ratios, with many of FreeSync 2's HDR requirements being similar to DisplayHDR 600.
    AMD's FreeSync 2 specifications also set strict limits on display latencies and other factors that are not covered by DisplayHDR.

    BenQ's EX3203R supports VESA-compatible mounting via a "VESA wall mount transfer kit", while the display's stock stand offers tilt and height adjustment.
    The display itself has an 1800R curvature, and the screen is a VA-type panel. Input-wise, this monitor has a single DisplayPort 1.2 port, dual HDMI 1.4 inputs, a single USB 3.0 upstream port and dual USB 3.0 downstream ports.
    The EX3203R also supports USB Type-C as a display input, which will be a useful feature for Mac users.

     
  3. OnnA

    OnnA Ancient Guru

    Messages:
    9,482
    Likes Received:
    1,938
    GPU:
    Vega 64 XTX LiQuiD
  4. OnnA

    OnnA Ancient Guru

    Messages:
    9,482
    Likes Received:
    1,938
    GPU:
    Vega 64 XTX LiQuiD
    Pixio Launches New Monitor With Pre-Order Special $419 Direct Sales

    The PX329 was originally supposed to carry an MSRP of $549.99 but has been dropped to $449.99, and from now until August 24th, direct purchases from their site are $419.99.
    Other sites like Amazon have it listed for $499, and Newegg for $449.99.
    We previously looked at the PX277 (which is now going for $399), and our reviewer Keith called that one a great value from a new brand, commenting on the high quality of the panels used.

    The monitor features two HDMI 2.0 ports, two DisplayPort 1.2 ports and an audio out. The FreeSync range is 48-144Hz* and can be overclocked via CRU.

    -> https://www.pixiogaming.com/px329

    Source -> https://wccftech.com/pixio-px329-pre-orders-live-32-inch-1440p-monitor-165hz-va-panel-for-419/

    * My note:
    Maybe we can go to 30/35-144/165Hz (IMO it's possible, because it takes a quality LED panel for such a refresh range)
     
    Last edited: Aug 8, 2018

  5. OnnA

    OnnA Ancient Guru

    Messages:
    9,482
    Likes Received:
    1,938
    GPU:
    Vega 64 XTX LiQuiD
    AMD Radeon PRO WX 8200

    This is the Pro-series equivalent of the RX Vega 56. For context:
    WX9100 (14nm Vega uArch 4096 SPs)
    WX8200 (14nm Vega uArch 3584 SPs)
    WX7100 (14nm Polaris uArch 2304 SPs)

     
  6. OnnA

    OnnA Ancient Guru

    Messages:
    9,482
    Likes Received:
    1,938
    GPU:
    Vega 64 XTX LiQuiD
    AMD Introduces Radeon™ Pro WX 8200 at SIGGRAPH 2018: Delivers World’s Best Workstation Graphics Performance for Under $1,000

    — AMD advances the field of VFX with Vancouver Film School collaboration and unveils powerful new workstation technologies for creative professionals, including new plugin support for Radeon ProRender —

    VANCOUVER, British Columbia, Aug. 12, 2018 (GLOBE NEWSWIRE) — SIGGRAPH — AMD (NASDAQ: AMD) today announced a high-performance addition to the Radeon™ Pro WX workstation graphics lineup with the AMD Radeon™ Pro WX 8200 graphics card, delivering the world’s best workstation graphics performance for under $1,000 for real-time visualization, virtual reality (VR) and photorealistic rendering. AMD also unveiled major updates to Radeon™ ProRender and a new alliance with the Vancouver Film School, enabling the next generation of creators to realize their VFX visions through the power of Radeon™ Pro graphics.

    The new turbocharged AMD Radeon™ Pro WX 8200 graphics card allows professionals to effortlessly accelerate design and rendering. It is the ideal graphics card for design and manufacturing, media and entertainment, and architecture, engineering and construction (AEC) workloads at all stages of product development.

    “Professionals can fully unleash their creativity with the ‘Vega’ architecture at the heart of the Radeon™ Pro WX 8200 graphics card,” said Ogi Brkic, General Manager of Radeon Pro, AMD. “This powerful new workstation graphics card empowers creators to improve collaboration among remote teams with VR, create exciting new cinematic experiences and visualize their creations with ease, all at an incredible price point.”

    Based on the advanced “Vega” GPU architecture with the 14nm FinFET process, the Radeon™ Pro WX 8200 graphics card offers the performance required to drive increasingly large and complex models through the entire design visualization pipeline. With planned certifications for many of today’s most popular applications – including Adobe® CC, Dassault Systemes® SOLIDWORKS®, Autodesk® 3ds Max®, Revit®, among others – the Radeon™ Pro WX 8200 graphics card is ideal for workloads such as real-time visualization, physically-based rendering and VR.

    Advanced Feature Set

    The Radeon™ Pro WX 8200 graphics card is equipped with advanced features and technologies geared towards professionals, including:

    • High Bandwidth Cache Controller (HBCC): The Radeon™ Pro WX 8200 graphics card’s state-of-the-art memory system removes the capacity limitations of traditional GPU memory, letting creators and designers work with much larger, more detailed models and assets in real time.
    • Enhanced Pixel Engine: The “Vega” GPU architecture’s enhanced pixel engine lets creators build more complex worlds without worrying about GPU limitations, increasing efficiency by batching related work into the GPU’s local cache to process them simultaneously. New “shade once” technology ensures only pixels visible in the final scene are shaded.
    • Error Correcting Code (ECC) Memory: Helps guarantee the accuracy of computations by correcting any single or double-bit error resulting from naturally occurring background radiation.
    The Radeon™ Pro WX 8200 graphics card also features a dedicated AMD Secure Processor, which carves out a virtual “secure world” in the GPU. IP-sensitive tasks are run on the AMD Secure Processor, protecting the processing and storage of sensitive data and trusted applications. It also secures the integrity and confidentiality of key resources, such as the user interface and service provider assets.

    The Radeon™ Pro WX 8200 graphics card will be available for pre-order at Newegg on August 13, with on-shelf availability expected in early September and an SEP of $999 USD. Radeon Pro WX Series graphics cards come equipped with the Radeon™ Pro Software for Enterprise Driver – according to QA Consultants, the “most stable driver in the industry” – as well as a three-year limited warranty and optional seven-year limited warranty on retail versions. For more information on the latest Radeon Pro™ Software for Enterprise 18.Q3, please visit here.

    Radeon ProRender
    AMD also introduced new plug-ins for Radeon™ ProRender, AMD’s high-performance physically-based rendering engine that enables CAD designers and 3D artists to create renders quickly and easily. Users now have free access to new plug-ins for:

    • PTC Creo: Enables designers and engineers to quickly and easily create incredibly rendered visualizations of their products and is available now in beta.
    • Pixar USD viewport: For developers building a USD Hydra viewport for their application, the new USD plug-in available on GitHub adds path-traced rendering for accurate viewport previews.
    New features and updates have also been added to existing plug-ins, including support for Autodesk® 3ds Max 2019, camera motion blur and many more.

    Supporting the next generation of creators at Vancouver Film School
    AMD also announced a new alliance with The Vancouver Film School (VFS) to open a brand-new tech innovation lab and hub for Vancouver’s professional VFX community. Powered by Radeon™ Pro and Ryzen™ technologies, the AMD Creators Lab will inspire the creative tech community and advance the field of VFX, video game design, and virtual and augmented reality development.

    Built in the heart of the VFS downtown campus and adjacent to the city’s digital production and developer hub, the lab will offer an open working space for students, artists and computer graphics professionals to discover and create using the latest industry-leading technology. The AMD Creators Lab features powerful AMD-based workstations, delivering outstanding performance to shorten load and rendering times, empowering students and professionals to pursue their wildest visions and create without technological restraints.

    Showcasing the future of graphics technologies at SIGGRAPH
    Along with today’s Radeon™ Pro announcements, at SIGGRAPH AMD will also highlight the 2nd gen AMD Ryzen™ Threadripper™ desktop processors, designed for professional content creators, developers, gamers and hardware enthusiasts. AMD will showcase the 2nd gen AMD Ryzen™ Threadripper™ desktop processors alongside a range of advanced technology demonstrations on the SIGGRAPH show floor at Booth #1101, including:

    • AI Rendering: Machine learning with AMD’s ROCm and Radeon™ Pro WX Series GPUs can slash rendering times without sacrificing quality.
    • Real-time, Viewport Raytracing: Next-generation application viewport technology brings real-time ray-tracing quality directly into the editing windows of DCC and CAD applications.
    • Cloud ProRender: AMD Radeon™ ProRender users can expand their rendering capacity and horsepower by rendering in the cloud.
    • PIX on Windows from Microsoft®: PIX is a performance tuning and debugging tool for developers for analyzing DirectX® 12 games on Windows.
    In addition, Blackmagic Design will showcase its new high-performance eGPU at Booth #1417, featuring a built-in AMD Radeon™ Pro 580 graphics card.
    Designed in collaboration with Apple and made for the Apple® MacBook Pro®, Blackmagic eGPU is optimized for professional video and graphics, such as those used in DaVinci Resolve software, 3D gaming and VR packages.

    -> https://videocardz.com/press-release/amd-introduces-radeoon-pro-wx-8200-for-999-usd
     
  7. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    15,630
    Likes Received:
    1,624
    GPU:
    Sapphire 5700XT P.
    Gen.2 HBM2 memory too via Hynix it seems.

    https://www.reddit.com/r/Amd/comments/96qa4k/amd_announces_radeon_pro_wx_8200_pro_vega_for/

    One of the comments in that thread.
    Not bad, though this is a workstation card and not a desktop card, and the GPU is still on 14nm, so it's probably not too different from current Vega GPUs, but it's a nice little improvement. :)
     
  8. OnnA

    OnnA Ancient Guru

    Messages:
    9,482
    Likes Received:
    1,938
    GPU:
    Vega 64 XTX LiQuiD
    Last edited: Aug 13, 2018
  9. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    15,630
    Likes Received:
    1,624
    GPU:
    Sapphire 5700XT P.
    Unfortunately, while the initial batch of Sapphire Vega 56 Pulse GPUs used Samsung memory, the later shipments have been a mix of Hynix and Samsung, mainly Hynix now, with Samsung prioritized for the Vega 64s as I recall.

    BIOS flashing is not recommended for the Hynix cards, since the BIOS you flash from (for the Pulse that's usually the Vega 64 Nitro+ non-LE version) carries memory data for the Samsung chips, and these don't match properly. So there's no easy way to increase voltage, and even with higher voltage the max is going to be around 900-950MHz.

    That's unlike Samsung HBM2 memory, where the max is around 1100-1150MHz. It varies a bit, but the GPU scales up to at least 1100MHz as long as the core clocks also keep up, around 1600-1650MHz or so, from my reading and what I've been looking into so far on Vega GPUs, how they handle clock speeds, and how well the core and memory scale at higher speeds. :)

    So I'm a bit capped in overall performance gains, but in turn I can lower the voltage down even further and keep clock speeds around 1450-1500MHz at 1050mV or possibly even lower, for a nice reduction mainly in heat, though the reduced power draw isn't bad either.
    The performance difference in benchmarks is going to be up to around 3-4% give or take depending on the test, but most games will show 2% or less, so cutting the default voltage and power draw down to 200W or even lower is a pretty good win.


    Vega 64, either the Nitro or the stock card with water cooling, is of course going to excel. The binned GPUs for the water-cooled model in particular also scale nicely at lowered voltage while retaining the higher core speeds without throttling, allowing for the best of both, so I believe they're going to be around 7-10% faster depending on the game. Above 1100MHz on HBM2 the gains start dropping off a bit, though, so if the GPU core clocks are similar at 1600MHz or so, or possibly even lower on an air-cooled GPU with a blower fan, it might not benefit as much from higher-clocked memory. It will depend on workload and tasks: memory could still be really important for a few programs, and a couple of games could also benefit even if core clocks aren't as high as they could be.

    Still have more to learn about how all this works, but overall GPU wattage should be around 150-180W for 200-220W total draw, I think. Depending on boosts, clock speeds and GPU load this will fluctuate a bit, but it's still better than stock, with a minimal performance impact, and it's easier to cool effectively. I'm hitting around 60-70 degrees Celsius now depending on the game, which isn't bad. It could be better, but it's still a good result for the GPU temperature. :)

    I do keep GPU voltage a bit higher than probably needed, just in case it spikes, to keep things stable if anything happens. Memory is mainly kept at 850MHz for now, which is still a good gain but doesn't stress it as much as going for the cap at around 900MHz on this BIOS and this particular GPU. That allows a small boost above the default 410 GB/s bandwidth. (480 GB/s for Vega 64.)

    The performance gain from memory alone, going from 800 to 1050MHz, seems to land around 6% or so, though it was tested with a Vega 64 GPU (1650MHz core clock). It means memory isn't the de facto bottleneck on the GPU, but it's still a nice boost if it can be maintained.

    So memory is unlikely to be the performance stopper, since the resulting gains weren't that high for how large the boost to memory speed was. It's still a gain, but at roughly 2% for every 10% of memory overclock the returns aren't the best. Then again, it's also free performance, since the voltage remains the same, capped at 1.35V, as long as it's stable without throttling from thermals or timings.
    (Which is also where the water-cooled model of Vega can really help, ensuring all components remain sufficiently cooled to avoid anything like that.)
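    (For reference, the bandwidth figures quoted above fall straight out of the bus math; a quick sketch, assuming the 2048-bit HBM2 bus of Vega 56/64 and double data rate:)

    ```python
    # HBM2 bandwidth in GB/s = memory clock (MHz) x 2 (DDR) x bus width / 8.
    BUS_WIDTH_BITS = 2048  # two HBM2 stacks on Vega 56/64

    def hbm2_gbs(mem_mhz: float) -> float:
        return mem_mhz * 2 * BUS_WIDTH_BITS / 8 / 1000

    for mhz in (800, 850, 945, 1100):
        print(f"{mhz}MHz -> {hbm2_gbs(mhz):.0f} GB/s")
    # 800MHz -> 410 GB/s (stock Vega 56), 945MHz -> 484 GB/s (stock Vega 64),
    # 1100MHz -> 563 GB/s
    ```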



    EDIT: "Front loaded" I believe is what it was called: the shader cores or clusters can't always be fed, or aren't utilized in full, depending on the workload. And while improved, the card still has some geometry performance issues, though it's not as bad with ROPs and texture units and such as was assumed when the card launched. :)


    And little by little the drivers have kept delivering performance increases, 1-2% here and there, and over time it adds up. Unless the game was totally borked, don't expect too much from the driver alone, but together with keeping the GPU clocks high it's a nice overall boost in performance.


    EDIT: I still have more to understand and learn about how the GPU works, what its limits are and where it runs into problems. I doubt I'll understand everything, but it's an interesting card so far and a nice piece of hardware.

    It will be interesting to see what Navi can do from here, if AMD can get it launched in the first half of 2019 or so, maybe to compete a bit with NVIDIA. But NVIDIA might still be faster with the 1180, or whatever the next series will be called, and that would land them a few months as having the undisputed fastest cards on the market.
    (And if they slash prices a bit on Pascal, and if there's an excess of GPU stock, that could also limit AMD a bit, with the 1070 and up giving Vega competition.)


    But for now it's all just speculation.
     
    Last edited: Aug 13, 2018
    Dekaohtoura and OnnA like this.
  10. OnnA

    OnnA Ancient Guru

    Messages:
    9,482
    Likes Received:
    1,938
    GPU:
    Vega 64 XTX LiQuiD
    You also need to remember that with UV you can lower the voltage spikes ;)
    Also, when undervolted (it needs to be done), a Vega 64 can end up as the most power-efficient GPU in the world (with 1440p Ultra gaming in mind).
    Taking Forza Horizon 3 as an example, I get ~64-69 tW (total watts) including the CPU, lol (played #Forzathon yesterday and tested my new RAM settings with 1T + GearDown, CL14-15-16-15 35-52).
    All Ultra + MSAAx4 & FXAA at a constant 70FPS (not a single dip :D) @ 1717/1150 [1.087V & Infinity Fabric at 0.962V], though HWiNFO shows ~1550MHz top.

    But when you look at tests on the web it can be somewhat confusing
    -> You see 250-340tW !!! lol, which is simply not true at all :D
    But then again we are PC enthusiasts, not console players (I mean plug & play),
    so we can and will tweak our hardware :rolleyes: and with the Vega uArch it is strongly recommended to give it a proper UV.
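    (For anyone wondering why UV pays off so much: dynamic power scales roughly with frequency times voltage squared. A rough sketch with illustrative numbers, not measured figures:)

    ```python
    # Rule of thumb: dynamic power ~ frequency x voltage^2, so a modest
    # undervolt cuts power much faster than it cuts clocks.
    def relative_power(f1: float, v1: float, f2: float, v2: float) -> float:
        return (f2 / f1) * (v2 / v1) ** 2

    # e.g. a stock-ish 1632MHz @ 1.200V versus ~1550MHz @ 1.087V:
    # roughly 22% less dynamic power for ~5% lower clocks.
    print(relative_power(1632, 1.200, 1550, 1.087))  # ~0.78
    ```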

    Edit.
    Also played some Diablo III today (Season 14), all maxed, AA + ReShade 3.4.
    HWiNFO shows 1220MHz! -> Yes, that's all it takes to enjoy D3 @ 1440p Ultra (don't ask me about the tW, because it's too low to mention ;) )

    Also, when you look at FreeSync monitor sales at Caseking or Mindfactory, you'll see that many people have a Vega 64/56 and are happy with it; along with FreeSync it makes the perfect gaming combo of the decade.
     
    Last edited: Aug 13, 2018

  11. OnnA

    OnnA Ancient Guru

    Messages:
    9,482
    Likes Received:
    1,938
    GPU:
    Vega 64 XTX LiQuiD
    Hmm, I'm just wondering: how many watts do you need when 8GB or 16GB of GDDR6 is used?
    According to some Samsung specs, it's around 4W per 1GB, up to even 10W (for the faster parts).
    So when lower-end Navi comes with GDDR6 (I hope it will get HBM), it will consume an additional 48W, or 96W for a 16GB model (taking an average of 6W per GB).
    IMO HBM2 is the only future....

    PS.
    You need to remember that the GDDR6 chips on GPUs are high-voltage parts ready for OC ;)
    So those numbers can get even higher.

    8GB of HBM2 takes ONLY ~18-24tW when OCed to a brutal 1200MHz, ~630GB/s :eek:
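    The arithmetic above, spelled out (using the post's own rough per-GB figures, which are estimates rather than datasheet values):

    ```python
    # Memory power estimate: capacity x watts-per-GB.
    # Rough figures from the post: GDDR6 ~4-10W/GB (~6W/GB average),
    # versus ~18-24W total for 8GB of overclocked HBM2.
    def mem_power_w(capacity_gb: int, watts_per_gb: float = 6.0) -> float:
        return capacity_gb * watts_per_gb

    print(mem_power_w(8))   # 48W for an 8GB GDDR6 card
    print(mem_power_w(16))  # 96W for a 16GB GDDR6 card
    ```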
     
  12. OnnA

    OnnA Ancient Guru

    Messages:
    9,482
    Likes Received:
    1,938
    GPU:
    Vega 64 XTX LiQuiD
  13. OnnA

    OnnA Ancient Guru

    Messages:
    9,482
    Likes Received:
    1,938
    GPU:
    Vega 64 XTX LiQuiD
    Side note: what is Radeon Rays?
    Yes, ATI was first to innovate this tech.

    AMD had this technology on its Radeon Pro cards before every other company, back when it was known as "AMD FireRays".
    Now it's called "Radeon Rays".
    The $999 Radeon Pro WX 8200 has this technology, and it's the most affordable Vega Pro on the market.

    Radeon™ Rays (formerly AMD FireRays) is high-efficiency, high-performance GPU-accelerated ray tracing software.
    By tracing the paths of light rays moving through a movie or game scene, Radeon Rays simulates the effects of light reflecting and refracting through an environment and interacting with virtual objects, for stunningly photorealistic 3D images.

    -> https://pro.radeon.com/en/software/radeon-rays/
     
  14. OnnA

    OnnA Ancient Guru

    Messages:
    9,482
    Likes Received:
    1,938
    GPU:
    Vega 64 XTX LiQuiD
  15. OnnA

    OnnA Ancient Guru

    Messages:
    9,482
    Likes Received:
    1,938
    GPU:
    Vega 64 XTX LiQuiD
    Editorial about ray-traced gaming :D

    'All' my art is made in 3ds Max + Photoshop (with a grain of Illustrator and some Quark prints for friends),
    so I know ray tracing first-hand....

    1. How many traced paths (bounces) will it introduce?
    a. 1 path is meh (we already have pre-baked single-bounce reflections in games) [see GstRender.PostProcessQuality 3 in BF1 and look at the water pools]
    b. 2 paths? Nah, it will be better, but visually it's not enough IMO
    c. So we have 3 paths, yay! Yes, 3 paths is OK for all games. [As an example: you can see a guy in the 2nd mirror in a scene containing the guy and 2 mirrors; his silhouette is hidden and only visible through the mirror reflection - see the sketch after this list]

    2. What GPU will be needed for such a task? (I mean real ray tracing, not some imitation gimmick)
    a. A Vega 64 at least, with async OpenCL ray-trace calculation (we can get ~60FPS at 1440p with perfect optimisation in DX12 or VLK -> Yup :cool:)
    b. DX11 needs to go when the big boys play with ray tracing.
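    To make the two-mirror example from point 1c concrete, here's a toy sketch (all names made up for illustration, not a renderer) of why the guy only shows up once the tracer is allowed three bounces:

    ```python
    # Toy model: the camera looks at mirror A, mirror A reflects mirror B,
    # mirror B reflects the guy. Each reflection costs one bounce, so the
    # guy only becomes visible once 3 bounces are allowed.
    SCENE = {"mirror_a": "mirror_b", "mirror_b": "guy"}  # what each mirror shows

    def visible(start: str, max_bounces: int) -> str:
        seen, bounces = start, 1            # primary ray = bounce 1
        while seen in SCENE and bounces < max_bounces:
            seen = SCENE[seen]              # follow one more reflection
            bounces += 1
        return seen if seen not in SCENE else "just a mirror"

    for depth in (1, 2, 3):
        print(depth, "->", visible("mirror_a", depth))
    # 1 -> just a mirror, 2 -> just a mirror (mirror B is still reflective), 3 -> guy
    ```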

    Yes, we also have radiosity, and refractions + reflections, which add complication to the scene (almost at a geometric scale).
    IMO, ray-traced gaming is not here yet for sure. We need to look at DX12.1 & VLK to see what kind of ray tracing it will be (99% sure it is not what we have in 3ds Max or similar 3D software).

    --
    UPD.

    -> https://raytracey.blogspot.com/2010/04/comparing-path-tracing-image-quality.html

    Real ray tracing:
    4K = 8,294,400 pixels
    If we target a minimum of 30 frames per second, that's
    248,832,000 pixels per second to illuminate, so a budget of 6 gigarays/s works out to ~24 samples/pixel.
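    The same ray-budget arithmetic, written out:

    ```python
    pixels_4k = 3840 * 2160               # 8,294,400 pixels
    pixels_per_second = pixels_4k * 30    # 248,832,000 at 30FPS
    gigarays_per_second = 6e9             # a GPU casting 6 gigarays/s
    print(gigarays_per_second / pixels_per_second)  # ~24 rays (samples) per pixel
    ```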

    --
    Wikipedia:
    “CGI for films is usually rendered at about 1.4–6 megapixels. Toy Story, for example, was rendered at 1536 × 922. The time to render one frame is typically around 2–3 hours, with ten times that for the most complex scenes. This time hasn't changed much in the last decade, as image quality progressed at the same rate as improvements in hardware.”
     
    Last edited: Aug 21, 2018

  16. OnnA

    OnnA Ancient Guru

    Messages:
    9,482
    Likes Received:
    1,938
    GPU:
    Vega 64 XTX LiQuiD
    Last edited: Aug 21, 2018
  17. Only Intruder

    Only Intruder Maha Guru

    Messages:
    1,113
    Likes Received:
    141
    GPU:
    Sapphire Fury Nitro
    I think the ray tracing we'll be seeing implemented soon is a "simplified" solution, such as a low number of rays cast at half precision, followed by a noise filter to clean up the image.
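    A minimal sketch of that "few samples + denoise" idea: a 1-sample-per-pixel estimate is very noisy, and even a crude box filter recovers much of the signal. Real-time denoisers are far more sophisticated (often ML-based), but the principle is the same.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for a converged render: a smooth 256x256 gradient.
    truth = np.linspace(0.0, 1.0, 256).reshape(1, -1).repeat(256, axis=0)
    noisy_1spp = truth + rng.normal(0.0, 0.25, truth.shape)  # "1 spp" estimate

    def box_denoise(img: np.ndarray, radius: int = 2) -> np.ndarray:
        """Average each pixel with its (2r+1)^2 neighbourhood."""
        k = 2 * radius + 1
        padded = np.pad(img, radius, mode="edge")
        out = np.zeros_like(img)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    print(np.abs(noisy_1spp - truth).mean())              # ~0.20 error before
    print(np.abs(box_denoise(noisy_1spp) - truth).mean()) # much lower after
    ```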

    With that being the case, I can see GCN already being capable of it using asynchronous compute, and certainly more so with Vega having double-rate FP16. Also remember that GCN supports TensorFlow, so similar calculations can be performed on AMD hardware (how effective it is I don't know, but I've seen reports that the RX 480 has the same output as a 980 Ti in tensor workloads).

    The big news from NVIDIA about their RTX hardware is mostly the software package they've gained from their use of AI development to refine denoising, which in their terms will be a hardware + GameWorks software package, and that will obfuscate and segment the market as we've seen in the past with PhysX and other GameWorks libraries.

    Personally, I reckon NVIDIA are making a power play with Turing to try and seize control of ray-traced augmented rendering standards.
     
  18. OnnA

    OnnA Ancient Guru

    Messages:
    9,482
    Likes Received:
    1,938
    GPU:
    Vega 64 XTX LiQuiD
    I have one word :D
    2005's F.E.A.R. (I had an ATI 1950 Pro 512MB back then)
    Those volumetric lighting & shadows + caustics were phenomenal.

    As you can see, we had good tech.
    Now boot up a game from today and show me a similar effect? :p
    Where is it? Why isn't it here any more? Questions...

     
    Last edited: Aug 21, 2018
  19. z8373767

    z8373767 Member Guru

    Messages:
    176
    Likes Received:
    50
    GPU:
    Fury Nitro 1070/600
    These effects are very common in modern titles:
    BF1, RE7, The Evil Within, Hunt: Showdown, Witcher 3 (remember the Magic Lantern in Towerful of Mice?)
    A better question is: "where are F.E.A.R.'s physics and AI?"
    I mean, when you fought an enemy it was very effective: paper flying off desks, hot steam from bullet holes in walls (Max Payne 3 took it to the next level), and enemies that "think" on the battlefield.
    I had a silly situation once, in F.E.A.R. of course. I hid in a small room with one exit, so the enemies rushed the entrance and threw a grenade at me. I tried to escape and they just killed me xD

    We have 6-12 threads in our CPUs. Devs, use them for physics and AI, not Denuvo or VMProtect :/
     
    OnnA likes this.
  20. OnnA

    OnnA Ancient Guru

    Messages:
    9,482
    Likes Received:
    1,938
    GPU:
    Vega 64 XTX LiQuiD
    -> At 4:09 you'll see the 2x Vega Pro DaVinci edit :D

     