Battlefield V gets support for Nvidia raytracing technology - demo

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Aug 21, 2018.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

  2. Fox2232

    Fox2232 Guest

    In b4 performance impact from character blinking. But for real, is it nVidia's API? Or is it standard DX12 raytracing run on nVidia HW?

    Those things should be differentiated. In one case it is a nightmare come true, in the other it is just bad marketing.
     
  3. fantaskarsef

    fantaskarsef Ancient Guru

    RTX is Nvidia's implementation of DX12 DXR. From what I understand, AMD can't run RTX; they would have to implement DXR support in their drivers separately (if not already done in some form).
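    For illustration, a minimal C++ sketch (assuming a Windows 10 SDK new enough to define D3D12_FEATURE_D3D12_OPTIONS5; structure and messages are made up for the example) of the vendor-agnostic side of this: an engine asking plain D3D12 whether the installed driver exposes a DXR raytracing tier, with no NVIDIA-specific library involved.

        // Minimal sketch: query standard D3D12 for DXR support on the default adapter.
        #include <d3d12.h>
        #include <wrl/client.h>
        #include <cstdio>

        #pragma comment(lib, "d3d12.lib")

        int main()
        {
            using Microsoft::WRL::ComPtr;

            // ID3D12Device5 is the interface the DXR entry points live on.
            ComPtr<ID3D12Device5> device;
            if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0, IID_PPV_ARGS(&device)))) {
                std::printf("No D3D12 device with ID3D12Device5 available\n");
                return 0;
            }

            // Ask the driver which raytracing tier (if any) it supports.
            D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
            device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));

            if (opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
                std::printf("Driver reports DXR tier 1.0 or higher\n");
            else
                std::printf("No DXR support reported; stick to rasterized effects\n");
            return 0;
        }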
     
  4. Fox2232

    Fox2232 Guest

    I am talking about implementation. The game either uses DX12 code written by someone in the studio, or it uses nVidia's proprietary library optimized for their HW. (With the usual consequences for other companies.)
     

  5. HardwareCaps

    HardwareCaps Guest

    Such a lame event. They talked about ray tracing (which is going to be supported by like 3 games over the next 6 months) 90% of the time.
    AI cores were shown with no demo, no performance comparison, and no roadmap for when any of this is coming.
    They didn't say a single thing about gaming performance or efficiency/power, and the cards are so expensive. Getting a cheap Pascal on a deal would be my goal.
     
  6. fantaskarsef

    fantaskarsef Ancient Guru

    Or AMD partners up to bring their own optimized library for Vega / Polaris / Navi.
     
  7. Fox2232

    Fox2232 Guest

    That's the thing. Developers are not going to write 2 separate code paths. DX12 was meant to be above that, and there could be different visuals as a result too, which would be very undesirable for any multiplayer game.
    Then there is the fact that AMD's "library" would be open source, so nVidia could optimize for it quite easily if devs used just theirs.
     
  8. fantaskarsef

    fantaskarsef Ancient Guru

    Sadly that story has been going on for years now, exactly like that. If anything, it's Nvidia's money making a closer / earlier adoption of RTX possible, since they spend $ (man-hours) on working together with m$. It only shows how lackluster DX12 is compared to what it was hailed as, and what it really is, YEARS later.
     
  9. Denial

    Denial Ancient Guru

    The goal behind DX12, aside from the CPU overhead stuff, was just to give developers lower-level access to the hardware. Picture a machine with 20 layers: in DX11 developers had access to layers 20-10, while 10-1 were a complete mystery to them... that's where AMD/Nvidia engineers reside and write driver code. With DX12 developers now have access to layers 20-5. The drivers still play a role, but the developers can see/change much more of what goes on at the very core of the system. The lower you go the more complicated things get, and the more you need to get in there and write things for specific vendors and architectures - work Nvidia/AMD driver teams have been doing for decades is now in the hands of game developers... but the lower you go with optimizing, the more performance you get out of those systems.
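    To make that concrete, here is a minimal C++ sketch (function and parameter names are placeholders, not from any particular engine) of the kind of bookkeeping DX12 hands to the developer: an explicit resource state transition that the DX11 driver used to track behind the scenes.

        #include <d3d12.h>

        // Sketch: explicitly transition a swap-chain buffer from "present" to
        // "render target" state before drawing into it. In DX11 the driver
        // inferred this; in DX12 the engine records it on the command list.
        void TransitionToRenderTarget(ID3D12GraphicsCommandList* cmdList,
                                      ID3D12Resource* backBuffer)
        {
            D3D12_RESOURCE_BARRIER barrier = {};
            barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
            barrier.Flags                  = D3D12_RESOURCE_BARRIER_FLAG_NONE;
            barrier.Transition.pResource   = backBuffer;
            barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_PRESENT;
            barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_RENDER_TARGET;
            barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
            cmdList->ResourceBarrier(1, &barrier);
        }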

    The thing is, most devs don't care to do that work. They have a budget for their project, and why would they dedicate 7-10 developers to re-architecting low-level code that Nvidia/AMD already handle pretty well? At most you get 10-15% more performance in certain scenarios - pretty cool, but you can get the same by just cutting a few poly counts down with a slider, which doesn't require a dedicated team. The big engine guys - companies like Epic, Unity, DICE with Frostbite, etc. - have a reason for it. They are architecting engines which many games utilize, so they already have those teams in place, with developers who are familiar with that level of development. But even they can't optimize for every scenario that random company A utilizing their engine is going to hit, so even there they aren't going down to the metaphorical "level 5" of optimization.

    Most of Nvidia's GameWorks libraries are open source now - the RTX libraries use DXR as the base, so as long as the RTX library is open, AMD shouldn't have a problem accelerating it. I don't think Nvidia has anything to gain by keeping the RTX libraries GeForce-only - I think they are going to rely on the fact that they've spent the last 10 years of R&D on deep learning/self-driving, are super far ahead there, and can use that to make their hardware better at accelerating it. The goal for them should be mass-scale adoption of DXR, which seems to be what they are trying to push.
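    As a small illustration of DXR being plain D3D12 rather than something GeForce-specific, a hedged C++ sketch (helper name and parameters are made up for the example) of querying how much memory a bottom-level acceleration structure would need - every type and call here is standard D3D12 raytracing API, answered by whichever vendor's driver implements DXR.

        #include <d3d12.h>

        // Sketch: ask the DXR runtime for the prebuild sizes of a bottom-level
        // acceleration structure over one triangle mesh. Parameter names are
        // illustrative placeholders.
        D3D12_RAYTRACING_ACCELERATION_STRUCTURE_PREBUILD_INFO
        QueryBlasPrebuildInfo(ID3D12Device5* device,
                              D3D12_GPU_VIRTUAL_ADDRESS vertexBufferVA,
                              UINT vertexCount)
        {
            D3D12_RAYTRACING_GEOMETRY_DESC geom = {};
            geom.Type  = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
            geom.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;
            geom.Triangles.VertexBuffer.StartAddress  = vertexBufferVA;
            geom.Triangles.VertexBuffer.StrideInBytes = 3 * sizeof(float); // tightly packed XYZ
            geom.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
            geom.Triangles.VertexCount  = vertexCount;

            D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
            inputs.Type           = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
            inputs.DescsLayout    = D3D12_ELEMENTS_LAYOUT_ARRAY;
            inputs.NumDescs       = 1;
            inputs.pGeometryDescs = &geom;

            // Whichever vendor's driver implements DXR services this call.
            D3D12_RAYTRACING_ACCELERATION_STRUCTURE_PREBUILD_INFO prebuild = {};
            device->GetRaytracingAccelerationStructurePrebuildInfo(&inputs, &prebuild);
            return prebuild;
        }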
     
    fantaskarsef likes this.
  10. Stormyandcold

    Stormyandcold Ancient Guru

    If the Tomb Raider RTX performance reports are anything to go by, then tbh I couldn't care less about proprietary code. The performance needs to increase by 100%+ to make it even viable, and if that means Nvidia needs to take it in-house, then so be it. Long-term is another story, but seeing as AMD and Intel can't run RT at anywhere near Nvidia's performance with their hardware, it's meaningless to them anyway.
     

  11. Mundosold

    Mundosold Master Guru

    I feel so old knowing that running after the latest GAME-CHANGING GPU tech is always a waste of money. Remember the DX10 hype? Everyone running out and spending huge on the first DX10 video cards? By the time DX10 was used to any meaningful degree, those cards were too obsolete to handle DX10 games anyway. You end up paying a huge early-adopter fee and getting nothing out of it. I guarantee that when RT takes off, the first game to make big use of it will barely even be playable on a 2080 Ti.
     
