NVIDIA Enables DXR Real-Time Ray Tracing Support on GeForce 10 and GeForce 16 Series, GeForce GTX 10

Discussion in 'Frontpage news' started by Clawedge, Mar 18, 2019.

  1. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
    You clearly didn't read what I wrote:
    "
    developers know that Pascal is way, way more popular than Turing, so they can simply use DXR very lightly for minor things, focusing on the main consumer base (also, AMD will probably support DXR without dedicated acceleration hardware)
    "
     
  2. moo100times

    moo100times Master Guru

    Messages:
    577
    Likes Received:
    330
    GPU:
    295x2 @ stock
    @DrKeo - Ok, that is what Nvidia is saying, but it is turning out to be different from what is actually possible and what might be.

    If they are saying the same workload can be done using the INT32 part of the GPU, then what are RT cores offering over this, and what extra hurdles do developers need to overcome to make this work without tanking gaming performance? If these are features native to DX12 and a general implementation, why would you spend extra effort developing for these cores at all, and even more so if you are planning to port games to consoles, which are AMD-based hardware?
    Similar to what HardwareCaps says, I think this completely undercuts the whole RTX line. Not only is raytracing possible another way, it can also be implemented in a more developer-friendly way and across more hardware. In many ways, Nvidia might have done better to push the RTX narrative longer and lock out alternatives, a bit like PhysX, before giving up the ghost, rather than doing so six months after the release of an expensive new line. If I had bought a 1,000 USD graphics card and then saw this announcement, I would be pretty annoyed.
    You talk about raytracing being better with a dedicated hardware implementation, but RT core use doesn't yet seem compatible with a proper, decent-FPS gaming experience, which makes this kind of a moot point, as they are not of real benefit to the end user yet. If this DX12 feature can be applied to these cores and bring about a massive increase in performance, then fair play to Nvidia, they will save themselves, but they have not demonstrated this yet.

    Also, Nvidia did not offer a comparison of the 1660 Ti to their RTX cards, so there is no way to support or refute this. Comparing it to a 1080 Ti does not really support the claims, as it has little to do with what was announced for their new budget cards, which supposedly lacked this feature natively and yet can get it with a driver update.
    The paranoid side of me wonders if there will be some careful balancing of raytracing performance on these cards so they don't outperform Nvidia's other GPUs, but if AMD cards end up performing well here, there will be serious confusion as to which cards offer real bang for the buck.
     
    Last edited: Mar 19, 2019
  3. metagamer

    metagamer Ancient Guru

    Messages:
    2,596
    Likes Received:
    1,165
    GPU:
    Asus Dual 4070 OC
    Way lower? When the 2080 costs the same as the Radeon VII, why would they price it lower? Think before you post.
     
  4. Kaarme

    Kaarme Ancient Guru

    Messages:
    3,518
    Likes Received:
    2,361
    GPU:
    Nvidia 4070 FE
    Why are you comparing old and new generations anyway? The 1600 series has shown nice performance in traditional GPU work as well. Furthermore, you are only comparing Nvidia tech against... Nvidia tech with an obvious Nvidia bias as they want to sell their expensive RTX cards (which, btw, Nvidia itself has confessed haven't been selling that well). You are basically saying a game built to support the specific tech of dedicated hardware is not running so well on non-dedicated hardware. Really, now, who would have guessed?

    I'm not against raytracing. In fact, before the outrageous prices were announced, I was seriously considering a 2070. Since then, however, the raytracing price/performance of the RTX cards hasn't looked too impressive. Right now I'm hoping raytracing can take a more reasonable path. I don't believe the Crytek demo blindly, but if it works half as well as the demo suggested, it could be worth developing. In the end, if it's not dedicated hardware, it won't sit idle and useless in non-raytracing games.
     

  5. DrKeo

    DrKeo Guest

    The 1660 Ti is basically a GTX 1070 in performance. If a 1080 Ti can't hold half the performance of an RTX 2060, what do you think a 1660 Ti will do? A third of the performance? A quarter?

    Wake up, there is no black magic that will make a general-purpose GPU as fast as dedicated cores at RT. I know you all really like conspiracy theories and would love to prove somehow that NVIDIA actually placed Swiss cheese in the RTX cards instead of RT cores, but the RT cores are there and they accelerate ray -> box intersections and ray -> polygon intersections, while any other NVIDIA or AMD card will have to run those in general compute, which will just be slower.
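
    For anyone wondering what a ray -> box test actually is, here is a rough C++ sketch of the classic slab method (my own illustration, not NVIDIA's hardware implementation). Each test is just a handful of subtracts, multiplies and compares, but one DXR frame pushes millions of rays through a BVH full of boxes like this, which is exactly the part the fixed-function hardware speeds up:

    Code:
    #include <algorithm>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Classic "slab" ray/AABB test: clip the ray against the three pairs of
    // axis-aligned planes and check that the resulting intervals still overlap.
    // invDir holds 1/direction per component, precomputed by the caller.
    bool rayIntersectsAABB(const Vec3& origin, const Vec3& invDir,
                           const Vec3& boxMin, const Vec3& boxMax,
                           float tMin, float tMax)
    {
        const float o[3]   = { origin.x, origin.y, origin.z };
        const float inv[3] = { invDir.x, invDir.y, invDir.z };
        const float lo[3]  = { boxMin.x, boxMin.y, boxMin.z };
        const float hi[3]  = { boxMax.x, boxMax.y, boxMax.z };

        for (int axis = 0; axis < 3; ++axis) {
            float t0 = (lo[axis] - o[axis]) * inv[axis];
            float t1 = (hi[axis] - o[axis]) * inv[axis];
            if (t0 > t1) std::swap(t0, t1);   // ray may enter from either side
            tMin = std::max(tMin, t0);
            tMax = std::min(tMax, t1);
            if (tMin > tMax) return false;    // intervals no longer overlap: miss
        }
        return true;
    }

    int main() {
        Vec3 origin{ 0.0f, 0.0f, -5.0f };
        Vec3 dir{ 0.1f, 0.1f, 1.0f };                         // roughly towards the box
        Vec3 invDir{ 1.0f / dir.x, 1.0f / dir.y, 1.0f / dir.z };
        bool hit = rayIntersectsAABB(origin, invDir,
                                     Vec3{ -1, -1, -1 }, Vec3{ 1, 1, 1 },
                                     0.0f, 1000.0f);
        std::printf("hit = %s\n", hit ? "yes" : "no");        // prints: hit = yes
    }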

    I've got news for you: RT won't run as well on cards without dedicated RT hardware as it does on cards with it.

    I'll also add a thinking point for you: if all the RTX games use DXR, why hasn't AMD unlocked DXR in their drivers yet? DXR can run on any DX12 card, so why not let us see their numbers vs the Turing cards?
     
    Last edited by a moderator: Mar 19, 2019
  6. Glidefan

    Glidefan Don Booze Staff Member

    Messages:
    12,481
    Likes Received:
    51
    GPU:
    GTX 1070 | 8600M GS
    You can already use your GPU to do raytraced shadows in game engines. Unity does it through a GPU lightmapper, and you sometimes get a 100x jump in speed / in the number of rays you can shoot out.
    But even then, you'll have to wait a bit to get a single frame. So no. Even "small" things need a lot of calls and work.
    A single puddle that only reflects the player and nothing else will add a ton more draw calls.

    It's the difference between having dedicated hardware doing things in parallel and having to do a pass, then empty the registers, load new instructions and do something else, then go back to what you were doing before.

    What nVidia did here is allow older cards to run raytracing, so people see how hard it is on the previous super-duper cards and say "oooh, ok."
     
    Aura89 likes this.
  7. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
    This is an official Nvidia slide from the show:

    [IMG]

    The 1080 Ti does fairly well here; from the slide, the performance difference between the 2080 without RT cores and the 2080 with RT cores doesn't seem to be massive.
    A version optimized for non-RTX cards is likely to be viable, and that version will also be able to run on AMD & consoles.
    It's a no-brainer for developers.
     
    HARDRESET likes this.
  8. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    I don't know why you think developers need to develop for "RT cores" - all RT cores do is accelerate a specific part of DXR. Developers use DXR to cast rays; RT cores accelerate their traversal through the scene. That's it. That's all they do. On Pascal the rays don't get hardware acceleration, on Turing they do. The work for developers is identical.
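
    To make that concrete, here is a minimal C++/D3D12 sketch (assuming a recent Windows 10 SDK, linked against d3d12.lib) of how an app finds out whether DXR is available. The app only ever sees a raytracing tier; whether the driver backs TraceRay/DispatchRays with RT cores (Turing) or with a compute fallback (Pascal, after this driver) is invisible to the developer:

    Code:
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <cstdio>

    using Microsoft::WRL::ComPtr;

    int main()
    {
        // Create a device on the default adapter.
        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                     IID_PPV_ARGS(&device)))) {
            std::printf("No D3D12 device available\n");
            return 1;
        }

        // The app never addresses "RT cores" directly -- it only asks which
        // raytracing tier the driver exposes. Turing reports tier 1.0 and backs
        // DispatchRays/TraceRay with RT cores; Pascal (with this driver update)
        // reports the same tier and runs the traversal in general compute.
        D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
        if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                                  &options5, sizeof(options5))) &&
            options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
            std::printf("DXR supported (tier %d)\n", (int)options5.RaytracingTier);
        } else {
            std::printf("DXR not exposed by this driver/GPU\n");
        }
        return 0;
    }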

    To be fair, the performance delta in that game was the closest of all of them. Also, I would think that any optimizations you make for non-RTX cards would likewise improve performance on RTX cards.

    You have to remember that raytracing is a catch-all term for a bunch of different techniques. The SVOGI that Crytek uses for its implementation has been in the engine for years now - all they are doing here is applying it to reflections. The technique was originally developed by Nvidia for the Unreal engine. Nvidia then redeveloped it as VXGI, which you might remember from the Nvidia moon demo. I think there are a few games that use it for AO/GI, but it has its own set of issues with light bleed and ghosting due to its use of cones instead of individual rays as a source.

    The closer you get to what path tracing effectively does, the more realistic your lighting will be. The DXR/RTX approach is much closer than SVOGI.
     
    Last edited: Mar 19, 2019
  9. DrKeo

    DrKeo Guest

    But developers don't develop for the RT cores, they develop using DXR, which is a hardware-agnostic DX12 API for ray tracing. If AMD enables DXR in their drivers tomorrow, they will have RT. The question is why AMD isn't enabling DXR in their drivers. Probably because they don't want you to see a 2060 beat a Vega 64.

    This move doesn't undercut the RTX line; it's their marketing that was bad. RT cores are to RT what GPUs are to 3D graphics: 100% of what a GPU does you can do on a CPU, GPUs just do it faster. NVIDIA shouldn't have marketed RT as something only an RTX card can do; they should have marketed RTX as the card that handles RT like no card without RT hardware can.

    That's it, an RTX card just does RT faster. RT was done on CPUs in the '70s; there is nothing new about it. All NVIDIA has done is make a core that is really good at calculating ray intersections instead of brute-forcing them, so we got RT a few years earlier. That is the selling point: showing how a $350 2060 beats a $700 Radeon VII in RT.
     
    Last edited by a moderator: Mar 19, 2019
  10. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
    To butt in, you're completely right. RT cores are just there to accelerate ray tracing and improve performance.
    The issue is that RT cores are a huge minority among gamers. Pascal is extremely popular and that's not going to change soon; AMD doesn't have RT cores, and the same goes for consoles (probably next gen as well).
    So developers can choose either to target the majority of the market with light DXR that can still run "fine" on non-RTX cards, or push DXR hard and only run "fine" on RT-based GPUs.
    I wonder what's going to happen..... ;)
     

  11. Kaarme

    Kaarme Ancient Guru

    Messages:
    3,518
    Likes Received:
    2,361
    GPU:
    Nvidia 4070 FE
    I'm not sure what the 1660 Ti will do, but probably not much trying to emulate Nvidia's RTX, which more or less requires special hardware and still doesn't perform that well. The 2080 Ti should have a huge amount of horsepower in traditional GPU work, but the RTX load is dragging it down. When 1,300 dollars can't handle the RTX portion of the work well enough to make the overall level of performance fitting for a card that expensive, it simply means RTX as Nvidia imagined it doesn't work. It needs redesigning. The 2080 Ti is clearly a 4K card, but RTX doesn't allow that sufficiently from what I've seen.

    Yeah, except for the sad fact that the dedicated cores aren't fast either, the way they are currently used.

    I have no interest in your conspiracy theories, or anyone else's conspiracy theories for that matter. I'm happy Nvidia tried to get raytracing included in games, as it looks good, but currently it doesn't work as well as it should and the price is too high. It needs a lot of development. It would also be splendid if that development didn't make it Nvidia-exclusive, because nobody sane likes monopolies, except the owner of the monopoly. Currently anything Nvidia says needs to be taken with a dose of salt, because they are desperate to fix the disappointing sales numbers of the RTX cards. That's just business as usual.
     
  12. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
    The question is, what is the target? The slide there shows 1440p with RT on Ultra.
    What about low RT? If that can run as well as Ultra does on the 2080, there's no reason for Pascal owners to upgrade.
     
    HARDRESET likes this.
  13. Rx4speed

    Rx4speed Member Guru

    Messages:
    146
    Likes Received:
    66
    GPU:
    R9 295x2
    I don't care what the card(s) are called, RTX or GTX. If I cannot run games @ 1440p near 160 fps (I have a 165 Hz G-Sync monitor), I will NEVER turn on ray tracing. It's worthless to me right now. I own a 1080 Ti. Fortnite is the example here. In Battlefield games it's more like 120, but there is no way I'm playing a shooter at the frame rates you get with ray tracing on, around 60 fps. NO WAY.
     
  14. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
    The issue is: can RT cores justify the price increase & an upgrade over a Pascal or AMD-based GPU?
    From the slide, it seems like it's not worth it.
    The 2080 without RT cores runs at roughly 40 FPS, while using RT cores it runs at around 50 FPS.... that's a 25% increase in performance for dedicated hardware cores.
    And both run at sub-60 FPS.... so I don't get this.
     
  15. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    Lol, I mean, I don't think there is a reason for Pascal users to upgrade anyway. Nothing they've shown me with RTX has made me go "wow, I really need that" yet. It would be cool to have... if the card also came with a 40% performance upgrade (in traditional rendering) and didn't cost $1200, but I feel like it's something I'd never drop a bunch of money on specifically for that feature.

    I also kind of wonder where tensor cores come into play with all this. We know DICE doesn't use them for BF5 - what about all these other games?
     
    fantaskarsef likes this.

  16. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,040
    Likes Received:
    7,379
    GPU:
    GTX 1080ti
    ^ This

    It's also why NVENC is a dedicated fixed-function unit: it's faster, and context switches don't bog down rendering.
     
  17. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
    Well, that's an issue, since most of the market is still on Pascal, and this is why developers will allow for lower DXR options, which will really hurt Turing.
    Also, the cost increase for the RT cores makes little sense, as I wrote above.
    So I think it's a desperate attempt to push ray tracing, but it's going to hurt them eventually.
     
    HARDRESET likes this.
  18. DrKeo

    DrKeo Guest

    Why do you keep saying things like "emulate"? You are aware that all RTX games are created using DXR, and even NVIDIA's RTX API is written on top of DXR, right? There is no emulation; an RT calculation is one of the most basic actions a computer can do. The only problem is that it needs to do billions of them per second. RT cores aren't something you write for, they are just experts at calculating ray -> box intersections and ray -> polygon intersections; it's basically just 7th-grade math on a chip.
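
    If anyone wants to see how un-magical that math is, here is a rough C++ sketch of the textbook Möller-Trumbore ray -> triangle test (my own illustration, not what the RT cores literally run): a couple of cross products, a few dot products and some compares per candidate triangle, repeated billions of times per second:

    Code:
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                                  a.z * b.x - a.x * b.z,
                                                  a.x * b.y - a.y * b.x }; }

    // Moeller-Trumbore ray/triangle intersection: returns true and the hit
    // distance t if the ray (origin, dir) crosses triangle (v0, v1, v2).
    bool rayIntersectsTriangle(Vec3 origin, Vec3 dir,
                               Vec3 v0, Vec3 v1, Vec3 v2, float& t)
    {
        const float kEps = 1e-7f;
        Vec3 e1 = sub(v1, v0);
        Vec3 e2 = sub(v2, v0);
        Vec3 p  = cross(dir, e2);
        float det = dot(e1, p);
        if (std::fabs(det) < kEps) return false;   // ray parallel to the triangle
        float invDet = 1.0f / det;

        Vec3 s = sub(origin, v0);
        float u = dot(s, p) * invDet;               // first barycentric coordinate
        if (u < 0.0f || u > 1.0f) return false;

        Vec3 q = cross(s, e1);
        float v = dot(dir, q) * invDet;             // second barycentric coordinate
        if (v < 0.0f || u + v > 1.0f) return false;

        t = dot(e2, q) * invDet;                    // distance along the ray
        return t > kEps;
    }

    int main() {
        float t = 0.0f;
        bool hit = rayIntersectsTriangle({ 0, 0, -5 }, { 0, 0, 1 },
                                         { -1, -1, 0 }, { 1, -1, 0 }, { 0, 1, 0 }, t);
        std::printf("hit = %s, t = %.1f\n", hit ? "yes" : "no", t);  // hit = yes, t = 5.0
    }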

    Take a look at Metro: a $350 RTX 2060 gives us more than 2x the performance of a 1080 Ti. Do you really think that a Vega 64 is going to give you that? That some 14 TFLOPS Navi card will somehow magically give you multiple times the 1080 Ti in compute? It's not going to happen.

    You are contradicting yourself. On one hand, a regular GPU that uses compute to do RT is amazing and you don't need RT cores, but on the other hand, an RTX card that gives you 2x or 3x or 4x the performance is sad and not fast enough? So doesn't that make cards without RTX three times sadder?

    RT is hard and requires a lot of compute power; RT cores are there to accelerate that. A $350 RTX 2060 is pretty good value vs NVIDIA's old cards and AMD's current cards, and it will probably outperform any GPU that doesn't have RT cores for years to come.

    RTX cards aren't priced too high because of the RTX hardware; they are priced too high because NVIDIA is a monopoly at the high end, and as long as AMD can't compete, NVIDIA cards will be overpriced. The RTX hardware isn't 50% of the chip, it's much smaller, closer to 15% (if we compare the 1660 Ti with the 2060), and that's RT and tensor cores combined. Yeah, they could have made the RTX cards 10% cheaper without it, but as long as AMD isn't competitive in the $500+ segment, NVIDIA will do whatever they want.
     
  19. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    Yeah, but that will change eventually. I imagine the next generation will also have RT cores or some other accelerated method, and I'm sure AMD is working on its own dedicated silicon and/or method to accelerate BVH traversal. I think Nvidia's point is just to get people interested in the idea of RT in general, because right now everyone associates it with their brand.
     
  20. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
    Yes, and it's a risk. The risk is that RT cores won't justify their price increase and that people will see no value in RTX; a 25% performance increase when ray tracing is enabled is not enough.

    If we have to wait for next gen to justify RTX, Nvidia is in big trouble.
     
