NVIDIA Enables DXR Real-Time Ray Tracing Support on GeForce GTX 10 and GTX 16 Series

Discussion in 'Frontpage news' started by Clawedge, Mar 18, 2019.

  1. moo100times

    moo100times Member Guru

    Messages:
    147
    Likes Received:
    57
    GPU:
    295x2 @ stock
@Denial - thanks for that. I didn't realise it was natively supported through the driver + DX12; I thought it was another HairWorks-style proprietary scenario. In that case, I'm even more keen to see side-by-side performance figures of current-gen RT vs non-RT cores.
That being said, Nvidia choosing to enable this feature on lower-end cards that lack the dedicated hardware you pay a premium for risks damaging their own brand, particularly since, as has been pointed out, AMD don't even support this on their cards yet, so there's little competition to worry about. Why do it, then?
     
  2. DrKeo

    DrKeo Member

    Messages:
    35
    Likes Received:
    12
    GPU:
    Gigabyte G1 970GTX 4GB
It depends on how many rays you have to cast into the scene. Metro, for instance, runs 66% faster on a $350 RTX 2060 than on a $700 GTX 1080 Ti.

In the end it's performance vs value. If more developers use DXR, more games will run faster on RTX cards. It's like choosing between a 1080 Ti and a GTX 1070: you have to see what works for you price/performance-wise.
     
    Last edited: Mar 19, 2019
  3. Undying

    Undying Ancient Guru

    Messages:
    12,638
    Likes Received:
    2,047
    GPU:
    Aorus RX580 XTR 8GB
That's the wow factor Nvidia is going for: "it runs so much faster on a 2060 than on a 1080 Ti, I must get one of those RTX cards." Desperate times call for desperate measures.
     
  4. Astyanax

    Astyanax Ancient Guru

    Messages:
    5,341
    Likes Received:
    1,581
    GPU:
    GTX 1080ti
Spoken from a position of pure ignorance and disregard for graphics history.

You know, most people picked up GeForce MX-series cards when the first combiner-style pixel shaders (PS 1.0) came out; it wasn't the GeForce 3s and 4s people were chasing. The capabilities weren't all that great then, nor with the SM2 cards that followed.

It wasn't until the fourth generation of pixel shaders that Nvidia got all the dots checked off: 1.0 and 1.3 were disregarded because faster fixed-function cards existed, 2.0 was a hard fail, and 3.0 is when Nvidia finally got the feature set and functionality right.

Same again with Fermi: the first generation of tessellation, and they didn't get it right until Maxwell.

The initial releases of tech-pushing hardware are not about you as the consumer.



I don't feel bad at all for the people who paid $250 more for the GT200a; they also cried when the GT200b released with higher performance at a lower cost.

The price difference between those two chips of the same architecture is also the kind of saving we can expect from a TU10x shrink.
     
    Last edited: Mar 19, 2019
    Aura89 and airbud7 like this.

  5. DrKeo

    DrKeo Member

    Messages:
    35
    Likes Received:
    12
    GPU:
    Gigabyte G1 970GTX 4GB
Yup, it doesn't matter if it's the Crytek demo or BFV: if people see that the RTX cards perform much faster, all that "what is RTX for anyway" talk will go away. I mean, it's cool that the Crytek demo runs at 30 FPS at 4K on a Vega 56, but if the RTX 2060 runs it at 80 FPS, then NVIDIA's PR has done its job.
     
  6. H83

    H83 Ancient Guru

    Messages:
    2,882
    Likes Received:
    469
    GPU:
    MSI Duke GTX1080Ti
My 1080Ti can do ray tracing, yay! :D

    Too bad RT effects are basically useless right now...
     
  7. Denial

    Denial Ancient Guru

    Messages:
    12,658
    Likes Received:
    1,880
    GPU:
    EVGA 1080Ti
It probably won't, though, because SVOGI isn't accelerated by RT cores. It's a completely different technique that uses a voxel grid and cone tracing to approximate ray tracing - it doesn't use DXR at all.
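For anyone curious what "a grid and cones" looks like in practice, here's a minimal C++ sketch of the cone-marching idea behind SVOGI-style voxel cone tracing. It assumes a prefiltered voxel grid with mip levels; the types and the sampleVoxelGrid helper are illustrative stand-ins, not Crytek's actual code.

```cpp
#include <algorithm>
#include <cmath>

// Illustrative types; not Crytek's actual data structures.
struct Vec3 { float x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
struct Vec4 { float r, g, b, a; };   // rgb = radiance, a = occlusion

// Placeholder for a trilinear lookup into a prefiltered voxel grid;
// coarser mip levels hold lower-resolution, pre-averaged lighting.
static Vec4 sampleVoxelGrid(Vec3 /*pos*/, float /*mipLevel*/) {
    return {0.1f, 0.1f, 0.1f, 0.05f};        // dummy constant for the sketch
}

// March one cone through the grid. Instead of casting many rays, the
// cone's growing footprint is matched to ever-coarser mips, so no
// per-triangle (or DXR) intersection work is needed anywhere.
static Vec4 traceCone(Vec3 origin, Vec3 dir, float coneAngle, float maxDist) {
    Vec4 accum = {0, 0, 0, 0};
    float dist = 0.1f;                       // offset to avoid self-sampling
    while (dist < maxDist && accum.a < 1.0f) {
        float diameter = 2.0f * dist * std::tan(coneAngle * 0.5f);
        float mip = std::log2(std::max(diameter, 1.0f));  // footprint -> mip
        Vec4 s = sampleVoxelGrid(origin + dir * dist, mip);
        float w = (1.0f - accum.a) * s.a;    // front-to-back compositing
        accum.r += w * s.r; accum.g += w * s.g; accum.b += w * s.b;
        accum.a += w;
        dist += std::max(diameter * 0.5f, 0.05f);  // step grows with the cone
    }
    return accum;
}
```

Because each cone sample covers a whole region of the scene at some mip level, a handful of cones can stand in for hundreds of rays, which is also why none of this needs DXR or RT cores.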
     
  8. Kaarme

    Kaarme Ancient Guru

    Messages:
    1,815
    Likes Received:
    537
    GPU:
    Sapphire 390
I say it's emulation if you use general-purpose shader cores to do the work of specialised RT cores.

Let's get back to this once turning on RTX doesn't drop the fps by more than a couple of points at most. That's the whole point I'm trying to make as far as dedicated RT cores go: if an RTX 2080 Ti can play anything at 4K with RTX off, it should play anything at 4K with RTX on. If it can't, then the RT cores aren't fulfilling their one job sufficiently and the extra cost isn't justified.

I'm not contradicting myself. There's no way a regular GPU could do the same work as a GPU containing dedicated RT cores, unless it's some never-before-seen GPU. That's why I'm saying RT itself needs much more work to be really usable, because it isn't right now. I don't care if MS is behind it either; everybody knows half of the stuff MS makes doesn't work in the first place. Just look at those ridiculous Win10 updates that end up making countless Windows PCs unbootable every time. The RT currently accelerated by Nvidia's RTX is just one way of implementing a ray-tracing effect in games.

Don't make it so demanding that even a bloody $1,300 video card can't handle it. You can't sell stuff like that.

Looking back at Huang's presentation, RTX was their major excuse for making the cards so expensive. People didn't end up buying it, though, as the disappointing sales made evident.
     
  9. HardwareCaps

    HardwareCaps Master Guru

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
1. You didn't account for inflation; $300 in 2000 is not the same as $300 in 2016, so the whole chart is basically pointless (see the rough adjustment below).
2. My point stands: Nvidia is trying to make RTX their proprietary advantage. Just like pixel shading or tessellation, the feature took off once it was affordable and the competition had it as well.
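To put a rough number on the inflation point: using approximate US CPI annual averages (about 172 for 2000 vs about 240 for 2016, figures from memory), $300 in 2000 works out to roughly $300 × 240 / 172 ≈ $418 in 2016 dollars.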
     
  11. Kool64

    Kool64 Master Guru

    Messages:
    554
    Likes Received:
    194
    GPU:
    Gigabyte GTX 1070
It would be interesting to see if this can work with a dedicated GPU the way PhysX did.
     
    Strange Times likes this.
  12. Loobyluggs

    Loobyluggs Ancient Guru

    Messages:
    3,368
    Likes Received:
    586
    GPU:
    EVGA 1070 FTW
As a 1070 user, I'm interested in having the option in UE4 for baking lightmaps and the like - that's pretty nice.
     
  13. SamuelL421

    SamuelL421 Member Guru

    Messages:
    126
    Likes Received:
    77
    GPU:
    1080ti / 11gb
My thoughts exactly: enabling DXR without RT cores is going to be unimpressive, if not unplayable, on Pascal. The whole thing is a marketing stunt designed to make RTX cards look more impressive by comparison.

Now we just need a game that supports both DX12 mGPU (SLI) and DXR. The SLI info I've seen for BFV and Metro is all hacked together and using DX11. Assuming you could run two Pascal GPUs with decent scaling in DX12 with DXR enabled, you could potentially get acceptable fps even with ray tracing. It doesn't look like Nvidia has any plans to contribute meaningfully to mGPU/SLI in the future, so it's up to the community and developers if we ever hope to see a good DX12 solution for more than one GPU.
     
  14. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,773
    Likes Received:
    1,547
    GPU:
    HIS R9 290
I doubt you can do this, but I think it'd be cool if you could take something like a 1060 3GB and use it as a dedicated DXR processor. For that matter, I don't see why Nvidia doesn't just release a discrete card composed of RT cores. It's still their branding and product, so even if AMD users buy those AIBs, Nvidia is still making money.
     
  15. Denial

    Denial Ancient Guru

    Messages:
    12,658
    Likes Received:
    1,880
    GPU:
    EVGA 1080Ti
Because all the RT cores do is accelerate one part of the entire RT pipeline (the traversal and intersection testing through the BVH). It would be like having a dedicated card for pixel shading. The cost of pulling/pushing the BVH data to and from the dedicated card would probably cause a bigger performance penalty than you'd gain from having a dedicated card in the first place.
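Since "traversal and intersection through the BVH" is doing a lot of work in that sentence, here's a rough C++ sketch of that inner loop. The node layout and helpers are assumptions for illustration, not NVIDIA's actual hardware format; the point is how tight and self-contained the accelerated portion is.

```cpp
#include <cstdint>
#include <vector>

// Illustrative BVH node; the layout is an assumption for this sketch.
struct Aabb { float min[3], max[3]; };
struct BvhNode {
    Aabb bounds;
    int32_t left = -1, right = -1;     // child indices; -1 means leaf
    int32_t firstPrim = 0, primCount = 0;
};

// Slab test (ray vs axis-aligned box) - one of the two fixed operations.
static bool rayHitsAabb(const float o[3], const float invD[3], const Aabb& b) {
    float tNear = 0.0f, tFar = 1e30f;
    for (int k = 0; k < 3; ++k) {
        float t0 = (b.min[k] - o[k]) * invD[k];
        float t1 = (b.max[k] - o[k]) * invD[k];
        if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }
        if (t0 > tNear) tNear = t0;
        if (t1 < tFar)  tFar = t1;
        if (tNear > tFar) return false;
    }
    return true;
}

// Placeholder for the other fixed operation (ray vs triangle).
static bool rayHitsTriangle(const float[3], const float[3], int32_t) {
    return false;                      // stub so the sketch links
}

// The loop the RT cores run in fixed function on Turing; on Pascal it
// falls back to the general shader cores. Walk the BVH, prune subtrees
// whose box the ray misses, test triangles only in reached leaves.
bool anyHit(const std::vector<BvhNode>& nodes,
            const float origin[3], const float dir[3], const float invDir[3]) {
    std::vector<int32_t> stack = {0};  // start at the root node
    while (!stack.empty()) {
        const BvhNode& n = nodes[stack.back()];
        stack.pop_back();
        if (!rayHitsAabb(origin, invDir, n.bounds)) continue;
        if (n.left < 0) {              // leaf: test its triangles
            for (int32_t p = 0; p < n.primCount; ++p)
                if (rayHitsTriangle(origin, dir, n.firstPrim + p)) return true;
        } else {                       // inner node: descend into children
            stack.push_back(n.left);
            stack.push_back(n.right);
        }
    }
    return false;
}
```

Everything outside this loop - generating rays, shading hit points, denoising - still runs on the regular shader cores, which is why shipping just the BVH work to another card would mean constant traffic across the bus.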
     

  16. HARDRESET

    HARDRESET Master Guru

    Messages:
    520
    Likes Received:
    189
    GPU:
    1080Ti G1 GAMING OC
Look here!
Just like Blizzard wanted DX12 on Windows 7, and now this. Well, well, show me the MONEY :D
     
    Last edited: Mar 19, 2019
  17. DrKeo

    DrKeo Member

    Messages:
    35
    Likes Received:
    12
    GPU:
    Gigabyte G1 970GTX 4GB
1) We were talking about the reflection ray-tracing demo Crytek did, not SVOGI. Ray tracing will be accelerated by the RT cores as long as it is made using the Vulkan or DX12 RT API.
2) SVOGI can be accelerated by the RT cores. SVOGI is based on the intersection between a ray and a voxel (a box), and half of the RT core is a unit that specialises in ray -> box intersection (a minimal slab test for exactly that is sketched below). The only problem is that Crytek used their own implementation, so they would have to redo their SVOGI to use DXR or the Vulkan RT API.
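The ray -> box test mentioned in point 2 is typically the classic slab method. A minimal standalone C++ sketch (the names and example values are illustrative):

```cpp
#include <algorithm>
#include <cstdio>

// Slab-method ray/box test: the fixed operation half of an RT core
// specialises in. A voxel is just an axis-aligned box, so SVOGI's
// ray-vs-voxel checks reduce to exactly this.
bool rayIntersectsBox(const float o[3], const float invD[3],
                      const float bmin[3], const float bmax[3]) {
    float tNear = 0.0f, tFar = 1e30f;
    for (int axis = 0; axis < 3; ++axis) {
        // Distances along the ray to the two slab planes on this axis.
        float t0 = (bmin[axis] - o[axis]) * invD[axis];
        float t1 = (bmax[axis] - o[axis]) * invD[axis];
        if (t0 > t1) std::swap(t0, t1);
        tNear = std::max(tNear, t0);
        tFar  = std::min(tFar, t1);
        if (tNear > tFar) return false;   // slab intervals don't overlap
    }
    return true;
}

int main() {
    // Unit cube at the origin; a ray from (-2, 0.5, 0.5) pointing +X hits it.
    const float o[3]    = {-2.0f, 0.5f, 0.5f};
    const float invD[3] = {1.0f, 1e30f, 1e30f};  // 1/dir; large value stands in for 1/0
    const float bmin[3] = {0, 0, 0}, bmax[3] = {1, 1, 1};
    std::printf("hit: %d\n", rayIntersectsBox(o, invD, bmin, bmax));
    return 0;
}
```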

It's not emulation; it's a classic compute task being handled by the GPU. RT isn't HairWorks - it's not a feature that NVIDIA built. Doing RT is simple as hell; the RT cores are pretty dumb and can only do two operations. It's like building a chip that's really good at multiplying 5 by 6: you can run the same multiplication on a CPU or a GPU and it won't be emulation, it will just run faster on the specialty chip.
That isn't how the world works. When you use RT there is extra compute work: the RT cores just handle the heavy lifting of collision detection, while the rest of the GPU still has to calculate some of the RT elements, like the texture colour or the shader of the hit surface. You can't get it for free, but you do get a huge speed bump. It's cool new tech, and when was the last time cool new tech just worked at max resolution and max fps? It's not like everything had shaders and performance was super amazing when the GeForce 3 came out; games with shaders hit the GeForce 3 hard, but it was a revolution. If you want to play everything at 4K/60, wait for the 3000 series. But I will tell you this: don't expect non-RTX cards to give you 4K/60 either.

In the end RTX cards are expensive because NVIDIA doesn't have competition; they are effectively a monopoly. They can charge whatever they want, and they are using the RTX features to justify the price. Just look at the ~$300 range, where AMD has some presence: suddenly NVIDIA is competitive too, with the RTX 2060.
There is only one way to implement RT: throw rays into the scene and look for hits. It's called physics. Even if you are looking at really low-res solutions like SVOGI, it is still accelerated by RTX if you use DXR to code it, because every RT effect ever will rely on the single concept every RT method shares: detecting collision between a ray and an object. That's what the RT core does, and that's what real light does - it hits things and bounces. There isn't any other way to do it. You can do SVOGI, which is just RT with a very low-res world representation, but it is still rays intersecting with objects, which is what the RT cores do.

You are looking for black magic - doing RT without doing RT - and it doesn't exist. You can throw fewer rays, and you can do smart things to get faster results, like using a lower-res world representation, but in the end you need to throw rays and calculate collisions between them and the world, and that's what the RT cores do. So use any method you like; the RT cores will accelerate it.

I guess you know nothing about RT. A fully ray-traced scene in a modern game won't run at 1080p on the $1,200 card; nothing will run it smoothly. It has nothing to do with NVIDIA, it's just the way RT works: you need to do X calculations per second, so you have to brute-force it and wait until cards are powerful enough to do RT at the common resolution of the time.

An excuse is a good description. Turing is a huge die with or without RTX, and their profit margin is probably huge because they control over 80% of the market. If AMD had 2080 Ti-level performance for $800, the 2080 Ti would have been ~$800.
     
    Last edited: Mar 19, 2019
  18. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,773
    Likes Received:
    1,547
    GPU:
    HIS R9 290
That makes sense, but to my understanding it's already kind of a discrete processor to begin with. The only difference is that by integrating it onto the same PCB, you get DMA (with the VRAM) and avoid PCIe overhead. But if the discrete processor holds the same necessary data as the GPU, DMA hardly matters*, so that just leaves PCIe overhead as the main issue. And since RT is so slow to begin with, I don't think that overhead is all that relevant.

* Of course, there is a lot of in-between I don't know about. For example, I'm not sure in what order things are calculated. If the RT cores are constantly sending and receiving data many times for a single frame, then a discrete card wouldn't be a viable option at all. But if all rays are traced "in one go" per frame, then I think it would be possible.
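For a rough sense of the gap being discussed: PCIe 3.0 x16 tops out around 16 GB/s per direction, while a 1080 Ti's local GDDR5X runs at roughly 484 GB/s, about a 30x difference. So any scheme that shuttles BVH or hit data across the bus many times per frame pays a steep toll, while a "one go" per frame scheme might stay within budget.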
     
  19. Denial

    Denial Ancient Guru

    Messages:
    12,658
    Likes Received:
    1,880
    GPU:
    EVGA 1080Ti
I just want to point out you're quoting my name for things Kaarme said. It's not a big deal, it just confused the hell out of me lol
     
    Kaarme likes this.
  20. Only Intruder

    Only Intruder Maha Guru

    Messages:
    1,211
    Likes Received:
    168
    GPU:
    Sapphire Fury Nitro
The way I see it, Pascal doesn't have concurrent execution (it has to switch between graphics and compute workloads), which is why it suffers such a huge performance penalty. If the tasks were scheduled carefully (which is what Nvidia do with their Game Ready drivers anyway) it wouldn't be so bad, but it would still be slower because of the context switching.

It's not that Pascal doesn't have the power; it looks to me like a scheduling issue, and we know Nvidia won't optimise drivers so that Pascal can make full use of RTX. A toy cost model of the difference is sketched below.
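To illustrate the context-switching argument with numbers, here's a toy C++ cost model. The millisecond figures are invented for the example and are not measured Pascal or Turing timings.

```cpp
#include <algorithm>
#include <cstdio>

// Toy cost model only: all numbers below are assumptions for illustration.
int main() {
    const float gfxMs    = 10.0f;  // assumed rasterization work per frame
    const float rtMs     = 6.0f;   // assumed DXR/compute work per frame
    const float switchMs = 0.5f;   // assumed cost per graphics<->compute switch

    // Concurrent execution (Turing-style): the workloads overlap.
    const float concurrent = std::max(gfxMs, rtMs);
    // Serialized execution (Pascal-style): workloads queue up, plus switches.
    const float serialized = gfxMs + rtMs + 2.0f * switchMs;

    std::printf("concurrent: %.1f ms (~%.0f fps)\n", concurrent, 1000.0f / concurrent);
    std::printf("serialized: %.1f ms (~%.0f fps)\n", serialized, 1000.0f / serialized);
    return 0;
}
```

With these made-up numbers the serialized path lands around 59 fps versus 100 fps when the workloads overlap, which is the shape of the penalty being described.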
     
