New UL Raytracing Benchmark Will Not Be Time Spy

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Aug 28, 2018.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    37,027
    Likes Received:
    6,101
    GPU:
    AMD | NVIDIA
  2. slyphnier

    slyphnier Master Guru

    Messages:
    706
    Likes Received:
    47
    GPU:
    GTX1070
    It's kind of funny to me how this splits things up: raytracing gets its own dedicated part/path
    rather than being a global feature like tessellation, shaders, multi-threading, etc.

    Well, I guess that's because cards without raytracing cores perform so badly compared to cards with dedicated raytracing hardware (RTX).

    Until now raytracing has always been an expensive thing in CG, until NVIDIA brought it to consumers with Turing.

    I guess raytracing will stay "exclusive" for some time; it depends on how the market goes.
    If it becomes a common thing, then it will be considered a global factor.
     
  3. cowie

    cowie Ancient Guru

    Messages:
    13,205
    Likes Received:
    299
    GPU:
    GTX
    I am all for something new among the graphics benchmarks; another bench can't hurt.
    I think RT will be common in a few years, and until someone brings something better, it will catch on.
     
  4. Elfa-X

    Elfa-X Member Guru

    Messages:
    125
    Likes Received:
    6
    GPU:
    Aorus 1080 Ti EE
    I wish Futuremark would hire some talent. More than a decade in, and their graphics still look terrible and unoptimised.
     

  5. Lane

    Lane Ancient Guru

    Messages:
    6,361
    Likes Received:
    3
    GPU:
    2x HD7970 - EK Waterblock
    Some parts of the scene need to be packed into a BVH, and that differs from how current engines work (see the sketch at the end of this post).

    It is really hard to compare raytracing in 3D modeling software with DirectX Raytracing or RTX. First, you don't render complete frames with DXR and RTX, only shadows and reflections; it doesn't cover all light sources the way raytracing in 3D modeling software does (hello, architectural interior rendering). Then, there is no single way of doing raytracing; raytracing is a generic technical term. Each engine uses a different algorithm (path tracing, bidirectional), different samplers (Metropolis, Sobol in LuxCoreRender, etc.), and different light strategies. It is already really hard to compare two render APIs in CG software, as many things differ (Cycles, LuxCore, V-Ray, etc.).

    That said, I will wait to see how the RT cores work and whether they really speed up renders in the 3D software I use (Blender, Max, Maya, Substance, etc.). We also need to see how compatible the engines are with OptiX.
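
    For reference, here is a minimal C++ sketch (not any real engine's code) of what a BVH node looks like, together with the classic ray/box "slab" test that lets rays skip whole subtrees; all names are illustrative:

        // Minimal illustrative BVH node plus a ray/AABB slab test.
        #include <algorithm>
        #include <cstdint>

        struct AABB { float min[3], max[3]; };   // axis-aligned bounding box

        struct BVHNode {
            AABB    bounds;              // box enclosing everything below this node
            int32_t left, right;         // child indices, or -1 on a leaf
            int32_t firstTri, triCount;  // leaf payload: a range of triangles
        };

        // Slab test: does the ray hit the box within [tmin, tmax]?
        // invDir holds 1/direction per axis, precomputed by the caller.
        bool intersectAABB(const AABB& b, const float orig[3],
                           const float invDir[3], float tmin, float tmax)
        {
            for (int axis = 0; axis < 3; ++axis) {
                float t0 = (b.min[axis] - orig[axis]) * invDir[axis];
                float t1 = (b.max[axis] - orig[axis]) * invDir[axis];
                if (t0 > t1) std::swap(t0, t1);
                tmin = std::max(tmin, t0);
                tmax = std::min(tmax, t1);
                if (tmin > tmax) return false;  // slab intervals stopped overlapping: miss
            }
            return true;
        }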
     
    jura11 and chispy like this.
  6. ingeon

    ingeon Member

    Messages:
    17
    Likes Received:
    3
    GPU:
    GIGABYTE GV-N460OC-768I
    https://www.imgtec.com/blog/gdc-2016-ray-tracing-graphics-mobile/?cn-reloaded=1

    That was a rather interesting read after watching the video below.
    So is this new benchmark just marketing for their next benchmark add-on/standalone package? Is adding ray tracing really that complex?

    The article mentioned OpenGL ES extensions/Vulkan in Unity back in 2016...

     
    Last edited: Aug 29, 2018
    lucidus likes this.
  7. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,939
    Likes Received:
    2,292
    GPU:
    5700XT+AW@240Hz
    Yeah, I remember all their demos. They made GPUs directly capable of doing it, and they have been making low-power GPUs for quite some years.
    Adding raytracing via a hack into the current DX/Vulkan implementation is not that hard if you know how, but adding a standard code path to DX12 took some effort.
    Raytracing units can be added to a GPU much more cheaply than NVIDIA did it, both in transistor cost (paid by NVIDIA and its customers) and in power efficiency (paid by customers).
    NVIDIA just threw the tech it already had at the problem.

    I always hope the old dogs who fled the PC market may return one day.
     
  8. sverek

    sverek Ancient Guru

    Messages:
    5,594
    Likes Received:
    2,458
    GPU:
    NOVIDIA -0.5GB
    As long as people support and pay Nvidia for their exclusive implementations, things are not going to change any time soon.

    Once again we have to wait for AMD to come around and introduce an open raytracing method that works on any hardware.
    Until then, we pay a premium to Nvidia.

    To be honest, I am glad Nvidia is doing it. Someone has to start pushing new tech. Preordering RTX hardware may seem stupid, but it does invest in further development,
    just as people bought early Tesla cars.

    So when Jensen Huang mentioned talking to his employees in his Nvidia ray tracing presentation, "Come on guys, we have to make things look like things", it kind of made sense.

    With benchmarking software this is obviously limited to Nvidia, so we will likely see specific benchmarks aimed at Nvidia GPUs.
    Were there benchmarks for PhysX back in the old days?
     
    Last edited: Aug 30, 2018
  9. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,939
    Likes Received:
    2,292
    GPU:
    5700XT+AW@240Hz
    Nope, wrong from the bottom up. The benchmark here will be DX12 and will run on all hardware whose driver supports the required DX12 feature (see the sketch at the end of this post).
    The same goes for PhysX benchmarks like FluidMark: it benchmarked the technology on all supported hardware, which went beyond nVidia's. That's why you can still run it on your CPU today.

    A benchmark that could run on only those three GPUs from a single architecture should not be called a benchmark. It should be called a showcase.
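
    For illustration, a minimal C++ sketch (assuming the standard D3D12 headers; the function name SupportsDXR is made up for the example) of how an application can ask the driver for the DXR capability:

        // Query the driver for the DirectX Raytracing (DXR) tier.
        #include <d3d12.h>

        bool SupportsDXR(ID3D12Device* device)
        {
            D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
            if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                                   &opts5, sizeof(opts5))))
                return false;  // older runtime: the options struct is unknown
            // TIER_NOT_SUPPORTED means no DXR on this device/driver combo.
            return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
        }

    Any vendor whose driver reports the tier passes the check, which is what would make it a benchmark rather than a showcase.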
     
  10. sverek

    sverek Ancient Guru

    Messages:
    5,594
    Likes Received:
    2,458
    GPU:
    NOVIDIA -0.5GB
    But can raytracing be performed on a CPU, as PhysX can, or on a non-RTX GPU in the first place?
     

  11. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,939
    Likes Received:
    2,292
    GPU:
    5700XT+AW@240Hz
    Yes, but for it to be possible you need to hold the entire scene, and all the textures that sit in graphics memory, in RAM as well.
    nVidia is not really taking that many samples; the added units do not have much grunt, they do the basic work and then clean it up. The same would apply to a CPU, except you would need a 16C/32T CPU or bigger to have spare cycles left for all the other important work, plus all the memory bandwidth it would consume and the latency requirements.

    It would be doable, but not very fast, unless the code was so well optimized that the required data blocks were always in the CPU's caches in advance (see the toy sketch at the end of this post).

    The thing is that while the information delivered by raytracing is not that large per frame, producing it requires quite a lot of work and data movement.
    Theoretically, if the RTX 2080 (Ti) were just a regular GPU with NVLink, nVidia could have made a secondary accelerator with 8 GB of VRAM to hold the required data, optimized for quick access and high bandwidth for all those new raytracing tasks.
    The final information used to enhance each frame could then be delivered to the GPU, with no additional performance impact since each card could run in parallel.

    A raytracing card would be a funny thing, because instead of "rendering" at a certain resolution, it would be putting pixels into vector space (virtually unlimited resolution). The moment the driver says "Stop, filter the image, resize it to this, and send it to the GPU", it would finish quickly.
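
    As a toy illustration of raytracing on a CPU (everything here is made up for the example: one sphere, one primary ray per "pixel", printed as ASCII), something like this runs anywhere; making it fast, with threading, SIMD, and cache-friendly data, is the hard part:

        // Toy CPU raytracer: one ray-sphere test per character cell.
        #include <cmath>
        #include <cstdio>

        struct Vec { float x, y, z; };
        static Vec   sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static float dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // Distance along a normalized ray to a sphere, or -1 on a miss.
        static float hitSphere(Vec center, float r, Vec orig, Vec dir)
        {
            Vec oc = sub(orig, center);
            float b = dot(oc, dir);
            float disc = b*b - (dot(oc, oc) - r*r);
            return disc < 0 ? -1.0f : -b - std::sqrt(disc);
        }

        int main()
        {
            const int W = 48, H = 24;
            for (int y = 0; y < H; ++y) {
                for (int x = 0; x < W; ++x) {
                    // Camera at the origin looking down +Z, one ray per cell.
                    Vec d = {(x - W/2) / float(W), (y - H/2) / float(W), 1.0f};
                    float len = std::sqrt(dot(d, d));
                    d = {d.x/len, d.y/len, d.z/len};
                    std::putchar(hitSphere({0, 0, 3}, 1.0f, {0, 0, 0}, d) > 0 ? '#' : '.');
                }
                std::putchar('\n');
            }
            return 0;
        }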
     
    Last edited: Aug 30, 2018
    sverek likes this.
  12. sverek

    sverek Ancient Guru

    Messages:
    5,594
    Likes Received:
    2,458
    GPU:
    NOVIDIA -0.5GB
    I remember some people had a low-end Nvidia GPU in their PCs dedicated to PhysX calculations. Would it be something similar to that?
     
  13. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,939
    Likes Received:
    2,292
    GPU:
    5700XT+AW@240Hz
    It would be. The amount of data that has to be transferred between a dedicated PhysX card and the actual game engine is small. Similarly, if you have all the data cloned on a dedicated raytracing card, then the actual per-frame data that has to be transferred is small again.
     
