
Nvidia shows signs ...

Discussion in 'Videocards - NVIDIA GeForce' started by pharma, Sep 17, 2018.

  1. pharma

    Summit, the World’s Fastest Supercomputer, Triples Its Performance Record

    DOING THE MATH: THE REALITY OF HPC AND AI CONVERGENCE
    June 17, 2019

    There is a more direct approach to converging HPC and AI, and that is to retrofit some of the matrix math libraries that are commonly used in HPC simulations so they can take advantage of dot product engines such as the Tensor Core units at the heart of the “Volta” Tesla GPU accelerators, which in turn power so-called AI supercomputers such as the “Summit” system at Oak Ridge National Laboratory.

    As it turns out, a team of researchers at the University of Tennessee, Oak Ridge, and the University of Manchester, led by Jack Dongarra, one of the creators of the Linpack and HPL benchmarks that are used to gauge the raw performance of supercomputers, has come up with a mixed precision iterative refinement solver that can make use of the Tensor Core units inside the Volta and get raw HPC matrix math calculations like those at the heart of Linpack done more quickly than if they used the 64-bit math units on the Volta.

    The underlying math behind this iterative refinement approach, now applied to the Tensor Core units, is itself not new; in fact, it dates from the 1940s, according to Dongarra.
    ...
    The good news is that a new and improved iterative refinement technique is working pretty well: it pushes the bulk of the math to the 4×4, 16-bit floating point Tensor Core engines, does a little 32-bit accumulation and a tiny bit of 64-bit math on top of that, and produces a result equivalent to one computed using only the 64-bit math units on the Volta GPU accelerator – but in a much shorter time.
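    For anyone curious how that works in practice, here is a minimal NumPy sketch of iterative refinement – solve in low precision, then correct the answer with a high-precision residual. This is only an illustration of the general idea, not Nvidia's solver; the real thing runs the low-precision work as FP16 GEMMs on Tensor Cores with FP32 accumulation, and factors the matrix once rather than re-solving it each pass.

    ```python
    import numpy as np

    def mixed_precision_solve(A, b, tol=1e-10, max_iters=50):
        """Toy mixed-precision iterative refinement (illustrative only)."""
        # "Low precision" stage: FP32 stands in here for the FP16/FP32 Tensor Core work.
        # A production solver would compute an LU factorization once and reuse it.
        A_lo = A.astype(np.float32)
        x = np.linalg.solve(A_lo, b.astype(np.float32)).astype(np.float64)

        for _ in range(max_iters):
            r = b - A @ x                      # residual in FP64 -- the tiny bit of 64-bit math
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            d = np.linalg.solve(A_lo, r.astype(np.float32)).astype(np.float64)
            x += d                             # low-precision correction, accumulated in FP64
        return x

    # Quick check on a well-conditioned random system.
    rng = np.random.default_rng(0)
    n = 512
    A = rng.standard_normal((n, n)) + n * np.eye(n)
    b = rng.standard_normal(n)
    x = mixed_precision_solve(A, b)
    print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # at or below 1e-10, i.e. FP64-quality answer
    ```

    The point of the design is that the expensive O(n³) work happens in the cheap, fast precision, while the O(n²) residual corrections in FP64 pull the answer back to full accuracy.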

    To put the iterative refinement solver to the test, techies at Nvidia worked with the team from Oak Ridge, the University of Tennessee, and the University of Manchester to port the HPL implementation of the Linpack benchmark, which is a 64-bit dense matrix calculation that is used by the Top500, to the new solver – creating what they are tentatively calling HPL-AI – and ran it both ways on the Summit supercomputer. The results were astoundingly good.

    Running regular HPL on the full Summit worked out to 148.8 petaflops of aggregate compute, while running the HPL-AI variant with the mixed precision iterative refinement solver worked out to an aggregate of 445 petaflops.

    And to be super-precise, about 92 percent of the calculation time in the HPL-AI run was spent in the general matrix multiply (GEMM) library running in FP16 mode, with a little more than 7 percent of wall time being in the accumulate unit of the Tensor Core in FP32 mode and a little less than 1 percent stressing the 64-bit math units on Volta.
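    Back-of-the-envelope, those numbers work out like this (my arithmetic, using only the figures quoted above):

    ```python
    # Figures quoted above for the Summit runs -- just checking the ratio.
    hpl_fp64 = 148.8   # petaflops, regular FP64 HPL
    hpl_ai = 445.0     # petaflops, mixed-precision HPL-AI
    print(f"HPL-AI speedup: {hpl_ai / hpl_fp64:.2f}x")          # ~2.99x

    # Approximate wall-time split reported for the HPL-AI run.
    fp16_gemm, fp32_accum, fp64 = 0.92, 0.07, 0.01
    print(f"share of time on Tensor Cores: {fp16_gemm + fp32_accum:.0%}")  # ~99%
    ```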

    Now, the trick is to apply this iterative refinement solver to real HPC applications, and Nvidia is going to make it available in the CUDA-X software stack so this can be done. Hopefully more and more work can be moved to mixed precision to take full advantage of those Tensor Core units. It’s not quite free performance – customers are definitely paying for those Tensor Cores on the Volta chips – but it will feel like it is free, and that means Nvidia is going to have an advantage in the HPC market unless and until both Intel and AMD add something like Tensor Cores to their future GPU accelerators.


    https://www.nextplatform.com/2019/06/17/doing-the-math-the-reality-of-hpc-and-ai-convergence/






     
  2. pharma

    Monster Hunter DLSS implementation arrives on July 17.
    July 13, 2019

    https://www.techspot.com/news/80937-nvidia-claims-50-percent-framerate-uplift-monster-hunter.html
     
    Last edited: Jul 20, 2019
  3. pharma

    NVIDIA GeForce RTX 2060 Super Review
    July 16, 2019



    https://www.servethehome.com/nvidia-geforce-rtx-2060-super-review/5/
     
  4. pharma

    JULY 11, 2019
    Color me impressed! ... The guy is obviously talented, and I guess if you have the right workshop anything is possible.
     
    Last edited: Jul 21, 2019
    endbase likes this.

  5. pharma

    Luxion Will Support NVIDIA’s RTX Ray Tracing And Denoising Acceleration In KeyShot 9
    July 29, 2019

    https://techgage.com/news/luxion-keyshot-9-nvidia-rtx-support/
     
  6. pharma

    Bright Memory, the Action Game Made by a Single Developer, Is Getting NVIDIA RTX Ray Tracing Soon



    https://wccftech.com/bright-memory-...loper-is-getting-nvidia-rtx-ray-tracing-soon/
     
  7. pharma

    F1 2019 Patch 1.07 adds support for NVIDIA DLSS and AMD FidelityFX
    https://www.dsogaming.com/news/f1-2019-patch-1-07-adds-support-for-nvidia-dlss-and-amd-fidelityfx/
     
  8. pharma

  9. pharma

    Yea, apparently Codemasters fucked up the F1 2019 DLSS portion.
    https://forums.codemasters.com/topic/41370-patch-notes-for-107-–-release-info-update-0608-ps4-and-xbox-one-now-live/?tab=comments#comment-451646
     

  10. pharma

  11. Stormyandcold

    Companies are always looking for ways to sell/promote their products/software/games etc.

    If your competitor supports RTX, then you're going to be asking your team why your company's latest products don't.

    Look at COD and Doom Eternal...
     
    Maddness and pharma like this.
  12. sykozis

    RTX is a proprietary "tech"....so no competing product can support it. AMD and Intel will be making use of DXR for Ray-Tracing. AMD's current gen already supports DXR. Intel's first gen consumer product will also support DXR.
     
  13. pharma

  14. yasamoka

    RTX uses DXR for DX12 games. Battlefield V options refer to their implementation as DXR.
     

  15. Stormyandcold

    From the early RTX patch notes for Battlefield V, they suggested that they were waiting for others (AMD) to also implement DXR for other products, which hasn't happened yet.

    As we can see from the current situation, DXR may be supported by AMD (and by future Intel GPUs), but AMD support hasn't actually been implemented in any commercial games yet. Whereas AMD are adamant that they won't support DXR until they can offer it across the full range of their products.

    So, of course, the only logical thing to do is implement RTX, as it's currently the only viable solution. While RTX card adoption is still low, marketing-wise your RTX-supported products are still exposed to the 70% of PC market mind-share that Nvidia holds.

    From Nvidia, the master-stroke was allowing DXR to run on Pascal (so users could see how badly it runs on last-gen cards). Very powerful psychological marketing going on here.

    Meanwhile, we can see the mind-set in youtube-land going from "We don't need DXR" to "Why don't AMD's latest cards support DXR?".

    Consumers are obviously paying attention to these "early rounds" of RT implementation, and Nvidia looks like the only player in town right now. What I'm seeing is a lot of interest in RT/RTX, but the products are a few steps away from being mainstream due to cost.

    However, we will see Nvidia in full-savage-mode when they're able to offer RTX to the mainstream with playable performance, backed up by a catalogue of AA/AAA titles to help justify upgrading.
     
  16. pharma

    At this stage there is Microsoft's DXR fallback layer that can be enabled at the driver level, though I'm not sure why AMD has not enabled it.
    Emulation of ray tracing through the API on unsupported GPUs would actually be viewed as a positive.
     
  17. Stormyandcold

    Nvidia can afford to do it for Pascal as they have the RTX line to offer. AMD has no such products to offer, and allowing DXR to run on the current line-up may put AMD cards in a bad light, especially the current RX 5700/5800 range.

    I would hazard a guess that the latest AMD cards would run DXR very similarly to Nvidia's Pascal, rendering it pointless to even open this can of worms, thereby avoiding direct comparisons to RTX.
     
  18. pharma

    August 13, 2019
    Grimmstar is an action-packed space fighter/simulator with Action RPG and Fleet Management twists. NVIDIA RTX lighting provides incredibly stunning and vibrant ray-traced visuals as you are pursued by an overwhelming force through multiple open-world star systems, fighting for humanity's survival.

    Command the last fleet of mankind while you defend the stranded remnants of the human race with your fully modular fighter, growing your forces along the way with the hope of eventually taking down the planet-devouring behemoth, the Grimmstar.
     
    Last edited: Aug 14, 2019
  19. pharma

    Turning Up the Lights: Interactive Path Tracing Scenes from a Short Film
    August 13, 2019
    One problem: real-time renderers can’t afford to trace nearly as many rays as offline film-quality renderers, and path tracing typically requires many rays per pixel. RTX games shoot a handful of rays per pixel in the milliseconds available per frame; movies take minutes or hours to shoot thousands of rays. With low ray counts, path tracing gives characteristically grainy images. Essentially, each pixel has a different slice of information about the scene’s lighting. Ray traced games use sophisticated denoisers to remove this graininess, but path tracing complex dynamic content with a small number of rays presents additional denoising challenges.
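    As an aside, the graininess they mention falls straight out of Monte Carlo statistics. Here's a tiny toy sketch (mine, not from the article) showing how per-pixel noise shrinks only with the square root of the sample count, which is why a game's handful of rays per pixel looks speckled while a film's thousands of rays converge:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def pixel_estimate(spp):
        # Stand-in for "trace spp paths through one pixel and average their radiance":
        # random contributions with the right mean (1.0) but high per-sample variance.
        return rng.exponential(scale=1.0, size=spp).mean()

    for spp in (1, 4, 16, 4096):
        estimates = np.array([pixel_estimate(spp) for _ in range(2000)])
        print(f"{spp:5d} rays/pixel: mean {estimates.mean():.3f}, noise (std) {estimates.std():.3f}")
    # Noise only shrinks as 1/sqrt(rays): 1-16 rays (a game budget) is visibly grainy,
    # while thousands of rays (a film budget) converge -- hence the need for denoisers.
    ```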
    ...
    With path tracing, rays need to find their way to lights in the scene in order to model their illumination. It’s hard to choose the right rays for lighting in anything but the simplest scenes. Choosing the right rays is really hard in a scene with thousands of light sources. It’s really really hard in a scene with thousands of moving light sources.
    ...
    Therefore, the researchers decided to tackle a scene with thousands of moving light sources.

    They set out to render scenes from a short film that had previously only ever been rendered using offline renderers. The short film, Zero Day, shown above, was created by artist Mike Winkelmann. It holds many challenges for real-time physically-based rendering. The scenes in Zero Day are lit by 7,203 – 10,361 moving emissive triangles; there is a lot of fast-moving geometry (so the lighting changes a lot from frame to frame, which makes it hard to reuse information from previous frames); and there is a wide variety of material types.

    The combination of shadows and reflections from 1,000s of fast-moving lights with shiny materials exceeded the capabilities of current real-time denoising algorithms. The team dug in and reinvented ray sampling algorithms and deep-learning image denoisers. Many ideas were tried; some worked, and some did not. In the end, they made breakthroughs in both ray sampling and denoising.
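    To give a rough feel for the ray sampling side of that (my own toy sketch, not the researchers' algorithm): the classic trick is to pick which light to sample with probability proportional to an estimate of its contribution, then divide by that probability so the estimate stays unbiased. With thousands of emissive triangles this matters enormously:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical scene: thousands of emissive triangles lighting one shading point.
    n_lights = 7203                               # same order of magnitude as Zero Day
    power = rng.pareto(2.0, n_lights) + 0.1       # a few bright lights, many dim ones
    dist2 = rng.uniform(1.0, 100.0, n_lights)     # squared distances to the shading point
    contrib = power / dist2                       # each light's true contribution
    total = contrib.sum()                         # the direct lighting we want to estimate

    def estimate(pdf, n_samples=8):
        # Pick n_samples lights according to pdf and divide by the pick probability,
        # so the estimator stays unbiased: E[contrib[i] / pdf[i]] = sum(contrib).
        idx = rng.choice(n_lights, size=n_samples, p=pdf)
        return np.mean(contrib[idx] / pdf[idx])

    uniform_pdf = np.full(n_lights, 1.0 / n_lights)
    importance_pdf = contrib / total              # idealized: exactly proportional to contribution

    uni = [estimate(uniform_pdf) for _ in range(2000)]
    imp = [estimate(importance_pdf) for _ in range(2000)]
    print(f"true value : {total:.1f}")
    print(f"uniform    : mean {np.mean(uni):.1f}, noise (std) {np.std(uni):.1f}")
    print(f"importance : mean {np.mean(imp):.1f}, noise (std) {np.std(imp):.1f}")  # noise ~0 here
    # Real renderers can't know contrib exactly up front (occlusion, materials, motion),
    # so practical schemes rely on cheap approximations -- choosing good ones for
    # thousands of *moving* lights is exactly the hard part described above.
    ```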



    The Measure 1 scene, rendered here with direct lighting (soft shadows) from 7,203 dynamic emissive triangles using 9 rays per pixel. The researchers render this at 20 frames per second using a new light importance sampling algorithm and prototype deep learning denoiser on Turing RTX 2080 Ti.



    The Measure 1 scene, path tracing direct and one bounce of indirect lighting from 7,203 moving emissive triangles using 4 paths per pixel (17 rays per pixel), denoised with a prototype deep learning denoiser. The video appears brighter than the one using only direct lighting due to the reflected light illuminating surfaces that would otherwise be in shadow.
    https://news.developer.nvidia.com/turning-up-the-lights-interactive-path-tracing-scenes-from-a-short-film/


     
    Last edited: Aug 14, 2019
