Samsung Develops Industry's First High Bandwidth Memory with AI Processing Power

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Feb 17, 2021.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

  2. Fox2232

    Fox2232 Guest

    I wonder how that's secured. And I wonder how many more transistors the AI functions use compared to the memory cells.
    (How much more expensive is this per GB? What capacities are available?)

    In applications where there is a use for it, I am sure it will pay for itself. But the question is what kind of AI operations it can do, and how those translate to operations done by the GPU.
    There I expect it will be largely irrelevant unless scale becomes more important than complexity and latency.
     
  3. nosirrahx

    nosirrahx Master Guru

    At the very least, something like this could be used for AI upscaling, or maybe even for generating entire intermediate frames with AI. Having a chunk of very fast memory designed specifically for AI integrated into GPUs that already handle AI tasks seems like a natural progression. I can also see AI tricks being used to improve the look of older games. Imagine, for example, an older game that is locked to 60 FPS. Perhaps we will start seeing AI create intermediate frames and raise that to a fluid 120 FPS without a single modification to the actual game.
     
  4. waltc3

    waltc3 Maha Guru

    Wake me when it ships...;)
     

  5. Fox2232

    Fox2232 Guest

    The problem is energy. In-memory computing is so interesting because it saves a lot of energy: data does not have to be moved into the CPU/GPU for compatible tasks.

    But when only part of the math can be done in memory, the data still has to be moved into the GPU for the rest, and that completely defeats the purpose of in-memory math.
    It is the same as doing the primary math in the GPU (which means the data was already read into the GPU), then moving the results to memory for further processing and back to the GPU again.
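    As a crude back-of-the-envelope for that round-trip penalty, here is a tiny Python sketch. Every constant in it is a rough, order-of-magnitude assumption of mine (in the spirit of the usual "a DRAM access costs ~100x an arithmetic op" rule of thumb), not a measured HBM-PIM figure:

    ```python
    # Assumed, order-of-magnitude energy figures -- illustrative only, not vendor data.
    PJ_PER_MAC        = 2.0    # one FP16 multiply-accumulate in logic
    PJ_PER_BYTE_MOVED = 40.0   # moving one byte between DRAM and the GPU die

    def energy_pj(n_values, macs_per_value, fraction_in_memory):
        """Energy for n_values FP16 elements, each needing macs_per_value MACs,
        when `fraction_in_memory` of the math stays inside the memory and the
        rest has to round-trip to the GPU and back."""
        compute = n_values * macs_per_value * PJ_PER_MAC
        # FP16 = 2 bytes; values that cannot be finished in memory travel there and back.
        moved_bytes = n_values * (1.0 - fraction_in_memory) * 2 * 2
        return compute + moved_bytes * PJ_PER_BYTE_MOVED

    n = 1_000_000
    print(energy_pj(n, 4, 1.0))   # everything finishes in memory
    print(energy_pj(n, 4, 0.5))   # half the math still needs the GPU: movement dominates
    ```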

    In the case of frame interpolation to double the frame rate, what is needed is detecting motion vectors, identifying the blocks of data to move, and moving them appropriately. (Basically doing the work of a video encoder/decoder.)
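    For a concrete picture of that motion-vector search, here is a minimal full-search block-matching sketch in Python/NumPy. The block size, search range and SAD cost are arbitrary illustrative choices, nothing a real encoder or Samsung's hardware prescribes:

    ```python
    import numpy as np

    def estimate_motion(prev, curr, block=16, search=8):
        """Full-search block matching: for each block of `curr`, find the offset
        into `prev` (within +/- `search` pixels) with the lowest sum of absolute
        differences (SAD). Returns one (dy, dx) vector per block. Grayscale frames."""
        h, w = curr.shape
        prev = prev.astype(np.int32)   # avoid uint8 wrap-around in the differences
        curr = curr.astype(np.int32)
        vecs = np.zeros((h // block, w // block, 2), dtype=np.int32)
        for by in range(h // block):
            for bx in range(w // block):
                y, x = by * block, bx * block
                target = curr[y:y + block, x:x + block]
                best, best_v = None, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        sy, sx = y + dy, x + dx
                        if sy < 0 or sx < 0 or sy + block > h or sx + block > w:
                            continue
                        sad = np.abs(target - prev[sy:sy + block, sx:sx + block]).sum()
                        if best is None or sad < best:
                            best, best_v = sad, (dy, dx)
                vecs[by, bx] = best_v
        return vecs
    ```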

    If you could do that in memory, you would have the perfect product for streaming services. But wouldn't that be the same as keeping the last few frames in the GPU's cache (like Infinity Cache) and letting the specialized video encoding/decoding hardware do the job without ever moving anything to video memory?

    And many years ago, when people started talking about frame-rate doubling in TVs, I was really surprised that we did not get this as a poor man's feature in the GPU, because in contrast to a video stream, the GPU has every frame available in a lossless state.
    There are media players which do it the easy way: they take the motion vector data, halve it for a given frame, and just put the resulting motion in between. Analyzing motion vectors from multiple upcoming and past frames could even be used to build motion splines instead of mere vectors, which would be more accurate and could deliver even more in-between frames accurately.
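    That "easy way", as a minimal sketch reusing the hypothetical estimate_motion() from the earlier snippet: each block is placed halfway along its vector, with no occlusion handling and a plain blend as fallback, purely to illustrate the idea:

    ```python
    import numpy as np

    def midpoint_frame(prev, curr, vecs, block=16):
        """Synthesize a frame halfway between `prev` and `curr` using per-block
        motion vectors from estimate_motion(). Naive: holes fall back to a blend."""
        h, w = curr.shape
        # Fallback for pixels no block lands on: a plain 50/50 blend of the two frames.
        mid = (prev.astype(np.float32) + curr.astype(np.float32)) / 2
        for by in range(vecs.shape[0]):
            for bx in range(vecs.shape[1]):
                y, x = by * block, bx * block
                dy, dx = vecs[by, bx]
                # The block sat at (y+dy, x+dx) in prev and at (y, x) in curr;
                # place its content halfway between the two positions.
                hy, hx = y + dy // 2, x + dx // 2
                if 0 <= hy and 0 <= hx and hy + block <= h and hx + block <= w:
                    mid[hy:hy + block, hx:hx + block] = curr[y:y + block, x:x + block]
        return mid.astype(curr.dtype)
    ```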

    Now, while I do not like the idea of fake frames, I am much less against doubling the frame rate from 60 to 120 than I am against the various upscaling methods.
    (Especially AMD's lazy-man's approach, delivered by purchasing HiAlgo and half-assedly butchering their Boost method in the worst possible way. At least they delivered the Chill feature in an acceptable way.)
    And I hope the incoming update AMD is making for its Boost feature really changes the most stupid approach you can think of.
     
