Micron Starts Volume Production of 1z Nanometer (13 to 10 nm) DRAM Process Node

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Aug 16, 2019.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    36,341
    Likes Received:
    5,365
    GPU:
    AMD | NVIDIA
  2. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,514
    Likes Received:
    1,395
    GPU:
    HIS R9 290
    With such advances in memory, I personally would really like to see nothing but SO-DIMMs for DDR5. We're reaching a point where these full-length DIMMs are unnecessary. They just needlessly take up more space on the motherboard. I'm actually a little surprised server motherboards haven't started to switch to ECC SO-DIMMs, now that we're getting up to 16 channels of memory on a single board.
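    For scale, a rough back-of-the-envelope sketch of what 16 channels buys you (DDR4-3200 and 64-bit channels are illustrative assumptions, not figures from the article):

    ```python
    # Aggregate bandwidth estimate for a multi-channel memory system.
    # Assumptions (illustrative only): DDR4-3200, 64-bit (8-byte) channels.
    transfers_per_sec = 3200e6    # DDR4-3200 = 3200 MT/s
    bytes_per_transfer = 8        # 64-bit channel width
    channels = 16

    per_channel = transfers_per_sec * bytes_per_transfer / 1e9
    print(f"Per channel:  {per_channel:.1f} GB/s")              # 25.6 GB/s
    print(f"16 channels: {per_channel * channels:.1f} GB/s")    # 409.6 GB/s
    ```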
     
    CPC_RedDawn likes this.
  3. craycray

    craycray Active Member

    Messages:
    82
    Likes Received:
    21
    GPU:
    1080ti SC2
    I think it's 16 Gb = 16 gigabits, not 16 gigabytes, @hh
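    For reference, the unit conversion (bits to bytes, divide by eight):

    ```python
    # Gb (gigabits) vs GB (gigabytes): divide bits by 8.
    die_gbit = 16                  # a 16 Gb DRAM die
    die_gbyte = die_gbit / 8
    print(f"{die_gbit} Gb = {die_gbyte:.0f} GB per die")   # 16 Gb = 2 GB
    ```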
     
  4. vdelvec

    vdelvec Member Guru

    Messages:
    151
    Likes Received:
    15
    GPU:
    Nvidia RTX TITAN
    Nah, I think it would be cooler if RAM modules tripled in length and height. More surface area for LEDs & RGB. Mwahahahahaha!!
     

  5. angelgraves13

    angelgraves13 Maha Guru

    Messages:
    1,467
    Likes Received:
    334
    GPU:
    RTX 2080 Ti FE
    I think it's time GPUs got a socket on the motherboard instead of a dedicated card taking up slots.

    It would be easier to cool, just like a CPU.
     
  6. Astyanax

    Astyanax Ancient Guru

    Messages:
    3,737
    Likes Received:
    1,014
    GPU:
    GTX 1080ti
    No point; you'd change motherboards every GPU generation, and no, it wouldn't be easier to cool.
     
  7. angelgraves13

    angelgraves13 Maha Guru

    Messages:
    1,467
    Likes Received:
    334
    GPU:
    RTX 2080 Ti FE
    But the socket would remain the same. Yes, it would be easier to cool: an AIO closed loop with a radiator.
     
  8. Astyanax

    Astyanax Ancient Guru

    Messages:
    3,737
    Likes Received:
    1,014
    GPU:
    GTX 1080ti
    That's some technical ignorance on your part.

    384-bit GPUs need more pins than 256-bit GPUs, for one.
    Then there's physical feature addition: Turing, for example, added extra pins for the USB-C port and the power input for it.
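    To make the pin-count point concrete, a toy sketch (it counts only the DRAM data lines; real packages add hundreds more pins for address/command, clocks, power, and ground):

    ```python
    # Toy illustration: data-pin count scales directly with bus width.
    # Only DQ (data) lines are counted; strobes, ECC, power, etc. are ignored.
    def dram_data_pins(bus_width_bits: int) -> int:
        return bus_width_bits  # one package ball per DQ line

    for width in (256, 384):
        print(f"{width}-bit bus: at least {dram_data_pins(width)} data pins")
    # A 384-bit design needs 50% more data pins than a 256-bit one,
    # before any of the other signal groups are counted.
    ```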
     
  9. angelgraves13

    angelgraves13 Maha Guru

    Messages:
    1,467
    Likes Received:
    334
    GPU:
    RTX 2080 Ti FE
    Are you really going to argue over the specs of a non-existent socket? They'd both access the same PCI Express CPU lanes.
     
  10. Astyanax

    Astyanax Ancient Guru

    Messages:
    3,737
    Likes Received:
    1,014
    GPU:
    GTX 1080ti
    Oversimplification is what someone who doesn't "know" does, right before pulling the "are you really going to argue X/Y" card.

    MXM is a thing, and even those modules are not compatible with anything but the notebooks they were designed for.
     

  11. icedman

    icedman Master Guru

    Messages:
    911
    Likes Received:
    66
    GPU:
    MSI Duke GTX 1080
    There are many reasons for not having GPU sockets; too many things change from generation to generation for that to be possible. Also, this doesn't make sense: what you just said amounts to keeping the PCIe slot, which is the current standard.
     
  12. SweenJM

    SweenJM Master Guru

    Messages:
    591
    Likes Received:
    301
    GPU:
    Sapphire 590 nitro
    I believe the ultimate goal is to consolidate as many features as possible into each chip, so as to have as few components as possible. We are seeing memory and storage move closer together all the time, as well as CPUs and GPUs. It was just a few years ago that onboard graphics were a thing; now virtually all integrated graphics live on the CPU. Traditional northbridges (the external memory controller) haven't been used for a while, and I believe the southbridge (storage/bus controller) will be on the CPU within the next two or three generations of hardware. Once the speed difference between DRAM and flash memory is negligible (or non-existent), they will begin to merge storage and memory, and we have already seen some effort to move memory onto the chip (HBM).
    Yeah, they are gonna have to do something like that at some point. It would make a lot of sense (until we get mem/storage AIO wondercubes).

    That is a hard thing to guarantee from one chip to the next, and from one manufacturer to the next, which is one of the reasons for the traditional expansion slot. It's not a bad idea, but it would require a total rework of existing standards (which will happen eventually anyway). I think we may see the resizing of memory modules as a standard (as schmidt suggested) before the total deletion of the expansion slot as we know it for add-in cards of any kind. Also, AIO closed-loop liquid coolers are only a good solution in a limited number of cooling circumstances... our mythical socketed GPU would have to be air-coolable for mainstream adoption. This is all just my opinion, of course.
     
    Last edited: Aug 17, 2019
  13. EspHack

    EspHack Ancient Guru

    Messages:
    2,439
    Likes Received:
    34
    GPU:
    ATI/HD5770/1GB
    That's like saying "thanks to smaller bezels, big-bezel 50" screens are now unnecessarily big." Sure, but now you can fit a 60" screen into that old big-bezel 50" footprint, and so it goes.

    So your phone now has 32GB of RAM? Great; now that you're the mainstream, I need to quadruple that to be worry-free.

    On the other hand, an SO-DIMM-only but full-ATX board could have 8 slots? I guess that way we all get what we want.
     
  14. CPC_RedDawn

    CPC_RedDawn Ancient Guru

    Messages:
    7,718
    Likes Received:
    207
    GPU:
    Zotac GTX1080Ti AMP
    You would run into massive memory bandwidth issues. This is why we have add-in boards: they have custom memory chips and memory channels, not to mention their own power delivery. All of this would have to move to the motherboard itself, which would take up too much room and basically get in the way of the other PCIe slots, the chipset, SATA ports, and sound chips.

    Adding another socket for a GPU would basically kill off ITX and probably even mATX boards.
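    The bandwidth gap is easy to quantify with the standard formula, bus width (bits) × per-pin data rate ÷ 8. The GTX 1080 Ti numbers below are its published specs; the DDR4 comparison assumes a typical dual-channel desktop:

    ```python
    # Memory bandwidth = bus width (bits) * per-pin rate (Gbps) / 8.
    def bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
        return bus_width_bits * pin_rate_gbps / 8

    # GTX 1080 Ti: 352-bit GDDR5X at 11 Gbps per pin.
    print(bandwidth_gb_s(352, 11.0))   # 484.0 GB/s

    # Dual-channel DDR4-3200: 128 bits at 3.2 Gbps per pin.
    print(bandwidth_gb_s(128, 3.2))    # 51.2 GB/s
    ```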
     
  15. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,514
    Likes Received:
    1,395
    GPU:
    HIS R9 290
    And? You say that like it's a bad thing. If you can fit more of what you bought into the same amount of physical space with no negative side effects, what exactly are you complaining about? If you can fit more pixels in the same footprint, why wouldn't you? If you can fit more memory in the same footprint, why wouldn't you? My point is that full-size DIMMs are physically larger than they need to be.
    Huh? Not sure what you're getting at here...
    Phones don't use DIMMs, and just because something becomes mainstream doesn't mean you have to worry about outpacing it; that's why it's called mainstream and not cutting-edge. Mainstream standards aren't meant to be obsoleted so easily. Besides, RAM usage for day-to-day applications hasn't changed much in years. The average user can still easily get by on 8GB, even with Chrome. If you aren't much of a multitasker, 4GB is still enough.
     

  16. angelgraves13

    angelgraves13 Maha Guru

    Messages:
    1,467
    Likes Received:
    334
    GPU:
    RTX 2080 Ti FE
    Well, for this to be possible, the memory would have to be stacked on the GPU. Also, we'd likely have to move to 64-pin ATX or whatever they'd decide to call it.
     
    CPC_RedDawn likes this.
  17. SweenJM

    SweenJM Master Guru

    Messages:
    591
    Likes Received:
    301
    GPU:
    Sapphire 590 nitro
    What would be involved in the change from DIMM to SO-DIMM for non-laptops? Obviously the slots themselves, and the wiring for them (I guess the pin count from DDR4 to DDR5 isn't going to change for SO-DIMM or DIMM)... but are there any other considerations for making DDR5 all SO-DIMM? Any performance considerations, or challenges in manufacturing the motherboards? It would seem a logical step, and yet they haven't done it... I wonder why.
     
    Last edited: Aug 18, 2019
  18. nevcairiel

    nevcairiel Master Guru

    Messages:
    614
    Likes Received:
    199
    GPU:
    MSI 1080 Gaming X
    SO-DIMM mounting is limiting. They are typically mounted sideways in laptops due to height constraints, which means that if you have two slots, they overlap each other. This layout is very prohibitive and would probably be impractical for more than two SO-DIMM slots.
    The sideways layout is also not ideal for heat dissipation on higher-end modules, as airflow would be rather restricted. And mounting the same modules standing up would be problematic due to height constraints with CPU coolers, since SO-DIMM modules make up for their reduced length with extra height.

    Standard DIMM slots suit desktop and server systems quite well: board width is not a problem, they scale easily in number of slots, and they keep a low height profile so as not to obstruct other components.
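    For a sense of scale, the commonly cited DDR4 module dimensions (treat the heights as typical rather than fixed; low-profile and tall heat-spreader modules both exist):

    ```python
    # Approximate DDR4 module dimensions in millimetres.
    modules = {
        "DDR4 DIMM":    {"length": 133.35, "typical_height": 31.25},
        "DDR4 SO-DIMM": {"length": 69.6,   "typical_height": 30.0},
    }
    dimm = modules["DDR4 DIMM"]["length"]
    sodimm = modules["DDR4 SO-DIMM"]["length"]
    print(f"SO-DIMM is ~{100 * (1 - sodimm / dimm):.0f}% shorter")   # ~48%
    ```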
     
  19. SweenJM

    SweenJM Master Guru

    Messages:
    591
    Likes Received:
    301
    GPU:
    Sapphire 590 nitro
    But they do have vertical SO-DIMM slots, which work basically the same as regular DIMM slots. As for the height problem... that is really only a problem for the enthusiast crowd. Stock or OEM cooling solutions won't have the same issue... and that is what matters to the industry at large.
     
  20. Exodite

    Exodite Ancient Guru

    Messages:
    1,913
    Likes Received:
    171
    GPU:
    Sapphire Vega 56
    I've been thinking about similar issues for a while, and frankly I feel the solution is the opposite way around, i.e. mounting the CPU and memory slots on a daughterboard.

    Coming from the Amiga, that was the solution that allowed relatively static hardware to accept ever-newer upgrades. Today, in comparison, the modular PC we all love has been hampered by developments in processor and memory standards, while the base interfacing technology has been standardized and stagnant. The GPU is actually using a better solution: it allows any physical design of the chips and keeps local memory, connected to the system through a standardized interface, rather than suffering from staggered generational upgrades (CPU sockets, memory interfaces).

    If we assume that the current trend of tying memory ever closer to the processing cores to maximize efficiency continues, whether via HBM, on-package, or on-die solutions, it would only get easier to implement on a daughterboard.

    *shrug* It would to a large extent murder the current business model of motherboard makers and chipmakers alike, as well as requiring a genuine generational shift away from ATX, so I can see why it wouldn't happen; but in my mind it's the CPU attachment mechanism of current motherboards that's flawed.
     
