AMD Ryzen 4000 Pro 4350G, 4650G and Ryzen 7 4750G APUs Pop Up at Distributor

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 8, 2020.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,398
    Likes Received:
    18,573
    GPU:
    AMD | NVIDIA
  2. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,975
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    Great to see they've gone up to 8c/16t, though I'm most curious about the GPU specs. If they stick with Vega graphics, they're going to have to do something about the memory bandwidth. The larger L3 might help a little bit but I don't think that's enough.
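    As a rough illustration of the bandwidth gap being described here, a back-of-the-envelope sketch (the figures are standard theoretical peaks, not measurements, and the RX 480-class card is just an example point of comparison):

    ```python
    # Peak theoretical bandwidth: transfers/s x bytes per transfer x channels.
    def ddr4_bandwidth_gbs(mt_per_s, channels, bus_width_bits=64):
        return mt_per_s * 1e6 * (bus_width_bits / 8) * channels / 1e9

    # Dual-channel DDR4-3200, shared between the CPU cores and the iGPU:
    print(ddr4_bandwidth_gbs(3200, 2))      # ~51.2 GB/s

    # A midrange dGPU for comparison (256-bit GDDR5 at 8 GT/s, RX 480 class):
    print(8000 * 1e6 * (256 / 8) / 1e9)     # ~256 GB/s, all to itself
    ```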
     
  3. Kaill

    Kaill Member Guru

    Messages:
    121
    Likes Received:
    15
    GPU:
    EVGA GTX 1080 FTW
    So with these coming out, how will this affect the 4200G (3200G) and 4400G (3400G) variants? Are they going to be boosted up to Ryzen 9 versions, since those usually have a higher voltage?
     
  4. Truder

    Truder Ancient Guru

    Messages:
    2,392
    Likes Received:
    1,426
    GPU:
    RX 6700XT Nitro+
    With it being Zen 2, you'll be able to use faster RAM too, so that should also help, unlike Zen/Zen+, which had difficulties with faster memory.
     

  5. user1

    user1 Ancient Guru

    Messages:
    2,748
    Likes Received:
    1,279
    GPU:
    Mi25/IGP
    Will be very curious to see how memory overclocking affects iGPU performance on consumer variants of these APUs.
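    A naive way to frame that curiosity, assuming the iGPU were purely bandwidth-bound (real scaling is usually flatter than this linear projection, and the baseline FPS here is made up):

    ```python
    # Linear projection of frame rate with memory clock: an upper bound
    # that only holds if the iGPU is 100% memory-bandwidth-limited.
    def projected_fps(base_fps, base_mts, oc_mts):
        return base_fps * (oc_mts / base_mts)

    print(projected_fps(45.0, 3200, 4000))   # ~56 FPS at DDR4-4000, best case
    ```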
     
  6. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,975
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    Yes, that will definitely be a major help, but it's still unlikely to be enough. These Zen 2 models do have a substantial increase in L3, but again, that alone isn't going to be enough.
     
  7. bobblunderton

    bobblunderton Master Guru

    Messages:
    420
    Likes Received:
    199
    GPU:
    EVGA 2070 Super 8gb
    Hooray, finally some good new processors to make a few new email computers for around the house.
    The new chips* only have 12MB of L3 cache at most, unlike the 3xxx-series Matisse chips. The L3 they do have, though, is available at lower latency across the 8 cores / 16 threads, since there's no 'hop' past the divider between L3 slices like on 6-core-and-up Matisse designs, where the 16MB of L3 was allotted in 4-core groups.
    So if you have software that uses 8 real cores, it will run faster: it processes quicker and syncs faster with other threads of the same app, because there's less latency and less time wasted by a core while the data it needs (or is done with) hops around the chip. This will also be the case for 4xxx-series non-APU designs, though apps using more than 8 cores will still hit some latency penalty due to the core layout.
    Conversely, as you go higher up Intel's stack you have basically the same type of interconnect (a mesh in place of the single and dual ring-bus designs), so it's not a big deal; it's just something one must deal with when running mega-core-count chips beyond 8 cores.
    Mesh designs are advantageous if you don't have slow memory and you actually USE 8 or more threads, as that much traffic would saturate a single ring bus, and eventually a dual ring bus past 8 cores, if they are very busy.
    Even in day-to-day compute I could feel a big difference, partly due to the copious amount of L3 cache, in my 3700X over my old 4790K (which was unpatched), and now in this 3950X I put in a week ago. Content creation, such as building stuff with World Machine, sees a huge difference from having lots of cores.
    12MB of lower-latency cache vs 32MB of higher-latency cache will do just fine; it's a pretty even trade-off and generally works out for the better, unless your app is cache-starved and using a heck of a lot of cores.
    To AMD's credit: the on-chip Matisse L3 cache, even with the latency complaints going around, is usually consistently faster than the Intel chips I've tested here, so the latency cries are pretty much answered with Matisse, and even better on 4xxx chips, APUs or not. They still have much room for improvement in future generations, though, and I don't expect AMD to sit on its laurels anytime soon.
    *XT models still have the same amount of cache as other Matisse models and are just slightly better-binned chips. This is listed for clarification purposes and is separate from the G-series APU chips.
    Three cheers for competition in the CPU market before we all get too old to build PCs!
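    The latency-vs-size trade-off described above can be sketched with a classic pointer-chase microbenchmark. This Python version is only a shape-of-the-curve demo: interpreter overhead dominates the absolute numbers, and Python's object model inflates the real memory footprint well beyond n * 8 bytes, so the cache-level steps land at smaller element counts than the raw math suggests.

    ```python
    import random, time

    def make_cycle(n):
        """Build a random single-cycle permutation so every hop is a dependent load."""
        order = list(range(n))
        random.shuffle(order)
        perm = [0] * n
        for i in range(n - 1):
            perm[order[i]] = order[i + 1]
        perm[order[-1]] = order[0]
        return perm

    def ns_per_hop(n, steps=1_000_000):
        perm, idx = make_cycle(n), 0
        t0 = time.perf_counter()
        for _ in range(steps):
            idx = perm[idx]          # each step depends on the previous load
        return (time.perf_counter() - t0) / steps * 1e9

    # Working sets chosen to land inside, near, and far beyond typical L3 sizes.
    for n in (1 << 12, 1 << 17, 1 << 21, 1 << 24):
        print(f"{n:>9} elements: {ns_per_hop(n):6.1f} ns/hop")
    ```

    On a Zen 2 part you'd expect the per-hop time to step up as the working set spills out of each cache level and finally into DRAM, which is the latency cliff being discussed.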
     
  8. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,975
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    @bobblunderton
    Don't get me wrong, for CPUs, the bigger L3 makes a noticeable difference. But we're talking about an APU here. That L3 cache is nowhere near enough to feed a GPU. It will make a difference but the GPU is still going to be bottlenecked by memory bandwidth.

    I hope AMD adds more memory channels for AM5 because I feel the jump to DDR5 isn't going to be sufficient.
     
  9. bobblunderton

    bobblunderton Master Guru

    Messages:
    420
    Likes Received:
    199
    GPU:
    EVGA 2070 Super 8gb
    Normally I would say there was no chance of AMD adding more memory channels to the AM5 platform; however, with the TRX80 chipset coming out with 8-channel memory, we just may get lucky. I still highly doubt it, though.
    You'd want more memory on the CPU package to feed the graphics hardware inside it, and that's really only possible with HBM-style setups currently; otherwise you end up with a whole product stack packed too close together to make sense, full of mostly-salvaged parts. The 128MB eDRAM (L4) cache on Broadwell caused rather serious yield issues, from what I remember hearing from those in manufacturing. The bigger you make a cache, the higher the chance of a defect and hence the higher the cost; they'd likely price themselves out of the very market they target. It wouldn't surprise me if these chips were made with 16MB of L3 with a small amount (4MB) set aside as defect padding, to make sure more chips meet criteria for market.
    Yes, 100% agreed it's going to be bandwidth-starved when going to memory, as these have almost always been. That is what sells dGPUs, though. That being said, I'd absolutely love having 2070 Super-class performance (that's the GPU in here) on my 3950X; I'd be absolutely thrilled with it. Can't have everything we want, though.
     
  10. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,975
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    Same, but even just a 3rd memory channel would make all the difference. Any more than 4 would be pointless, because one of the most appealing things about an APU is compactness, and you're not going to easily fit 4 channels of memory on an ITX motherboard, for example, without making serious sacrifices. Even SO-DIMMs might be tricky to fit.
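    Reusing the same illustrative math as the earlier sketch for the channel count (theoretical peaks only):

    ```python
    # Peak DDR4-3200 bandwidth by channel count: 3200 MT/s x 8 bytes x channels.
    for ch in (2, 3, 4):
        print(f"{ch} channels: {3200e6 * 8 * ch / 1e9:.1f} GB/s")
    # 2 -> 51.2, 3 -> 76.8, 4 -> 102.4 GB/s
    ```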
    I agree with all of that, though it is worth pointing out that the performance improvements on Broadwell were tremendous. Really, the point I'm trying to drive home here is that AMD can't make progress until they address the bandwidth issue. Personally, I think the most realistic option is an optional software tool that can heavily compress game asset data. Depending on the game, the decompression overhead would be smaller than the memory bandwidth bottleneck. It seems that no matter how much you overclock the RAM, the frame rate goes up proportionately, suggesting the GPU is just sitting there doing nothing most of the time.
    Well, yeah, but what I'm getting at here is that not even their worst iGPUs have enough bandwidth. We're nowhere close to having 2070 Super performance in an iGPU when we're not even getting full Vega 8 performance out of an APU that comes with a Vega 8. And that's the problem I'm trying to address here: we're not going to see more progress in APUs until memory bandwidth isn't such a serious problem.
     

  11. bobblunderton

    bobblunderton Master Guru

    Messages:
    420
    Likes Received:
    199
    GPU:
    EVGA 2070 Super 8gb
    Game assets are normally compressed during development, and sometimes at run-time. Things like compressing textures to .dds formats are done during development, and compressing models can be done around when the shaders are compiled at run-time. These things are already done in most games - not all, but most. I do game development, so yes, it pays to know these things.
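    For a sense of why that block compression matters for bandwidth, a quick footprint comparison (the bytes-per-texel rates are the standard BC/DXT block sizes; the 2048x2048 texture is just an example, and mipmaps are ignored):

    ```python
    # Bytes per texel: RGBA8 is 4; BC1/DXT1 packs a 4x4 block into 8 bytes (0.5);
    # BC7 packs a 4x4 block into 16 bytes (1.0).
    formats = {"RGBA8 (uncompressed)": 4.0,
               "BC1/DXT1 (lossy)": 0.5,
               "BC7 (lossy, higher quality)": 1.0}

    w = h = 2048
    for name, bpt in formats.items():
        print(f"{name:<28} {w * h * bpt / 2**20:6.1f} MiB")
    # RGBA8 ~16 MiB vs BC1 ~2 MiB: an 8x cut in footprint and fetch traffic.
    ```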
    I wish I could put into words how [censored] difficult it is to fit a modern city into 8GB of VRAM and not have every block repeat like a chase scene in a Hanna-Barbera cartoon. Let's just say it's really difficult. You have to pull out all the stops you can figure out to get it looking even half decent.
    The rest of what you said, yeah, I'm not going to argue with. Heck, even my RX 480 was memory-bandwidth-starved with a 256-bit link to GDDR5; I don't know if the 2070 Super is, as I haven't tried - I just left it at stock settings. They can help APUs by working on the latency of getting to system RAM once the cache is full, but really, yes, we DO need a better link there... however, to that end, you might as well just put it all on a card so that the price stays reasonable.
    DDR5 memory is supposed to be here around 18 months from now, so maybe it'll help, though I wouldn't expect it in OEM PCs until a few months to a year after that. I have absolutely no clue what the release dates are looking like or what will come when; it's much too early to tell.
    You'd likely need a much better interface, likely proprietary (to loosen design restrictions), or a very high pin-count CPU socket (look at all the pins around the GPU core on the back of a video card) to enable a wider link to some very expensive memory that doesn't exist yet. That complexity gets expensive really fast, so you'd end up right back where you started: get a dGPU. Consoles did it by putting a decently powerful GPU on the CPU and using GDDR5 for the entire system's memory... so it can be done. It just wouldn't work with any of the motherboards out there right now.
     
  12. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,975
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    I know that, but I'm saying to compress them even further, at the driver level; even lossy compression. I think some people would be fine with imperfect textures if it meant a significantly smoother experience at 1080p. After all, DLSS gave a pretty "lossy" appearance and some people were ok with it. Modern games soak up a lot of VRAM, hence your next point:
    You don't have to have every block repeat, but you don't need them to look wholly unique either if you just want a pleasant experience. If someone is willing to settle for 720p at low settings, that suggests that 8GB of VRAM isn't warranted for their needs. The thing is, even at 720p+low, you still won't even get 60FPS in some cases.
    And that's the crux of the matter - there is [currently] no realistic way to make a half-decent iGPU perform optimally, so what's the point of releasing APUs with something more powerful than a Vega 8? It's not practical to add more memory channels. There are limits to memory speed, and faster memory makes the cost advantage moot. Adding HBM2 to the die defeats the purpose of having an iGPU. The only software solution involves sacrificing visual quality; perhaps a lot of it.
    To clarify, iGPUs are a good idea - they're a great option for office PCs and HTPCs, where you don't need a whole lot of memory bandwidth for them to be useful. But I don't see how they're ever going to keep up if there isn't a drastic change to memory bandwidth.
     
  13. 0blivious

    0blivious Ancient Guru

    Messages:
    3,301
    Likes Received:
    824
    GPU:
    7800 XT / 5700 XT
    I don't think anyone should be buying these things FOR gaming, but these are so much better than what we used to get as base graphics solutions in cheap laptops and PCs.

    I don't really game on laptops (and rarely even use my laptops), but it's still nice to have the option to do some gaming, and Vega gives you that. Going from a UHD 620 to Vega 6 in my new laptop has opened up a huge library of now-playable stuff. I tested some newer titles (which I have no intention of playing on a laptop) just to get a sense of what this budget APU can do. On the CPU side, this 4500U basically matches my 4790K, which I find incredible in a lightweight, cheap laptop. GPU-wise, it's not far behind a 2GB 940MX, about 10-15% off, and it does so without sounding like a jet engine under load. Overall, it's pretty impressive.

    4500U Vega 6 w/ 16GB dual-channel DDR4-3200 CL22 (driver 20.4.1)
    CPU-Z benchmark - 489 / 2678
    Cinebench R15 - 890 // 54.9 FPS
    Cinebench R20 - 2245
    V-Ray 4.1 - 4990 // 27
    3DMark Sky Diver - 9565
    Unigine Superposition - 1715 (default setting, 1080p) [13 FPS avg]
    Unigine Valley - 1457 (basic 720p setting) [35 FPS avg]
    Game FPS:
    CS:GO [max settings, 1080p] avg 46 // low 29
    Super Mega Baseball 3 [max, 1080p] avg 45 // low 32 <---- this actually played/looked really good. It's staying.
    Insurgency [max settings, 1080p] avg 51 // low 35
    STALKER: Shadow of Chernobyl [max settings, 1080p] avg 50 // low 29
    Civilization VI bench [low settings, 1080p] avg 58 // low 45
    Civilization VI - average turn time: 7.84 seconds <---- (!)
    Assassin's Creed Odyssey bench [low settings, 720p] avg 33 // low 17
    The Witcher 3 [low settings, 720p] avg 35 // low 27 <---- unplayable, but impressive.

    Games without internal benchmarks were 3-minute gaming runs with Fraps keeping score of the FPS (lows are absolute minimums, not "1%" lows).
    I tried drivers 20.5.1 and 20.7.1, but they give 3-5% lower performance than 20.4.1 does.
     
    Last edited: Jul 12, 2020
