Alienware Area 51 Ryzen Threadripper Benchmarks

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Aug 7, 2017.

  1. D3M1G0D

    D3M1G0D Guest

    Messages:
    2,068
    Likes Received:
    1,341
    GPU:
    2 x GeForce 1080 Ti
    It's just one of those tasks that never really get tested on these chips, yet it's perfectly suited for them. The focus always seems to be on rendering, gaming, and synthetic workloads. I've also seen reviewers try to simulate a full-system load by clumsily running a synthetic CPU test alongside a game benchmark, when it would be much easier to just use BOINC for it - it's a real-world app crunching real data. They probably just don't know about it.
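    For anyone curious, here's a rough sketch of driving it headless for a load test with boinccmd (the project URL and account key are just placeholders - use whichever project you like):

# attach the client to a project and let it crunch on all cores, always
boinccmd --project_attach https://www.worldcommunitygrid.org/ YOUR_ACCOUNT_KEY
boinccmd --set_run_mode always
# confirm tasks are actually running before you start logging temps/clocks
boinccmd --get_cc_status
boinccmd --get_tasks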

    FYI, my upcoming Threadripper system will be dedicated to BOINC. I might also use it for some media consumption but I do not plan on gaming on it at all. If not for this, I would have no interest in TR. I know it's a niche use case, but HEDT itself is a niche market.

    My issue with BOINC regarding GPUs is the lack of OpenCL support. Almost all GPU projects support CUDA, but only a handful support OpenCL, which limits my vendor choices. I really wish this weren't the case, as I prefer open standards over proprietary ones - it just goes to show the dominance of Nvidia cards, even in compute apps.
     
  2. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    I completely agree. Not only is BOINC a good real-world test that pushes a CPU and its multitasking abilities to the limit, but it's also useful to know which CPU is best suited for BOINC. I'm not aware of any source with a comprehensive, usable list of hardware performance - the ones that exist are difficult to read since you can't sort or filter them.
    And again, GPU testing would be useful too. I have a dedicated BOINC rig with 4 GPUs in it (a mix of Nvidia and AMD) and it's hard to find evidence of which project is best suited for which GPU. For now, I select projects based on what each card is best at. For example: gpugrid depends on CUDA, so I've got a Quadro 4000 dedicated to that, and milkyway@home leans heavily on FP64, so I have a V7900 dedicated to that.

    That's pretty generous of you. My BOINC rig is made entirely of retired/used parts (though I think I may need to buy a brand-new PSU for it soon). I use it to keep my apartment warm during the winter, since the apartment has electric heat anyway. You should follow in my footsteps and use GPUs - you have all those PCIe lanes begging to be used (although you could probably get by with a bunch of x4 slots, since GPGPU tasks don't need much bandwidth).

    At this point most projects support OpenCL. I think the only one that adamantly refuses to support it is gpugrid. Whether the OpenCL apps are better than their CUDA counterparts is a different story; it's a little hard to gauge, since Nvidia has an obvious bias against OpenCL. If you're interested, here are some OpenCL-compatible projects:
    universe@home
    milkyway@home
    einstein@home
    seti@home
    folding@home (not a BOINC project)
    moo
    Collatz Conjecture
    primegrid
    sometimes World Community Grid
    And I think BURP does too, but I'm not 100% sure.
     
    Last edited: Aug 7, 2017
  3. D3M1G0D

    D3M1G0D Guest

    Messages:
    2,068
    Likes Received:
    1,341
    GPU:
    2 x GeForce 1080 Ti
    I usually select projects based on what they do and what I can spare. It's usually a mix of WCG, Asteroids@Home, Einstein@Home and GPUGRID, using both the CPU and GPU(s). I've found different projects use resources differently. GPUGRID pushes out a lot of heat but doesn't actually max out GPU usage (I can run a game while computing), while Asteroids@Home maxes out the GPU but runs cooler. I've also noticed differences in CPU usage (the CPU runs cooler on WCG than on A@H), which may come down to the instruction sets used, I guess. For testing purposes, a mix of different projects would probably yield a good average.

    Oh, I use my GPUs as well. My Ryzen system has Nvidia cards and my Core i7 system has AMD ones, and I've set up projects appropriate for each. I'm thinking of retiring my Intel system when I get TR (to help cover the cost) and fitting the new build with its AMD cards - it'll be an all-AMD build then. ;)

    I currently contribute to Einstein@Home with my AMD cards, but not much else. I've thought of trying PrimeGrid but I'm not sure it's worth it. I wish WCG had a GPU app (I think they were testing one a few years ago but stopped). I was also thinking of going back to F@H for my AMD GPUs - I originally started with it but switched to BOINC, as I preferred the interface.
     
  4. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    In case you're not aware, you can actually have a single GPU run multiple tasks simultaneously, even from different projects. That requires some tweaking of the project's config XML (a rough sketch is below). Some GPU tasks just don't use every core in your GPU, so many of them sit idle. I personally stick with (by today's standards) mid-range workstation GPUs, since they're often single-slot cards (allowing you to easily install 4+ of them) and their resources can be saturated by a single task.
    I suspect you're right about the heat output - some tasks are just less complex per clock than others.
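    As mentioned above, here's a minimal sketch of that config tweak - an app_config.xml that lets two tasks share one GPU. The data directory path below is the Debian/Ubuntu default, and the project folder and app name are placeholders; check the project's own app names before using it:

# drop the file into the project's folder under the BOINC data directory
cat > /var/lib/boinc-client/projects/example.project.org/app_config.xml <<'EOF'
<app_config>
  <app>
    <name>example_gpu_app</name>    <!-- placeholder: use the project's real app name -->
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>    <!-- half a GPU per task = two tasks per GPU -->
      <cpu_usage>0.5</cpu_usage>    <!-- CPU share budgeted per GPU task -->
    </gpu_versions>
  </app>
</app_config>
EOF
# then tell the client to re-read its config files (or just restart it)
boinccmd --read_cc_config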

    I personally have no interest in PrimeGrid - I'd rather mine Ethereum. And yeah, it'd be nice if WCG got back into GPUs, and Rosetta too. I wish gpugrid gave OpenCL another shot (they offered it at one point, but I'm not sure what happened to it).
     

  5. Apatch

    Apatch Guest

    Messages:
    55
    Likes Received:
    0
    GPU:
    Asus Strix GTX1080 OC

    Tell that to my 2500K @ 4.1 GHz, which is almost always at 100% CPU usage with six browser windows across three monitors (3440x1440 + 1440x2560 + 1200x1920), and still at 20-60% when minimised. High-res live charts - with plenty of CPU/memory leaks, I bet - don't make it any better, and constantly relogging and reloading presets for all those tabs doesn't make life easier either; it's far too time-consuming. Chrome extensions like The Great Suspender don't work as they should, so that's not a solution. Maybe creating a VM just for browsing, which could be suspended right before gaming or any other CPU-heavy task, would be better - even seconds matter here.

    BTW

    folding@home is not much different from mining now - see foldingcoin.net. Yeah, the first is altruistic, the second comes with some reward.
     
  6. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    Well yeah, you're obviously going to get some serious load there. I was referring more to having 100 tabs (not windows) loading more typical webpages. Chrome does a pretty good job of reducing the CPU load of tabs that aren't active.
    I haven't bothered checking in Chrome, but Firefox has built-in tab groups. From what I recall, you can have one of these groups "suspended" so it consumes hardly any resources (if any) while inactive. I haven't used it in a couple of years though, so my memory of it is a little hazy, but it is very useful.
    If you use Linux, you can also try CryoPID. It's kind of like hibernation, except for just one program. Pretty cool to be able to pause and resume a task even after rebooting.
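    CryoPID is pretty old at this point; CRIU does the same kind of checkpoint/restore on modern kernels. A rough sketch (needs root; the PID and dump directory are just examples):

mkdir -p /tmp/boinc-checkpoint
# checkpoint the process tree to disk (this stops the original process)
sudo criu dump -t <PID> -D /tmp/boinc-checkpoint --shell-job
# later - even after a reboot - restore it from the saved images
sudo criu restore -D /tmp/boinc-checkpoint --shell-job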

    Thanks for the heads-up, I may look into it. For now, I prefer to avoid Folding since (to my knowledge) it isn't a BOINC task; sticking to BOINC-only makes resource and task management easier. Also, in case you're not aware, Gridcoin is a similar concept.
     
    Last edited: Aug 7, 2017
  7. RzrTrek

    RzrTrek Guest

    Messages:
    2,547
    Likes Received:
    741
    GPU:
    -
    I was talking in general terms about Ryzen's overall gaming performance, and Threadripper is no exception.
     
  8. Picolete

    Picolete Master Guru

    Messages:
    494
    Likes Received:
    261
    GPU:
    Sapphire Pulse 6800
    What I'm saying is that it seems they tested at 4K instead of 1080p, so they're limited by the GPU at that resolution and not by the CPU.
     
  9. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
    lol "where it counts", that's a good one. Because, you know, people buying 10+ core Intel CPUs are getting it for gaming, totes dude, totes.

    And i'm sorry, but with technology, the future is always what matters. If you're not ready for the future, you're dead in the water. Being the best today, only means you're crap for tomorrow, if you didn't prepare for the future.

    Now is definitely not what matters, even remotely.
     
    Last edited: Aug 7, 2017
  10. D3M1G0D

    D3M1G0D Guest

    Messages:
    2,068
    Likes Received:
    1,341
    GPU:
    2 x GeForce 1080 Ti
    Hmm, haven't heard of that before. I'll look into it - the more I can get out of my system, the better. Thanks.

    Again, you are missing the point. Would you judge a 7700K on its Cinebench or Blender results? Obviously not, since that's not what the 7700K was designed for, nor what it excels at. Criticizing Threadripper for its gaming performance is like criticizing the 7700K because it sucks at Cinebench. To the people who buy these chips, it doesn't matter.

    As a gamer, you probably think gaming performance is what matters the most. Most people who buy these chips prioritize other tasks though, and you should be cognizant of that. Frankly, I have no plans to install Steam on my TR system, nor do I plan to run a single game or game benchmark on it. The only number I care about is my grid computing numbers - as long as I get the expected results there, I'm happy. :)
     

  11. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
    I don't think he's ever going to get it through his head. People like RzrTrek seem to have this idea that Intel = good, AMD = bad, and then compare apples to oranges to prove it.

    They'll never acknowledge, or criticize, the fact that their precious 7700K beats out Intel's $2,000 processor in what they care about as well. But they don't care about that; they don't fault Intel for it, they only fault AMD.

    They don't compare the 7700K to AMD's 4c/8t processors - where yes, Intel wins, for $150 more. They only care that the 7700K beats out AMD's $500 CPU, which itself beats, matches, and sometimes loses to Intel's $800-1,000 CPUs. But again, the same can be said of the 7700K: in the things they care about, it beats Intel's $800-2,000 CPUs, yet they'll never fault Intel for that, only AMD.

    TL;DR: they don't compare the right CPUs against each other, so that Intel looks better, and they completely refuse to acknowledge that Intel's higher-end processors don't "beat" their precious 7700K in the area "they" think matters most.
     
    Last edited: Aug 8, 2017
  12. Ryu5uzaku

    Ryu5uzaku Ancient Guru

    Messages:
    7,551
    Likes Received:
    608
    GPU:
    6800 XT
    Overall gaming performance is great - not the best, but great - so it's fine. You get a similar experience with a cheaper Ryzen system compared to a 7700K, unless you pair it with a 240 Hz display, most likely.
     
  13. vbetts

    vbetts Don Vincenzo Staff Member

    Messages:
    15,140
    Likes Received:
    1,743
    GPU:
    GTX 1080 Ti
    Please keep this conversation civil, guys, and don't call anyone out. One and only warning.

    But as was said before (very rudely by a few members, might I add), these chips are not geared towards gamers first - we're actually the minority. AMD's advertising makes it look like gamers come first, but that's just AMD using "edgy" marketing techniques. Even so, gaming performance is not far off for Ryzen at least, and in some cases - not just in gaming - Ryzen matches or exceeds Intel. I just think Intel has too big a base to actually fall to Ryzen, but Ryzen is definitely doing some damage in that respect.
     
