Guru3D Folding@Home Information/Help Thread {Start Here New Users}

Discussion in 'Folding@Home - Join Team Guru3D !' started by aircool, Aug 12, 2007.

  1. PantherX

    PantherX Folding@Home Team Leader Staff Member

    Messages:
    1,364
    Likes Received:
    0
    GPU:
    Gigabyte GTX 1080 Ti
If those calculations are from V7, then please be aware of the PPD/TPF bug (Ticket #395).

    I have read that the PPD is higher than the usual SMP WUs. Not sure if those systems were dedicated or not.
     
  2. k1net1cs

    k1net1cs Ancient Guru

    Messages:
    3,783
    Likes Received:
    0
    GPU:
    Radeon HD 5650m (550/800)
For that project Athlonite mentioned, on my 2500K system the PPD is higher than, say, 7611 or 7200.
With the 2500K @ 4.3GHz, TPF is ~48 secs; PPD...I think it's 20K-ish.

Though I do feel the bonus points (for 7610, 7611 and 7200) are a bit less than before.
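For anyone wanting to sanity-check numbers like these, the back-of-envelope PPD math is straightforward. The sketch below ignores the quick-return bonus, and the ~1,100-point credit used in the example is purely an assumption to show that a ~48 s TPF can land in the "20K-ish" range; none of these figures come from an official source.

```python
# Rough PPD estimate from TPF (time per frame), ignoring the quick-return
# bonus. The credit value in the example is a made-up illustration.

def estimate_ppd(credit, tpf_seconds, frames=100):
    """WUs completed per day, times credit per WU."""
    seconds_per_wu = tpf_seconds * frames      # a WU reports 100 frames
    wus_per_day = 86400 / seconds_per_wu       # seconds in a day / WU time
    return credit * wus_per_day

# e.g. TPF of 48 s on a hypothetical ~1,100-point WU:
print(round(estimate_ppd(1100, 48)))  # -> 19800
```

With bonus points in play the real PPD can be higher, which is why tools that include the bonus and tools that don't disagree.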
     
  3. Athlonite

    Athlonite Maha Guru

    Messages:
    1,127
    Likes Received:
    0
    GPU:
    Nitro+RX580 8GB OC
It should be higher, though, as they're much smaller WUs: 250,000 steps versus, say, 5,000,000 steps for normal SMP WUs.

NVM, back to normal SMP WUs now :)
     
    Last edited: Dec 31, 2011
  4. Athlonite

    Athlonite Maha Guru

    Messages:
    1,127
    Likes Received:
    0
    GPU:
    Nitro+RX580 8GB OC
Hmmmm, EOC is telling me that I have points for only 3 WUs out of the 68 WUs done, completed, and returned with no errors. WTF, where are the rest of them?
     

  5. iancook221188

    iancook221188 Ancient Guru

    Messages:
    1,716
    Likes Received:
    0
    GPU:
    GTX 670 SLI / GTX 460 SLI
I don't see that. Have you made any client changes?
     
  6. Athlonite

    Athlonite Maha Guru

    Messages:
    1,127
    Likes Received:
    0
    GPU:
    Nitro+RX580 8GB OC
No config changes except for the driver; everything else is the same. In fact, I installed over the top of 7.1.48 and everything worked fine.

But going through my F@H log I counted 68 WUs completed and returned, while EOC and Stanford were only showing I had completed 3 (it's now at 5), which is a long way off what I'd counted in the log.

The only errors it shows are some clock skew warnings:
13:01:22:WARNING:WU00:FS02:Detected clock skew, adjusting time estimates
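Counting returned WUs in a v7 log can be scripted rather than done by eye. The sketch below assumes (based on the v7 log excerpts quoted later in this thread, not on official documentation) that every successfully returned WU logs a "Server responded WORK_ACK" line, and counts those.

```python
# Count WUs the v7 log reports as accepted by the server, to cross-check
# against EOC/Stanford stats. Assumption: each successful return logs a
# "Server responded WORK_ACK" line, as in the log excerpts in this thread.

def count_completed_wus(log_lines):
    return sum(1 for line in log_lines
               if "Server responded WORK_ACK" in line)

# Usage with a real log file would look like:
# with open("log.txt") as f:
#     print(count_completed_wus(f))

sample = [
    "02:24:15:Unit 02: Upload complete",
    "02:24:15:Server responded WORK_ACK (400)",
    "07:55:26:Unit 01: Upload complete",
    "07:55:26:Server responded WORK_ACK (400)",
]
print(count_completed_wus(sample))  # -> 2
```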
     
  7. k1net1cs

    k1net1cs Ancient Guru

    Messages:
    3,783
    Likes Received:
    0
    GPU:
    Radeon HD 5650m (550/800)
Are you sure those other WUs were finished before their expiration deadlines?

In any case, if possible, upload the log file that contains these uncounted finished WUs.
Don't forget to redact the user name within the log file if you want to.
(Notepad in Win 7 should be able to do just that.)
Also, you might want to report this in the F@H forum.
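For anyone who prefers scripting the redaction over Notepad's find-and-replace, a minimal sketch is below. The log line shown is made up for illustration; only the plain-text substitution is the point here.

```python
# Redact a user name before sharing a log file. The sample line and the
# placeholder text are illustrative, not a real F@H log format guarantee.

def redact(text, username, placeholder="REDACTED"):
    return text.replace(username, placeholder)

line = "01:23:45:  user: Athlonite"   # hypothetical log line
print(redact(line, "Athlonite"))      # -> 01:23:45:  user: REDACTED
```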
     
  8. Athlonite

    Athlonite Maha Guru

    Messages:
    1,127
    Likes Received:
    0
    GPU:
    Nitro+RX580 8GB OC
Here's the log. It may not be complete, but you'll get the idea.

    https://skydrive.live.com/redir.aspx?cid=c5d80c59a9cd9bb7&resid=C5D80C59A9CD9BB7!346&parid=C5D80C59A9CD9BB7!136

EOC now says I have done 7 WUs.


This is for the SMP WUs:

    02:23:51:Unit 02: 32.27%
    02:23:57:Unit 02: 49.43%
    02:24:03:Unit 02: 66.60%
    02:24:09:Unit 02: 83.76%
    02:24:15:Unit 02: Upload complete
    02:24:15:Server responded WORK_ACK (400)
    02:24:15:Final credit estimate, 793.00 points

and for the GPU WUs I see this:

    07:54:24:Unit 01: 15.17%
    07:54:30:Unit 01: 23.59%
    07:54:36:Unit 01: 32.00%
    07:54:42:Unit 01: 40.24%
    07:54:48:Unit 01: 48.49%
    07:54:54:Unit 01: 56.90%
    07:55:00:Unit 01: 65.31%
    07:55:06:Unit 01: 73.07%
    07:55:12:Unit 01: 81.48%
    07:55:18:Unit 01: 89.89%
    07:55:24:Unit 01: 98.14%
    07:55:26:Unit 01: Upload complete
    07:55:26:Server responded WORK_ACK (400)
    07:55:26:Final credit estimate, 1835.00 points
     
  9. Psychlone

    Psychlone Ancient Guru

    Messages:
    3,688
    Likes Received:
    0
    GPU:
    Radeon HD5970 Engineering
I think I need a little help from PantherX or someone else who can walk me through some optimization.

I'm currently just running the SMP client on my i7 3820 and pulling roughly 18,000 to 20,000 PPD, but I'm wondering if there's a little more that I could do on my own rig here.
I've already tried the GPU client for AMD, but the wattage used (~450W constantly) weighed heavily against the extra 10,000 PPD.
I just need to know if there's anything more I can do on my own rig.



    Psychlone
     
    Last edited: Sep 5, 2012
  10. k1net1cs

    k1net1cs Ancient Guru

    Messages:
    3,783
    Likes Received:
    0
    GPU:
    Radeon HD 5650m (550/800)
Aside from OC-ing your CPU more or getting an extreme part? Unlikely.
Of course, there's also the option of getting another system that's identical on the processor alone, with non-enthusiast parts for the rest of it...but I don't think the added wattage would be smaller than if you just used your card.

Wattage will always be a problem, especially if you put a GPU into the equation.
Even if you end up utilizing your card, keep in mind that you'll lose around two active CPU cores to the card since it's dual AMD GPUs.
     

  11. sykozis

    sykozis Ancient Guru

    Messages:
    20,009
    Likes Received:
    29
    GPU:
    XFX RX 470
    My CPU isn't getting enough PPD to even make it worthwhile to keep it going.... Thinking about putting the 560Ti back in....again....and letting it chew through another couple dozen. I need a better PSU soon....
     
  12. Psychlone

    Psychlone Ancient Guru

    Messages:
    3,688
    Likes Received:
    0
    GPU:
    Radeon HD5970 Engineering
OK, so what if I wanted to try a different GPU client? I know literally NOTHING about this anymore since there's a GUI now and everything is different than it was a few years back.
I tried just adding GPU0 and GPU1 to my current setup, which resulted in the higher wattage (clearly expected)...but what about running a separate client? I did a little reading on the GPU Beta 3 client, but it's over 2 years old now. Are ALL the clients inter-related and usable from within the single client control now?

    Just looking to do a little more...


    Psychlone
     
  13. k1net1cs

    k1net1cs Ancient Guru

    Messages:
    3,783
    Likes Received:
    0
    GPU:
    Radeon HD 5650m (550/800)
    Ever since v7, everything is under one GUI.
    You just set what kind of client to use for a slot, and the GUI will do the rest, such as downloading the proper client for the slot.
    e.g. if you set a GPU slot, it'll automatically download the proper (usually the latest) client for the GPU you're using; in your case, the OpenCL-based client for AMD GPUs.

You can still manually set up separate clients using the CLI like before, I think...but CMIIW.
Haven't done that for quite a while, to be honest, not since trying the v7 betas.

    Not really "inter-related", just being managed under one GUI.
    They're still separate clients (slots) running on their own, working on their own WUs.
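As a rough illustration of the one-GUI model: in v7 each slot is declared in the client's config.xml (or through FAHControl's Configure dialog), and the client fetches the right core for each slot. The fragment below is a sketch from memory; slot type names and option syntax have varied between v7 builds, so treat it as illustrative rather than authoritative.

```xml
<config>
  <!-- identity (values are placeholders) -->
  <user v='YourName'/>
  <team v='0'/>

  <!-- one SMP slot and one GPU slot; the client downloads
       the appropriate core for each slot on its own -->
  <slot id='0' type='SMP'/>
  <slot id='1' type='GPU'/>
</config>
```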
     
  14. Psychlone

    Psychlone Ancient Guru

    Messages:
    3,688
    Likes Received:
    0
    GPU:
    Radeon HD5970 Engineering
    Got it. Thank you for your input!


    Psychlone
     
  15. Psychlone

    Psychlone Ancient Guru

    Messages:
    3,688
    Likes Received:
    0
    GPU:
    Radeon HD5970 Engineering
Quick question that I haven't found a good answer for, but I think I already understand (just need clarification).

I started a WU on GPU1 of my 5970 and left GPU0 alone. It increased my watts used to 405 from ~350/360, which isn't much, but my PPD went WAAAY down, from ~20,000 to just a hair under 12,000.
The SMP client has completed more than 1 unit, but the GPU picked up a big one and will take some time before it's completed.
I can stop the GPU client since it's only a few percent into it - they're not losing much if I pull the plug on it, but I don't fully understand why my PPD dropped so significantly by ADDING a client.

    My SMP client is pushing a 9188 credit WU as I write this, and the GPU client is only pushing an 1825 credit WU but is going to take 1 full day longer.


Is this because each GPU client also requires part of the actual CPU cores to function, and since those cores are already crunching numbers in their own SMP client, the SMP client takes precedence over the GPU and forces it to a lower priority, thus making the GPU client ???



    Thanks for all the help. I want to do what I can! I'm all-in.

    Psychlone


    ***EDIT: Just opened up HFM and saw my PPD values were way different than the console is displaying.
    The SMP client is pushing 4114 PPD and will be done in an hour.
    The GPU client is pushing 7585 PPD and will be done in a day.

So the PPD values are way off... Do you think I should let the GPU client finish its work on this WU, or ditch it in favor of the quicker SMP on my rig and quit worrying about the GPU at all?

And why is it that AMD just doesn't crunch numbers as well as nVidia? No one has ever really explained why - they're clearly different approaches to the same problem, they're BOTH clearly parallel processing, and they're both highly optimized for their jobs... so why the bias towards nVidia?
     
    Last edited: Sep 6, 2012

  16. k1net1cs

    k1net1cs Ancient Guru

    Messages:
    3,783
    Likes Received:
    0
    GPU:
    Radeon HD 5650m (550/800)
When you only have one slot (e.g. the SMP client) working, the approximate PPD has only one variable to average from: the SMP client's PPD.
If you have two slots (e.g. SMP & GPU) and one is stopped halfway, the GUI will still average both PPDs, one of which will have a decreasing PPD over time, dragging the average PPD for both slots down further and further.

    So on and so forth.

Keep in mind that SMP WUs carry bonus points derived from how fast you can finish the WU, while points for GPU WUs are static.
This is probably why adding GPU WUs to the mix impacts the average PPD shown by the GUI more than SMP WUs do: there are no bonus points from GPU WUs.
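The bonus in question is Stanford's quick-return bonus (QRB): final credit is the base credit scaled by max(1, sqrt(k * deadline / elapsed)), where k is a per-project constant. The numbers below (600 base points, k = 0.75, 6-day deadline) are purely illustrative.

```python
# Quick-return bonus sketch: final = base * max(1, sqrt(k * deadline / elapsed)).
# k, base credit, and deadline below are illustrative, not real project values.
import math

def final_credit(base, k, deadline_days, elapsed_days):
    bonus = math.sqrt(k * deadline_days / elapsed_days)
    return base * max(1.0, bonus)   # the bonus never reduces credit below base

# Finishing a hypothetical 600-point WU (k = 0.75, 6-day deadline) in half
# a day earns a 3x multiplier:
print(final_credit(600, 0.75, 6, 0.5))  # -> 1800.0
```

Because the multiplier grows as elapsed time shrinks, a fast SMP slot can contribute far more PPD than its base credit suggests, while a static-credit GPU WU cannot.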

    CMIIW, though.
    These are purely from my observations so far.
You'd probably get more concrete answers in the F@H forum.

    IIRC the GPU client has a higher priority than the SMP client (in terms of CPU usage), so the SMP client shouldn't be taking over the GPU client's "needs", so to speak...but maybe PantherX can clarify this.

It's also possible that, at the time, there was something else using the GPU, such as watching a video using DXVA.

AFAIK the GUI approximates each slot's (client's) PPD by how fast you're reaching each 1% checkpoint of the single WU being worked on, while HFM averages over the WUs you've finished before...though I'm probably mistaken.

    Personally I'd wait at least a day, but it's entirely up to you.
    If you're that curious then wait.
    If you're more concerned about electricity bill, ditch the GPU slot, but it's recommended to at least finish its current WU first.

The nVidia GPU client uses CUDA, and the AMD GPU client uses OpenCL.
It just so happens that the OpenCL-based client has to use some CPU resources.

Personally, I don't think of it as bias, but probably either a limited understanding of OpenCL (on the client coder's part) or a limitation of OpenCL itself...or both.
nVidia is also known to be more proactive than AMD in helping developers implement their standards, irrespective of whether they're proprietary (CUDA) or not.
The last thing I heard from Stanford (on the F@H forum) is that there simply aren't enough resources to develop and/or support the AMD GPU client further.
That's also the reason there's no way to fold on the GPU in GNU/Linux if you have an AMD card - at least that was the case back when I finally lost interest in folding on Linux Mint.
     
  17. Psychlone

    Psychlone Ancient Guru

    Messages:
    3,688
    Likes Received:
    0
    GPU:
    Radeon HD5970 Engineering
    Thank you for breaking all that down for me. I had a bit of a grasp, but really didn't understand a chunk of it until now.

I'll go ahead and let the GPU client finish just so it hasn't gone to waste... it's several more percent into it now anyway... it's only a day.
I'm not super energy-bill conscious, even though it's me who pays the bills, but I certainly don't want to go overboard. I've yet to see what the power bill will look like for a whole month since I only started a week ago... I can make more concrete decisions after this month is over.

    Thanks yet again,
    Psychlone
     
  18. k1net1cs

    k1net1cs Ancient Guru

    Messages:
    3,783
    Likes Received:
    0
    GPU:
    Radeon HD 5650m (550/800)
A bit of a correction regarding the GUI's method of counting the PPD across all slots:
it should be the sum of the PPDs from each slot, not the average.
My thinking was carried away by the EOC stats.
     
  19. PantherX

    PantherX Folding@Home Team Leader Staff Member

    Messages:
    1,364
    Likes Received:
    0
    GPU:
    Gigabyte GTX 1080 Ti
The GPU priority is higher than SMP. To explain their interaction in F@H, I'll use some easy-to-understand numbers which aren't real but represent the real situation.

If you're folding on your quad-core CPU with all the cores (100% usage), and the GPU wants to use 25% of the CPU, the SMP client will use 75% of the CPU and the GPU 25%, for an overall CPU usage of 100% (assuming nothing else is running). That's the normal understanding, and it's correct as far as it goes.

However, with F@H, when you start the SMP client on a quad core it creates 4 threads, one per CPU, to make folding faster. If the GPU then takes one CPU, you would assume the slowdown is 25%, but that isn't the case. The slowdown is significantly more, because the 4 threads need to be synchronized every X seconds, and since the GPU is using 1 CPU, the 4 threads are left to "fight" over the remaining 3 CPUs, which makes the PPD drop significantly. Thus, we recommend that if the GPU uses a significant amount of CPU, you reduce the number of CPUs allocated to the SMP client to the next lower even number, i.e. from 8 to 6.

I hope this has helped you understand it better.
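That CPU reduction can be applied in v7 via the slot's cpus option (in config.xml or through FAHControl's slot settings). This fragment is a sketch from memory; the exact option syntax may vary by v7 build, so verify against your client's own config before relying on it.

```xml
<config>
  <!-- pin the SMP slot to 6 of 8 threads, leaving headroom
       for the GPU slot(s); the values are illustrative -->
  <slot id='0' type='SMP'>
    <cpus v='6'/>
  </slot>
  <slot id='1' type='GPU'/>
</config>
```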
     
  20. Psychlone

    Psychlone Ancient Guru

    Messages:
    3,688
    Likes Received:
    0
    GPU:
    Radeon HD5970 Engineering

Understood. I'm going to try setting the affinity of the SMP client to 6 cores and try the GPU0 and GPU1 clients again as a test for the next few days.
I'm pumping out a pretty decent PPD from my rig right now, but I always think I can do a little better.
I'll also monitor my wattage usage since that's going to be a bit of a problem... I'm the one paying the bills! ;)


    Psychlone
     
