Job Efficiency/Client Issue

Discussion in 'Folding@Home - Join Team Guru3D !' started by BLEH!, Jun 24, 2014.

  1. BLEH!

    BLEH! Ancient Guru

    Messages:
    6,402
    Likes Received:
    421
    GPU:
    Sapphire Fury
    Hello fellow folders

    I'm getting some odd results here. I normally average 22K PPD on my 980X, but the latest job has dropped to 20K. Is there any reason for this? Are some jobs more efficient than others?

    I'm also having an issue where the client doesn't automatically download the next job; I usually notice it when I get to work after leaving it folding overnight. Any ideas on a fix?

    Ta

    BLEH!
     
  2. PantherX

    PantherX Folding@Home Team Leader

    Messages:
    1,380
    Likes Received:
    0
    GPU:
    Nvidia GTX 1080 Ti
    There is a variation in PPD if your system isn't a clone of the benchmark system (http://folding.stanford.edu/home/faq/faq-points/#ntoc9). I am not sure exactly how large the variation would be (IIRC, the base-point PPD is +/- 10%), but it is not uncommon for Project A to get X PPD while Project B gets Y PPD and Project C gets something between X and Y.
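
    For illustration only (using the 22K and 20K figures from the opening post, and treating that variation as a rough percentage band), the swing you're seeing is about a 9% drop, which sits inside a +/-10% range:

    ```python
    # Rough sanity check: is a 22K -> 20K PPD swing within a ~+/-10% band?
    # Figures are taken from the opening post; nothing here is official F@H maths.
    baseline_ppd = 22_000
    current_ppd = 20_000

    drop = (baseline_ppd - current_ppd) / baseline_ppd
    print(f"PPD drop: {drop:.1%}")  # ~9.1%, inside a +/-10% band
    ```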

    Regarding the download issue, what client version are you using? Assuming it is V7.4.4, do note that there is a known issue: if a WU is being downloaded and the connection resets, the client can't recover. The workaround is to restart FAHClient (or the system, whichever is easier). Hopefully, it will be fixed in the next release (https://fah.stanford.edu/projects/FAHClient/ticket/983).
     
  3. BLEH!

    BLEH! Ancient Guru

    Messages:
    6,402
    Likes Received:
    421
    GPU:
    Sapphire Fury
    Cheers for that. Point A) makes sense.

    Point B) Yes, I am on 7.4.4, and on work's Wi-Fi, so if that resets... yeah. That'll be why it's happening.

    Cheers.
     
  4. iceagex

    iceagex Guest

    Messages:
    23
    Likes Received:
    0
    GPU:
    GTX 1070 G1 @ 2100 - 8775
    meh.....
     

  5. BLEH!

    BLEH! Ancient Guru

    Messages:
    6,402
    Likes Received:
    421
    GPU:
    Sapphire Fury
    Upping cache speed makes a HUGE difference on X58. :D
     
  6. BLEH!

    BLEH! Ancient Guru

    Messages:
    6,402
    Likes Received:
    421
    GPU:
    Sapphire Fury
    Does AVX make things a lot faster?
     
  7. PantherX

    PantherX Folding@Home Team Leader

    Messages:
    1,380
    Likes Received:
    0
    GPU:
    Nvidia GTX 1080 Ti
    Theoretically, yes. However, while GROMACS has included support for AVX, F@H has yet to include it; it is on their to-do list. Do note that while the mainstream FahCores (FahCore_a3, FahCore_a4, FahCore_a5) don't support AVX yet, there is an experimental FahCore called ocores which does use AVX. There are still a lot of optimizations left to do there, though, since its CPU scaling is very poor (it uses OpenMM as opposed to GROMACS). CPU-specific optimizations for ocores are planned to be done by the end of this year, but do note that the plan may be subject to change.
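
    For what it's worth, the Gulftown 980X doesn't support AVX at all, so it wouldn't benefit even once support lands. If you want to see what a given box advertises, something like this works on Linux (a rough sketch only; it just reads /proc/cpuinfo, and the FahCores decide for themselves what they actually use):

    ```python
    # Check whether the CPU advertises AVX (Linux only; reads /proc/cpuinfo).
    # Purely illustrative -- not part of any F@H tooling.
    def cpu_has_avx(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "avx" in line.split()
        return False

    if __name__ == "__main__":
        print("AVX supported:", cpu_has_avx())
    ```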
     
  8. BLEH!

    BLEH! Ancient Guru

    Messages:
    6,402
    Likes Received:
    421
    GPU:
    Sapphire Fury
    Ah, I see. I've started folding on my 3930K at home as well as the 980X at work; the 3930K is generating twice the PPD, possibly down to higher memory bandwidth, clock speed (3.4 vs. 4.2 GHz) and higher IPC. Overall about 65K a day.
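
    Back-of-envelope, reading the 3.4 as the 980X and the 4.2 as the 3930K (a sketch only; it assumes PPD scales roughly with clock speed, and the quick-return bonus tends to reward faster completion more than linearly, so this is only an approximation):

    ```python
    # Rough split of the ~2x PPD gap between the 3.4 GHz 980X and 4.2 GHz 3930K.
    # Assumes near-linear scaling with clock, which is only an approximation.
    clock_980x = 3.4
    clock_3930k = 4.2

    clock_ratio = clock_3930k / clock_980x   # ~1.24x from frequency alone
    observed_ratio = 2.0                     # "twice the PPD" from the post above

    print(f"From clock speed alone: {clock_ratio:.2f}x")
    print(f"Left over for IPC/memory/etc.: {observed_ratio / clock_ratio:.2f}x")
    ```

    Since the mainstream FahCores don't use AVX yet (see PantherX's post above), the leftover is presumably down to Sandy Bridge-E's IPC and memory rather than the 3930K's AVX support.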
     
  9. PantherX

    PantherX Folding@Home Team Leader

    Messages:
    1,380
    Likes Received:
    0
    GPU:
    Nvidia GTX 1080 Ti
    That is AWESOME :D

    Last I checked, RAM latency/bandwidth don't have a significant impact on normal WUs, only on bigadv WUs (those currently require a minimum of 24 CPUs). Pretty sure that the difference in CPU frequency, and possibly the newer architecture, accounts for the increased PPD.
     
  10. BLEH!

    BLEH! Ancient Guru

    Messages:
    6,402
    Likes Received:
    421
    GPU:
    Sapphire Fury
    Could be something to do with cache as well; OCing the cache on the old Gulftown chip really helps with the PPD.
     
