Guru3D.com Forums > General Chat > Folding@Home - Join Team Guru3D !
Please join the Folding@Home Research Program, join team Guru3D. Let's rank the charts high and fight disease!
  (#101)
iancook221188
Maha Guru
 
Videocard: GTX 670 SLI / GTX 460 SLI
Processor: 2600k4.5 / i7 970 4.4 WC
Mainboard: X68 UD4 / X58A UD5
Memory: 16GB / 24GB
Soundcard:
PSU: TX850 / AX850
Default 12-23-2010, 19:02 | posts: 1,716 | Location: uk

you're going to be able to put up some serious numbers soon
   
  (#102)
Ghost15
Member Guru
 
Videocard: EVGA 465 SC 1GB
Processor: Core i7 980x @ 4.15
Mainboard: EVGA X58 3x SLI
Memory: DDR3 6GB @ 2004 mhz
Soundcard: Creative X-Fi Fatality
PSU: Corsair AX-850
Default 12-23-2010, 20:00 | posts: 77 | Location: USA,RI

Wow nice, I better pass you while I have the chance! Although I don't think I can hold you off with that setup now


   
  (#103)
J_J_B
Master Guru
 
 
Videocard: EVGA: 2xSLI 580HC, 480HC
Processor: Intel Core i7 965 @ 4.1
Mainboard: EVGA X58 Classifed 3-way
Memory: 12GB Corsair DDR3-1600
Soundcard: (built-in)
PSU: BFG EX1200, VisionTek 450
Default 12-29-2010, 20:59 | posts: 159 | Location: NH

Been trying to optimize folding performance with the two 580s, single 480, and SMP2 client... actually drove into the office on my day off to try to optimize things and will likely be doing the same tomorrow. I'm having trouble keeping the TPF on the SMP2 low enough to meet bigadv project deadlines. I had it down to 3 hrs under the deadline last evening but that is too close for comfort (I do want to play an occasional game). For now I have settled on the following settings. I now need to wait a couple of hours before I can ascertain the new TPF on the SMP2 client:
  • 580 SLI disabled (per nVidia Control Panel)
  • 480 dedicated to PhysX (setting in nVidia Control Panel)
  • SMP2 Client folding on 6 threads
  • SMP2 Client priority set to "low"
I've been experimenting with the "idle" priority in the SMP2 client's advanced settings and might need to use that setting in conjunction with the client using 7 threads and the exclusion of one of the GPU clients.

It seems the advanced research I had posted in regards to thread count versus TPF is invalid, since the addition of GPUs incurs additional processor overhead beyond the minimum of what "should" be needed to keep the percent usage of each GPU high.

I can see where this is going now... I'm going to need to start saving for a six-core 980x. I'm hoping that prices on Core i7 processors will drop over the next couple of months with the introduction of Sandy Bridge. That is probably wishful thinking with the 980x (or 970) though.



Last edited by J_J_B; 12-29-2010 at 21:05.
   
  (#104)
iancook221188
Maha Guru
 
Videocard: GTX 670 SLI / GTX 460 SLI
Processor: 2600k4.5 / i7 970 4.4 WC
Mainboard: X68 UD4 / X58A UD5
Memory: 16GB / 24GB
Soundcard:
PSU: TX850 / AX850
Default 12-29-2010, 23:02 | posts: 1,716 | Location: uk

what tpf are you getting with 6 threads jjb?
   
  (#105)
J_J_B
Master Guru
 
 
Videocard: EVGA: 2xSLI 580HC, 480HC
Processor: Intel Core i7 965 @ 4.1
Mainboard: EVGA X58 Classifed 3-way
Memory: 12GB Corsair DDR3-1600
Soundcard: (built-in)
PSU: BFG EX1200, VisionTek 450
Default 12-30-2010, 02:04 | posts: 159 | Location: NH

Seeing a TPF of 54 minutes with the 3 GPUs running. That only leaves about 6 hours to the deadline. The predicted credit (HFM.NET) is 57,226 points. Before, when running only 1 GPU client and the SMP2 on 7 threads, the TPF was 34 minutes and the predicted credit about 72,000 points.

Time for another test... overnight I'm going to try SMP2 on 7 threads with SMP2 priority set to "idle", and eliminate one GPU (the GTX-480). Will see what the TPF is in the morning.

The way I see it, it's about trying to improve the TPF of the SMP2 client so I can occasionally shut it down to do other things, while still meeting the deadline in the end, plus maximizing point production during the time that remains. I may be able to make up for points lost due to shutting down a GPU client by taking a smaller point hit in the SMP2 client.

What I have been neglecting to mention is that there also needs to be enough CPU headroom left over that the virtual machine I run for business purposes doesn't cause such a deficiency in CPU power that the GPU clients stop folding or are significantly impaired. A possible problem with setting the SMP2 client priority to "idle" is that IT will become what is throttled down, instead of the GPU clients, when the virtual machine is running.
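For anyone following the deadline juggling in this thread, the math is simple enough to sketch. A rough back-of-the-envelope in Python, assuming the usual 100 frames (1% steps) per work unit and the 4-day bigadv deadline mentioned later in the thread; this is my own sketch, not any official client tool:

```python
# Rough bigadv deadline check: a WU is 100 frames, so total
# runtime is TPF x 100 frames. Compare that to the deadline.
def deadline_slack_hours(tpf_minutes, deadline_days=4.0, frames=100):
    """Hours to spare (negative means the WU will miss the deadline)."""
    runtime_hours = tpf_minutes * frames / 60.0
    return deadline_days * 24.0 - runtime_hours

# A TPF of 54 min leaves about 6 hours of slack on a 4-day deadline,
# matching the "only about 6 hours" figure quoted in this thread:
print(round(deadline_slack_hours(54)))   # 6
# A TPF of 34 min leaves well over a day of slack:
print(round(deadline_slack_hours(34)))   # 39
```

The slack number makes it obvious why a 54-minute TPF is "too close for comfort" once you want to pause the client to game or run a VM.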


   
  (#106)
PantherX
Folding@Home Team Leader
 
Videocard: Dual ASUS Turbo GTX 1070
Processor: Core i7-6700K @ 4.6 {1.4}
Mainboard: ROG Maximus VIII Formula
Memory: 32 GiB @ DDR4-2,700 MHz
Soundcard: ASUS Maximus VIII Formula
PSU: ST75F-GS 850Watts
Default 12-30-2010, 04:59 | posts: 1,357

IIRC, with a TPF of ~54 minutes, the PPD would be ~16K, but if you fold normal SMP2 WUs, my guess is that your PPD would be ~15K. The difference is only ~1K, so I would suggest falling back to normal WUs since they give you roughly the same PPD and you don't have to worry about the deadlines as much. My guess is that the highest PPD (without causing a headache for you) would be 3 GPUs with 1 normal SMP2 (-smp 6), and you can still work easily on your system
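The PPD figures being compared here follow from a simple relationship: points per day = WU credit divided by the days the WU takes (TPF times 100 frames). A minimal sketch, ignoring the bonus-point scheme that tools like HFM.NET factor in, so the numbers are only ballpark:

```python
def ppd(tpf_minutes, wu_credit, frames=100):
    """Points per day for a WU, ignoring any bonus scheme."""
    days_per_wu = tpf_minutes * frames / (60.0 * 24.0)
    return wu_credit / days_per_wu

# A 54-minute TPF on a 57,226-point WU works out to roughly 15K PPD,
# in the same neighbourhood as the ~16K estimate above:
print(round(ppd(54, 57226)))
```

Even this crude model shows why a small TPF change swings PPD hard: the WU credit is fixed, so PPD scales inversely with TPF.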
   
  (#107)
J_J_B
Master Guru
 
 
Videocard: EVGA: 2xSLI 580HC, 480HC
Processor: Intel Core i7 965 @ 4.1
Mainboard: EVGA X58 Classifed 3-way
Memory: 12GB Corsair DDR3-1600
Soundcard: (built-in)
PSU: BFG EX1200, VisionTek 450
Default 12-30-2010, 15:26 | posts: 159 | Location: NH

That sounds like a plan PantherX. The overnight test yielded a TPF of 56 minutes - even worse than before. I've done quite a bit of testing now and do not believe it will be possible to reliably fold on more than one GPU while completing bigadv projects within the 4-day deadline on this 4-core CPU - even if it is running at 4.1 GHz. I suspect a 6-core CPU is a requirement for bigadv when folding on more than one GPU.

For the time being I have halted all GPU clients, set the SMP2 priority back to "low", increased the SMP2 threads to 8, and added the "oneunit" flag. This should allow the current WU to complete well within the deadline. CPU usage is at 97% and the TPF is currently 44 minutes. Hmm... doesn't make sense, does it? I think something was wrong. I have rebooted the computer and will check the TPF again in a little bit. I'm now wondering if something was causing the TPF to cap out.

<Addition> TPF was at 41 min after the reboot. I read about the SMP2 beta client not scaling well with even-numbered -smp flags, so I'm trying again on 7 threads now, with no GPU clients running. If the TPF is not on the order of 34 min, as it was before the addition of the GTX-580 cards, then my only conclusion is that something about my system has changed that is impacting the TPF in a very negative way. Apart from installing the GTX-580 driver, Metro 2033 via Steam, and the Steam client itself, there have been no other changes.

To be clear and ensure I understand what I should be doing to back away from bigadv work units... I will need to replace the "bigadv" flag with the "advmethods" flag. Is this correct?



Last edited by J_J_B; 12-30-2010 at 16:38.
   
  (#108)
PantherX
Folding@Home Team Leader
 
Videocard: Dual ASUS Turbo GTX 1070
Processor: Core i7-6700K @ 4.6 {1.4}
Mainboard: ROG Maximus VIII Formula
Memory: 32 GiB @ DDR4-2,700 MHz
Soundcard: ASUS Maximus VIII Formula
PSU: ST75F-GS 850Watts
Default 12-30-2010, 16:40 | posts: 1,357

You can add the -advmethods flag, which gives you access to late-stage beta WUs. If you run without the -advmethods flag, you will be assigned normal WUs.

Instead of -smp 8 or -smp, have you tried -smp 7 so you can finish the current WU? The reason is that any CPU cycles "stolen" from FahCore_a3 will cause a non-linear slowdown in the WU processing.
   
  (#109)
J_J_B
Master Guru
 
 
Videocard: EVGA: 2xSLI 580HC, 480HC
Processor: Intel Core i7 965 @ 4.1
Mainboard: EVGA X58 Classifed 3-way
Memory: 12GB Corsair DDR3-1600
Soundcard: (built-in)
PSU: BFG EX1200, VisionTek 450
Default 12-30-2010, 18:49 | posts: 159 | Location: NH

Yes (-smp 7)... just got to the office and the TPF was about 45 minutes. Something in my system has changed software/driver-wise that is no longer permitting even a half-way decent TPF on this quad-core... and this was with no GPUs running. So much for my comment about a 6-core CPU perhaps being necessary. Kind of weird... I was even contemplating going back to running only one GPU with the SMP2 client at -smp 7, as before, because (let's face it) the points-versus-wattage ratio is killed by a couple of high-end GPUs, but tests are showing there is no going back. If I were to blame something I might say the GTX-580 driver is responsible, but I say that without enough proof to back anything up at this point. I'm certainly not going to uninstall the GTX-580 drivers. I'll just have to wait for new drivers to come out, upgrade, and try testing again later on.

With all the stops, restarts, and experimentation there is no longer any chance of completing the current bigadv work unit so I have deleted it. I'll try running normal work units for now, unless you think there is any point advantage to folding those late-stage beta work units.

Thanks for the feedback!


   
  (#110)
iancook221188
Maha Guru
 
Videocard: GTX 670 SLI / GTX 460 SLI
Processor: 2600k4.5 / i7 970 4.4 WC
Mainboard: X68 UD4 / X58A UD5
Memory: 16GB / 24GB
Soundcard:
PSU: TX850 / AX850
Default 12-30-2010, 20:19 | posts: 1,716 | Location: uk

hey JJ, i had to do a lot of research into what GPU3 was doing on my CPU. with Process Lasso you can set GPU3 to one core/thread, but with two GPU3 clients they will both go to that core/thread. now with 3 GPU3 clients, unless you can find a way to force them to use different threads/cores, windows might do it, and it might pick a core/thread that the SMP is using

now the other big thing i found was that when running two GPU3 clients it would sometimes use more than one core even though the core was not at 100% usage. confused yet lol?

well, what i did was set GPU3 to core/thread 12, but like i just said up there it was also using core/thread 7, but core/thread 11 was not being 100% used, so i set all my GPU3 clients to use core/thread 7 and all the other core/threads to smp11. that meant the SMP was using 0,1,2,3,4,5,6,8,9,10,11,12 and the two GPU3 clients were using core/thread 7

so what i am saying is that running 3 GPU3 clients sounds very difficult, and on a quad core with 8 threads it will take some planning, BUT it is doable

hope this helps
and i hope you get what i am saying
   
  (#111)
J_J_B
Master Guru
 
 
Videocard: EVGA: 2xSLI 580HC, 480HC
Processor: Intel Core i7 965 @ 4.1
Mainboard: EVGA X58 Classifed 3-way
Memory: 12GB Corsair DDR3-1600
Soundcard: (built-in)
PSU: BFG EX1200, VisionTek 450
Default 12-31-2010, 00:30 | posts: 159 | Location: NH

Thanks Iancook,

I currently have the SMP2 client folding a normal WU with -smp 6 and I'm seeing a production rate of 6870 PPD (TPF = 10min 49sec). I have the -oneunit flag set so I can try running a late-stage beta WU via the -advmethods flag tomorrow to see what PPD those work units give. The SMP2 client with -smp 6 does seem to be the ticket at the moment since I can run my virtual machine and keep the three GPU3 clients nearly fully utilized. I've found that SLI on the two GTX-580s must be disabled and the GTX-480 must be "dedicated" to PhysX in the nVidia console in order to ensure maximum usage of all three GPUs for folding. These actions should not be necessary because the "CUDA - GPUs" option in the nVidia console was meant to avoid this, as well as the old need to connect a monitor to each GPU. Well, at least that last requirement is gone.

Regarding your comments, I think you are referencing the following utility. Is this correct?
http://download.cnet.com/Process-Las...-10823362.html

If I read you correctly, you used Process Lasso to prioritize the usage of core/thread 7 by your two GPU3 clients after discovering that they were predominantly using core/thread 11 plus a little bit of 7 (even though 11 was not fully utilized). You did this because Windows seemed to want to use core/thread 7 for the GPU3 clients... so why not force it. You then had Process Lasso prioritize the SMP2 client (with the -smp 11 flag) to use the other 11 cores/threads.

Did I get that right? Is it Process Lasso that is "reining" in the clients to use specific cores/threads, and not some command-line options (flags) on the clients?



Last edited by J_J_B; 12-31-2010 at 00:34.
   
  (#112)
iancook221188
Maha Guru
 
Videocard: GTX 670 SLI / GTX 460 SLI
Processor: 2600k4.5 / i7 970 4.4 WC
Mainboard: X68 UD4 / X58A UD5
Memory: 16GB / 24GB
Soundcard:
PSU: TX850 / AX850
Default 12-31-2010, 01:34 | posts: 1,716 | Location: uk

yep, you got it in one. it's just funny behaviour i've been seeing when running two GPU3 clients
it was slowing my SMP/bigadv down. my CPU is normally utilizing 95-98% total. i think i had to go 4GHz plus to get the GPUs utilizing 99%; any lower, like 3.8, and one of the two would only be working at 88%

yep, Process Lasso, plus only the normal flags like -smp 11 and so on
   
  (#113)
J_J_B
Master Guru
 
 
Videocard: EVGA: 2xSLI 580HC, 480HC
Processor: Intel Core i7 965 @ 4.1
Mainboard: EVGA X58 Classifed 3-way
Memory: 12GB Corsair DDR3-1600
Soundcard: (built-in)
PSU: BFG EX1200, VisionTek 450
Default 12-31-2010, 01:52 | posts: 159 | Location: NH

Cool - thanks for the Process Lasso tips! I'll look into it next week when I have some time.

Of course, I still have another problem in the works since my SMP2 production rate has gone way down with the install of these GTX-580 cards... even with no GPU3 clients running, a fresh reboot, and all extraneous services terminated.


   
  (#114)
J_J_B
Master Guru
 
 
Videocard: EVGA: 2xSLI 580HC, 480HC
Processor: Intel Core i7 965 @ 4.1
Mainboard: EVGA X58 Classifed 3-way
Memory: 12GB Corsair DDR3-1600
Soundcard: (built-in)
PSU: BFG EX1200, VisionTek 450
Default 12-31-2010, 16:46 | posts: 159 | Location: NH

My SMP2 rate went from about 6700 to 9700 PPD in switching from normal to late-stage beta work units (-advmethods flag), so I am sticking with those for now. This is with -smp 6. The three GPU3 clients are running at 100% and only drop in usage temporarily when my virtual machine boots up. GPU usage returns to 100% once the VM has finished loading and stabilized. So I'm happy for now, and the system will be folding without interruption through the weekend.

Total PPD is about 62500 with current work units.

I am still baffled by the drop in SMP2 performance as a result of merely installing GTX-580 drivers. I should not have had to take a 17000 point cut on my SMP2 performance (bigadv) without any GPU3 clients running. It will be interesting to see if the new drivers that nVidia is said to be releasing in early January will improve the situation. I currently have two separate sets of drivers installed... 260.xx for the GTX-480 and 263.xx for the GTX-580s. I believe the upcoming driver release will support both cards so perhaps things will get better.

The only other explanation I can think of, crazy or not, is some slowdown related to populating two more PCIe slots.

Currently running with the case's side cover off so CPU core temperatures top out at 74 C rather than 80 C. The GPU cooling loop is heating things up inside the case. I think I need to add a fan or two to the side panel for increased air flow, something the 800D case is not known for.

LOL - If I asked you to guess what is causing the most noise in this system you would likely guess incorrectly. It's the BFG EX1200 PSU. I'm drawing about 950 watts from that PSU at the moment and it only takes a few minutes for its fan to whir up high.



Last edited by J_J_B; 12-31-2010 at 18:38.
   
  (#115)
iancook221188
Maha Guru
 
Videocard: GTX 670 SLI / GTX 460 SLI
Processor: 2600k4.5 / i7 970 4.4 WC
Mainboard: X68 UD4 / X58A UD5
Memory: 16GB / 24GB
Soundcard:
PSU: TX850 / AX850
Default 12-31-2010, 17:59 | posts: 1,716 | Location: uk

here is the new/leaked driver, 263.14
http://en.expreview.com/2010/12/30/n...ked/13501.html

as far as folding goes, even PCIe x4 won't make a difference, so i don't think it would be that. but remember bigadv should really be run on an 8 core/thread CPU; with an overclock you can use smp7. smp6 might just be too little horsepower at whatever clock speed you give it

sounds like you're pumping out some PPD now
   
  (#116)
J_J_B
Master Guru
 
 
Videocard: EVGA: 2xSLI 580HC, 480HC
Processor: Intel Core i7 965 @ 4.1
Mainboard: EVGA X58 Classifed 3-way
Memory: 12GB Corsair DDR3-1600
Soundcard: (built-in)
PSU: BFG EX1200, VisionTek 450
Default 12-31-2010, 19:53 | posts: 159 | Location: NH

Wow... thanks for the find Iancook! I've downloaded the driver and it does indeed unify all nVidia cards, including the 580 and 480. I'll rip out the old drivers on Monday, install this new one, and then monitor the SMP2 TPF for improvements. If the TPF improves then I'll try pulling down a bigadv work unit again.

I'm currently seeing a TPF of 4min 17sec on P6077.


   
  (#117)
PantherX
Folding@Home Team Leader
 
Videocard: Dual ASUS Turbo GTX 1070
Processor: Core i7-6700K @ 4.6 {1.4}
Mainboard: ROG Maximus VIII Formula
Memory: 32 GiB @ DDR4-2,700 MHz
Soundcard: ASUS Maximus VIII Formula
PSU: ST75F-GS 850Watts
Default 12-31-2010, 23:12 | posts: 1,357

Strange, on my setup (-smp 7) with 1 GTX 260/216 for P6077:
Min. Time / Frame : 00:03:21 - 15,216.22 PPD
Avg. Time / Frame : 00:03:26 - 14,665.61 PPD

If you have some free time, I suggest that you look a little deeper into what is causing this slow-down.
   
  (#118)
ariskar
Master Guru
 
Videocard: Palit GTX 980 Super Jetst
Processor: i7 5960x
Mainboard: Asus Rampage 5 Extreme
Memory: 16GB DDR4 2400Mhz CL12
Soundcard: SB Z - HK Soundsticks 3
PSU: EVGA Supernova P2 1000W
Default 01-01-2011, 14:20 | posts: 193 | Location: London, UK

480 dedicated to PhysX

that's so much overkill, huh?


   
  (#119)
J_J_B
Master Guru
 
 
Videocard: EVGA: 2xSLI 580HC, 480HC
Processor: Intel Core i7 965 @ 4.1
Mainboard: EVGA X58 Classifed 3-way
Memory: 12GB Corsair DDR3-1600
Soundcard: (built-in)
PSU: BFG EX1200, VisionTek 450
Default 01-02-2011, 23:57 | posts: 159 | Location: NH

Definitely, but I decided to leave it in the system for folding.


   
  (#120)
J_J_B
Master Guru
 
 
Videocard: EVGA: 2xSLI 580HC, 480HC
Processor: Intel Core i7 965 @ 4.1
Mainboard: EVGA X58 Classifed 3-way
Memory: 12GB Corsair DDR3-1600
Soundcard: (built-in)
PSU: BFG EX1200, VisionTek 450
Default 01-03-2011, 01:10 | posts: 159 | Location: NH

PantherX:

That is what I mean. It's as though the install of the two 580s hurt my TPF values on the SMP2 client, even with the GPU3 clients taken out of the picture and shut down. This unexpected TPF regression resulted in my having to exclude bigadv projects for the time being.

I noticed that even though the 480 and 580 use different official drivers, it appears as though all three cards are using the 580 drivers at the moment. Perhaps this might be a cause, but it still doesn't sit right with me to blame video drivers for slowing down something that isn't related to video in any way (the SMP2 client). The install of the two 580 cards, their driver, the Steam client, and Metro 2033 are the only changes I made to the system that could have caused the slowdown in the TPF.

What I'll try doing tomorrow is uninstalling both sets of drivers and then re-installing the driver for the 480 only. I'll then use just the 480 (leaving the 580s disabled) and check the SMP2 TPF again to see if it has improved. That would be an indicator that the 580 driver, or perhaps its use by the 480 when the 580 driver is installed over the 480 driver, is the cause.

I'll report back when I have some results.



Last edited by J_J_B; 01-03-2011 at 01:24.
   
  (#121)
J_J_B
Master Guru
 
 
Videocard: EVGA: 2xSLI 580HC, 480HC
Processor: Intel Core i7 965 @ 4.1
Mainboard: EVGA X58 Classifed 3-way
Memory: 12GB Corsair DDR3-1600
Soundcard: (built-in)
PSU: BFG EX1200, VisionTek 450
Default 01-05-2011, 14:26 | posts: 159 | Location: NH

Well... lots of testing here, but I never did find the cause of the general slowdown in the SMP2 client. I'm currently running the 263.14 drivers on the GPUs but there is no change.

Process Lasso has improved things quite a bit (nice find Iancook!). I have set the default CPU affinity for "fahcore_a3.exe" (the SMP2 client) to use "cores" 0,1,2,3,4,5 and the affinity for "fahcore_15.exe" (the GPU3 client) to use cores 6 and 7. This makes things nice and neat - I instantly saw cores 0 through 5 go to 100% usage. The GPU3 clients tend to eat about 60% of core 6 and 75% of core 7. I also run a virtual machine and found that I had to reconfigure the SMP2 client to use only 95% of requested CPU resources in order to maintain responsiveness in the VM. This is with the SMP2 priority set to "low".
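The core-pinning described here can also be scripted instead of (or alongside) Process Lasso. A hypothetical sketch in Python: the helper just builds the affinity bitmask Windows uses (bit N set = core N allowed); actually applying it to a running fahcore process would take something like the third-party psutil package's `cpu_affinity()`, which is an assumption of this sketch, not part of any folding client.

```python
def affinity_mask(cores):
    """Build a CPU affinity bitmask from a list of core indices."""
    mask = 0
    for c in cores:
        mask |= 1 << c   # bit c set means core c is allowed
    return mask

# SMP2 (fahcore_a3) pinned to cores 0-5, GPU3 (fahcore_15) to 6-7:
smp_mask = affinity_mask([0, 1, 2, 3, 4, 5])   # 0b00111111 = 63
gpu_mask = affinity_mask([6, 7])               # 0b11000000 = 192
print(smp_mask, gpu_mask)                      # 63 192

# Applying it (hypothetical, requires psutil and the processes running):
#   import psutil
#   for p in psutil.process_iter(["name"]):
#       if p.info["name"] == "fahcore_a3.exe":
#           p.cpu_affinity([0, 1, 2, 3, 4, 5])
```

The masks are disjoint by construction, which is the whole point: the SMP client and the GPU feed threads can no longer contend for the same cores.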

I've narrowed in on a maximum set of overclocks for my GTX-580s. I have the cores at 925 MHz and the memory at 2150 MHz. I could probably get the memory higher but I'm not going to bother. The second of the two cards is a little less stable than the other. To achieve stability the voltage has been increased from EVGA's standard for the 580 HC of 1.088 V to 1.125 V. The allowable max that EVGA has set is 1.150 V. I found that 1.150 V was not enough to make the second card stable at a core speed of 950 MHz, so I pulled both cores back to 925 and I'll keep the extra voltage in reserve in case I start erroring out work units when Stanford gives us some high atom-count WUs again.

I noticed that the PPD of the 580s was not increasing as it should as I overclocked. It turns out that I need to run GPU-Z with the "/GTX580OCP" flag in order to turn off nVidia's power management functions; otherwise the PPD did not go up in a sensible fashion... I actually saw a very slight decrease during some tests.

So here is the issue with the 580 overclocks... each 580 is pulling about 330 watts at the above clocks. When we do get issued one of those high atom-count work units from Stanford, I am predicting that the wattage pulled by each card could go as high as 450 watts. Those are FurMark-like extreme-burning numbers and I don't want to kill these cards, so I'm going to pull the cards back to 900 MHz on the cores and play with that for a while. It is conceivable that I'll need to return the cards to default values (850 MHz on the cores) when such a WU comes down, though.

About 1000 watts are currently being drawn from the main PSU. Only about 80 watts is going to the GTX-480, which is on its own auxiliary PSU. I need to pick up a second power meter from the home improvement store so I can monitor power draw from both PSUs.
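Projecting that load forward is simple arithmetic, and it shows why the high atom-count WUs are worth worrying about. A sketch using the figures from this post (1000 W measured on the main PSU with the two 580s at roughly 330 W each; the 450 W per card is the estimate above, not a measurement):

```python
def projected_psu_load(measured_total, per_card_now, per_card_peak, cards=2):
    """Swap each card's current draw for its projected peak draw."""
    rest_of_system = measured_total - cards * per_card_now
    return rest_of_system + cards * per_card_peak

# 1000 W today with two 580s at ~330 W each; at 450 W per card the
# main EX1200 (a 1200 W unit) would be pushed past its rating:
peak = projected_psu_load(1000, 330, 450)
print(peak, peak > 1200)   # 1240 True
```

That 1240 W projection is consistent with the later comment that the PSU would probably shut down before the cards could be run like that for long.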

It seems that the GPU cooling loop has adequate capacity, in that the temperatures of the 3 GPUs are maxing out at 51 C under their current Stanford work-unit loads. I designed the loop on the recommendation of other forum members of one 120mm radiator per GPU, and then added a fourth for good measure, since that rule dates back to the GTX-295 days and I figured Fermi would draw more power. In this case there are two 120 rads and one 240 rad for a combined 480mm of radiator.

The problem now is that a lot of the GPU heat passes through the case, which decreases the cooling efficiency of the CPU/MB radiator, which exhausts air out through the top of the case. I'm currently running with the side cover off, which nets me an extra 10 degrees of cooling on the CPU and probably 5 on the GPUs. I'm certain now that I'm going to cut holes for two 120mm fans in the top-right area of the side panel's plexiglass. I have an unused channel on my fan controller that can power those additional fans. If two fans don't help, I may need to add a third. The Obsidian 800D is a great case for modders, but its stock airflow is quite poor.



Last edited by J_J_B; 01-05-2011 at 16:12.
   
  (#122)
iancook221188
Maha Guru
 
Videocard: GTX 670 SLI / GTX 460 SLI
Processor: 2600k4.5 / i7 970 4.4 WC
Mainboard: X68 UD4 / X58A UD5
Memory: 16GB / 24GB
Soundcard:
PSU: TX850 / AX850
Default 01-05-2011, 17:50 | posts: 1,716 | Location: uk

sounds like you're getting to the bottom of the problems anyway, and the credit needs to go to Panther, he suggested that to me

did not think of that. other folders have been finding the 580 overcurrent protection (GTX580OCP) kicking in too. wow, 330W for one 580, and it could go up to 450W
   
  (#123)
J_J_B
Master Guru
 
 
Videocard: EVGA: 2xSLI 580HC, 480HC
Processor: Intel Core i7 965 @ 4.1
Mainboard: EVGA X58 Classifed 3-way
Memory: 12GB Corsair DDR3-1600
Soundcard: (built-in)
PSU: BFG EX1200, VisionTek 450
Default 01-05-2011, 22:09 | posts: 159 | Location: NH

Yup - folding really stresses our cards, and with all the processing capability a 580 has, a higher-binned GPU capable of remaining stable at high overclocks could damage itself with overcurrent protection disabled! I think my PSU would shut down before the cards were run like that for any significant length of time, but I'd rather not find out.

The 450 watts is an estimate but I think it is a reasonable one. I know I can hit that right now with a Furmark Extreme-Burning test, which is why I quickly stopped Furmark'ing a few weeks ago when I realized just how extreme a test it really is.

I know from Stanford's last high atom-count WU that my GTX-480 pulls 350 watts while folding those monsters.


   
  (#124)
J_J_B
Master Guru
 
 
Videocard: EVGA: 2xSLI 580HC, 480HC
Processor: Intel Core i7 965 @ 4.1
Mainboard: EVGA X58 Classifed 3-way
Memory: 12GB Corsair DDR3-1600
Soundcard: (built-in)
PSU: BFG EX1200, VisionTek 450
Default 01-06-2011, 15:52 | posts: 159 | Location: NH

Just noticed that my SMP2 TPF was down to 3min 51sec while folding a P6076 work unit. This is with -smp 6, requested CPU usage of 95%, and priority set to "low", with the three GPU3 clients folding and the virtual machine running.

Perhaps the poor TPF values I've been experiencing on the SMP2 client were simply related to the particular work units that were being pulled down?

In any case, the SMP2 PPD is reading 12385. That seems more on-par with what PantherX was reporting.


   
  (#125)
iancook221188
Maha Guru
 
Videocard: GTX 670 SLI / GTX 460 SLI
Processor: 2600k4.5 / i7 970 4.4 WC
Mainboard: X68 UD4 / X58A UD5
Memory: 16GB / 24GB
Soundcard:
PSU: TX850 / AX850
Default 01-06-2011, 23:00 | posts: 1,716 | Location: uk

yeah, look out for projects 6701/2, they are very slow and low PPD on the SMP. the bigadv 2684 is also a slow one. all the others should give about the same
   

Powered by vBulletin®
Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.
vBulletin Skin developed by: vBStyles.com
Copyright (c) 1995-2014, All Rights Reserved. The Guru of 3D, the Hardware Guru, and 3D Guru are trademarks owned by Hilbert Hagedoorn.