Well... lots of testing here, but I never did find the cause of the general slowdown in the SMP2 client. I'm currently running the 263.14 drivers on the GPUs, but there is no change. Process Lasso has improved things quite a bit (nice find, Iancook!). I have set the default CPU affinity for "fahcore_a3.exe" (the SMP2 client) to use "cores" 0 through 5 and the affinity for "fahcore_15.exe" (the GPU3 client) to use cores 6 and 7. This makes things nice and neat: I instantly saw cores 0 through 5 go to 100% usage. The GPU3 clients tend to eat about 60% of core 6 and 75% of core 7. (For anyone curious, I've put a quick sketch of the equivalent affinity masks at the end of this post.) I also run a virtual machine and found that I had to reconfigure the SMP2 client to use only 95% of requested CPU resources in order to maintain responsiveness in the VM. This is with the SMP2 priority set to "low".

I've narrowed in on a maximum set of overclocks for my GTX-580s: cores at 925 MHz and memory at 2150 MHz. I could probably get the memory higher, but I'm not going to bother. The second of the two cards is a little less stable than the first. To achieve stability, the voltage has been increased from EVGA's standard 1.088 V for the 580 HC to 1.125 V; the allowable maximum EVGA has set is 1.150 V. I found that even 1.150 V was not enough to make the second card stable at a core speed of 950 MHz, so I pulled both cores back to 925 MHz, and I'll keep the extra voltage in reserve in case I start erroring out work units when Stanford gives us some high atom-count WUs again.

I noticed that the PPD of the 580s was not increasing as it should as I overclocked. It turns out that I need to run GPU-Z with the "/GTX580OCP" flag in order to turn off nVidia's power management functions; otherwise the PPD did not go up in a sensible fashion... I actually saw a very slight decrease during some tests.

So here is the issue with the 580 overclocks: each 580 is pulling about 330 watts at the above clocks. When we do get issued one of those high atom-count work units from Stanford, I'm predicting that the wattage pulled by each card will go up to as high as 450 watts (rough math at the end of the post). Those are FurMark-like extreme burning numbers, and I don't want to kill these cards, so I'm going to pull the cards back to 900 MHz on the cores and play with that for a while. It's conceivable that I'll need to return the cards to default values (850 MHz on the cores) when such a WU comes down, though. The main PSU is currently pulling 1000 watts. Only about 80 watts of that total is going to the GTX-480, which is on its own auxiliary PSU. I need to pick up a second power meter from the home improvement store so I can monitor the power draw of both PSUs.

The GPU cooling loop seems to have adequate capacity, in that the temperatures of the three GPUs max out at 51 C under their current Stanford work-unit loads. I sized the loop on other forum members' recommendation of one 120 mm radiator per GPU, then added a fourth 120 mm worth for good measure, since that rule dates back to the GTX-295 days and I figured Fermi would draw more power. In this case that's two 120-type rads and one 240 rad, for a combined 480 mm of radiator length (the arithmetic is at the end of the post too). The problem now is that a lot of the GPU heat passes through the case, which decreases the cooling efficiency of the CPU/MB radiator, which exhausts air out through the top of the case. I'm currently running with the side cover off, which nets me an extra 10 degrees of cooling on the CPU and probably 5 degrees on the GPUs.
I'm certain now that I'm going to cut holes for two 120 mm fans in the top-right area of the side panel's plexiglass. I have an unused channel on my fan controller that can power those additional fans. If two fans don't help, I may need to add a third. The Obsidian 800D is a great case for modders, but its stock airflow is quite poor.
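As promised, here's a rough sketch of what those Process Lasso affinity settings work out to as bitmasks. Process Lasso does all of this through its GUI, so the Python below is purely illustrative; the core lists are the ones from my setup, and bit N of the mask corresponds to logical core N.

```python
def affinity_mask(cores):
    """Build a CPU affinity bitmask: bit N set = process may run on core N."""
    mask = 0
    for core in cores:
        mask |= 1 << core
    return mask

smp2_mask = affinity_mask([0, 1, 2, 3, 4, 5])  # fahcore_a3.exe (SMP2 client)
gpu3_mask = affinity_mask([6, 7])              # fahcore_15.exe (GPU3 client)

print(hex(smp2_mask))  # 0x3f
print(hex(gpu3_mask))  # 0xc0
```

If you'd rather not run extra software, I believe you can launch with the same mask from a stock Windows command prompt via "start /affinity 0x3F", though part of the appeal of Process Lasso is that it re-applies the setting automatically.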
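And the rough math behind my 450-watt worry. This is strictly back-of-the-envelope: it assumes dynamic power dominates and scales as clock times voltage squared, which is a simplification, and the 330 W and 450 W figures are my own meter reading and my own guess, not anything published.

```python
# First-order GPU power scaling: P is roughly proportional to f * V^2.
stock_mhz, stock_v = 850, 1.088   # EVGA 580 HC defaults
oc_mhz, oc_v = 925, 1.125         # where I've settled for now

scale = (oc_mhz / stock_mhz) * (oc_v / stock_v) ** 2
print(f"overclock alone: ~{(scale - 1) * 100:.0f}% more power")  # ~16%

# So the jump I'm afraid of (330 W -> 450 W per card) would come mostly
# from a high atom-count WU loading the card harder, not from the clocks:
print(f"feared load jump: ~{(450 / 330 - 1) * 100:.0f}%")        # ~36%
```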
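Finally, the radiator arithmetic, for anyone sizing a similar loop. Keep in mind the one-120-per-GPU figure is just the old forum rule of thumb from the GTX-295 era, not a hard spec.

```python
gpus = 3
rads_mm = [120, 120, 240]  # two single-120 rads plus one 240 in my loop

print(sum(rads_mm), "mm installed")      # 480 mm
print(gpus * 120, "mm by the old rule")  # 360 mm -> one 120's worth spare for Fermi
```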