3D, 1080p and very high settings in ArmA III - GTX 660 Ti SLI or GTX 670 SLI?

Discussion in 'Videocards - NVIDIA GeForce' started by finguide, Sep 23, 2012.

  1. thatguy91

    thatguy91 Ancient Guru

    Messages:
    6,643
    Likes Received:
    99
    GPU:
    XFX RX 480 RS 4 GB
    Ivy is better than Sandy clock-for-clock, but doesn't clock as high. An Ivy Bridge at 4400-4500 MHz is at least as good as a Sandy Bridge at 4800 MHz, if not better.

    I highly doubt an Ivy Bridge can do 4700 MHz without getting hot. The issue is the TIM Intel uses; it simply can't transfer heat quickly enough. LN2 and the like are a different story though: the extreme cold overcomes the poor TIM somewhat.

    In terms of ARMA III, the actual topic of discussion, I wouldn't dismiss AMD in favour of Nvidia just because of PhysX. ARMA III uses PhysX 3, which should perform better on the CPU. PhysX was deliberately made to run as badly as possible on the CPU, to make GPU performance look better by comparison.

    The problem with GPU PhysX is that it takes processing power away from graphics. If PhysX were programmed properly for the CPU, it would generally be the case that when the GPU is taxed and the CPU isn't, CPU PhysX would be faster, and vice versa. PhysX 3 is only barely 'optimised' for the CPU, if you can even call it that: it makes no use of SSE3, SSSE3, SSE4.x, AVX, XOP, or anything else that might be beneficial.
     
  2. HeavyHemi

    HeavyHemi Ancient Guru

    Messages:
    6,954
    Likes Received:
    959
    GPU:
    GTX1080Ti
    You mean PhysX was initially designed to run on a PPU, not the CPU. They didn't intentionally gimp performance on the CPU; the CPU is simply far too slow to run extensive hardware-accelerated PhysX effects. You might as well say the problem with 'X' physics engine is that it slows things down. Yes indeed, adding effects slows things down regardless of whether they run on the GPU or the CPU. I'm not sure why people who should know better still perpetuate this silly myth.
     
  3. thatguy91

    thatguy91 Ancient Guru

    Messages:
    6,643
    Likes Received:
    99
    GPU:
    XFX RX 480 RS 4 GB
    It's not a silly myth, it's fact. Until PhysX 3, PhysX was purely x86 code, with no instruction-set extensions whatsoever. If it were programmed to use whatever instructions are useful from any of those sets, it would perform significantly better. Nvidia simply didn't want PhysX to run well on the CPU, because that would massively close the performance gap.

    I'm not arguing that PhysX's type of workload can't run faster on a GPU; I'm just stating how things stand on the CPU. The simple fact that even you, yourself, run a separate card just for PhysX shows that running it on the main GPU does hinder graphical performance, since it takes GPU power away from rendering; otherwise a dedicated PhysX card would make zero performance difference.

    Sorry, but anyone saying that PhysX wasn't intentionally hindered on the CPU is either ignorant of the facts or an Nvidia fan making excuses. If you disagree, then you must argue that instruction sets, from the old MMX days through SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, XOP, and the upcoming FMA3, FMA4, and AVX2, provide no or only marginal benefit, because PhysX before PhysX 3 had NONE of these. Even PhysX 3 only uses SSE2, and only for some of its functions; it's very, very far from optimised. Why would Nvidia bother, if it meant PhysX running better on a CPU paired with an AMD GPU?
     
  4. Brendruis

    Brendruis Maha Guru

    Messages:
    1,242
    Likes Received:
    1
    GPU:
    Reference GTX 680 SLI
    No, x16 3.0 isn't possible with SLI unless you have a PLX chip or X79. I run Gen3 x8 on both cards. With just one card I could use x16.

    The 3570K runs hotter.. it has a 20.7% increase in transistor count, and on top of that Intel used TIM instead of soldering the IHS, which in the past we have proven was detrimental on some of the older Intel value chips.. In addition to all of this, it uses the new FinFET technology, which is manufactured in such a way that it cannot dissipate heat as well.

    What do you mean you test with Heaven? I've never heard of a program called Heaven for CPU testing.. there is Unigine Heaven 3.0, but if you're using that for CPU testing you are way off in left field.
     
    Last edited: Sep 29, 2012

  5. HeavyHemi

    HeavyHemi Ancient Guru

    Messages:
    6,954
    Likes Received:
    959
    GPU:
    GTX1080Ti
    LOL, check yourself, you just wrecked yourself hard. PhysX was x87 code. That alone should, if you had any sense at all, tell you where you went wrong. Then you go downhill from there. Good gawd, try your act on a less informed group.
     
  6. SLI-756

    SLI-756 Banned

    Messages:
    7,604
    Likes Received:
    0
    GPU:
    760 SLI 4gb 1215/ 6800
    Sorry i meant to say i test with cinebench 11.5.
     
  7. thatguy91

    thatguy91 Ancient Guru

    Messages:
    6,643
    Likes Received:
    99
    GPU:
    XFX RX 480 RS 4 GB
    4.4 GHz is a safe number for general use even in a quite warm room (at 100 percent CPU load). You could use 4.5 if you're not worried about temps outside of stress testing. I think the answer about 4.7 GHz at 1.312 V comes down to not maxing the CPU out. If you are worried about damaging the CPU by stress testing, then you are just hiding from the thought in the back of your mind that you are running it too fast in the first place! The TIM Intel uses simply can't transfer heat from the cores to the IHS quickly enough.

    I think they did this deliberately, not only to save the 10 cents per CPU (even versus a better TIM), but to make Haswell look better by comparison next year. My guess is that Haswell will have a proper TIM or be soldered.
     
  8. Spets

    Spets Ancient Guru

    Messages:
    3,077
    Likes Received:
    177
    GPU:
    RTX 3090
    Think you mean x87, and that's not true: SSE(2) support was introduced in PhysX 2.8.4. You're also confusing Nvidia's work with Ageia's, which was originally written for their PPUs using x87 code. If Nvidia didn't want it to run well on the CPU, then why would they spend so much money on R&D for PhysX 3 and introduce SSE support in 2.8?

    Keep in mind realtime calculations are not an easy task, and these particle effects also need to be rendered, including the shadows, textures and/or lighting they produce. No matter what, more effects = less performance, even if you have a dedicated card, but at least a dedicated card takes the pressure off your main GPU for the very difficult task of realtime physics simulation. The GPU will come out on top because once you hit the particle count that CPUs can't handle, the GPU's architecture allows it to take on far more before it hits that kind of "bottleneck".
    CPUs just can't compete in this field; it's why so many companies rely on GPUs now and only recently have been able to do large-scale calculations in realtime.

    There's no reason to go to extremes and call anyone fans. It was hindered on CPUs, but not by Nvidia: Ageia had no intention of having it run on CPUs. They were in the market with their PPUs, with the intention of being bought out.
    SSE support was introduced back in PhysX 2.8.4 but wasn't enough, which is why PhysX 3 was developed from the ground up; considering the destruction module in 3.2 runs 200-300% better on the CPU compared to 2.8.4, I'd say it's nicely optimised. Nvidia needs it to run well on CPUs because not everything is done by the GPU; certain aspects rely solely on the CPU. It also makes the PhysX SDK a better choice for developers.
    As far as GPU vs CPU goes, when the particle count climbs high there's no comparison between the two: the GPU will end up the clear performer.

    Back to the OP's question, I'd go with two 670s, because ARMA 3's large-scale maps look likely to benefit from the higher memory bandwidth.
     
    Last edited: Sep 29, 2012
  9. thatguy91

    thatguy91 Ancient Guru

    Messages:
    6,643
    Likes Received:
    99
    GPU:
    XFX RX 480 RS 4 GB
    You are right, it was PhysX 2.8.4; that said, most games predate PhysX 2.8.4. I also agree PhysX is better run on the GPU (I did say that, although not very well!). Nvidia are in the same position as Ageia was, though: it isn't in their best interest for it to work well in pure CPU mode. I suspect the sole reason PhysX 3 was made better for the CPU is to increase interest in PhysX and to head off encroachment by other physics methods built on OpenCL or DirectCompute.
     
  10. Spets

    Spets Ancient Guru

    Messages:
    3,077
    Likes Received:
    177
    GPU:
    RTX 3090
    Yeah, noticed you had it implied :p
    Whatever the reason is, though, I'm glad they rebuilt it. 3.2 performs well, and PhysX is a pretty advanced engine.
     

  11. HeavyHemi

    HeavyHemi Ancient Guru

    Messages:
    6,954
    Likes Received:
    959
    GPU:
    GTX1080Ti
    Gee, thanks for the apology. :banana:
     
  12. ---TK---

    ---TK--- Ancient Guru

    Messages:
    22,111
    Likes Received:
    2
    GPU:
    2x 980Ti Gaming 1430/7296
    LMFAO, you show an idle screen and most likely cannot run Prime95 at 4.7 GHz on air. Nice proof there :3eyes:
     
  13. ---TK---

    ---TK--- Ancient Guru

    Messages:
    22,111
    Likes Received:
    2
    GPU:
    2x 980Ti Gaming 1430/7296
    I do not think the difference is as dramatic as you say.
    Here's a 2600K vs 3770K comparison, and the 2600K is running 100 MHz slower:
    http://www.anandtech.com/bench/Product/551?vs=287
     
  14. Brendruis

    Brendruis Maha Guru

    Messages:
    1,242
    Likes Received:
    1
    GPU:
    Reference GTX 680 SLI
    Agreed, no way that is stable.. especially with that VID, unless a very high amount of LLC is applied, and then you've got some serious heat.. it's just tough to get the heat away from the chip with Ivy Bridge.. from experience :nerd:
     
  15. rflair

    rflair Don Commisso Staff Member

    Messages:
    4,142
    Likes Received:
    445
    GPU:
    5700XT
    I would take it easy with your ridicule.
     

  16. finguide

    finguide New Member

    Messages:
    5
    Likes Received:
    0
    GPU:
    Radeon HD 3400
    How powerful a PSU should I get? I think 550 W is just not enough for an OCed i5-3570K and a pair of OCed GTX 670s and all the other stuff in the box...
     
  17. Brendruis

    Brendruis Maha Guru

    Messages:
    1,242
    Likes Received:
    1
    GPU:
    Reference GTX 680 SLI
    Probably cutting it close, or not enough in the worst case.. which 550 W unit is it?
     
  18. thatguy91

    thatguy91 Ancient Guru

    Messages:
    6,643
    Likes Received:
    99
    GPU:
    XFX RX 480 RS 4 GB
    Use this calculator:
    http://extreme.outervision.com/PSUEngine

    Remember to put in everything you are likely to use. Also remember to enter the CPU overclock settings (say, 4400 MHz at 1.27 V), and add, say, 40 W extra for the two video cards' overclocks. Don't forget all the fans, USB devices, etc.

    The recommended figure is the minimum wattage your power supply should have. You should aim for some leeway: if the requirement is 550 W, get at least a 650 W PSU. On the other hand, going far too big (1200 W) isn't a good thing either. Personally I'd get a 750 W unit for the setup you listed; I like having the extra headroom.
     
