AMD 5 Series or Nvidia 'Fermi'

Discussion in 'Videocards - AMD Radeon' started by doh_nut, Dec 14, 2009.

  1. Passion Fruit

    Passion Fruit Guest

    Messages:
    6,017
    Likes Received:
    6
    GPU:
    Gigabyte RTX 3080
    I wouldn't even consider running a 5970 on a 550W PSU.

    His processor has a likely TDP of 95W; the 5970 has a peak power draw of 294W. Add the other pieces of hardware and you're likely reaching between 410 and 430W peak load.

    If you put a 550W PSU under that stress, it'll probably buckle under the load and take other things with it, especially if it's not a good brand. The closer a PSU gets to its limit, the less efficient it is at providing clean power.
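    For a rough sanity check, here is a back-of-the-envelope version of that sum (the figures are the estimates quoted above plus an assumed ~30W for the rest of the system, not measured values):

    ```python
    # Rough peak-load estimate using the figures quoted above (estimates, not measurements).
    cpu_tdp     = 95    # W, likely CPU TDP
    gpu_peak    = 294   # W, HD 5970 peak board power
    other_parts = 30    # W, assumed: motherboard, RAM, drives, fans
    psu_rating  = 550   # W

    peak_load = cpu_tdp + gpu_peak + other_parts   # ~419 W
    headroom  = psu_rating - peak_load             # ~131 W
    print(f"Peak load ~{peak_load} W ({peak_load / psu_rating:.0%} of the PSU rating), "
          f"~{headroom} W of headroom")
    ```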
     
  2. sajibjoarder

    sajibjoarder Master Guru

    Messages:
    607
    Likes Received:
    1
    GPU:
    gtx 1080 FE
    What is DirectX 10.1, and what is the difference between 10 and 10.1? Almost nothing.

    We need PhysX; you'll see why.
     
  3. LinkDrive

    LinkDrive Ancient Guru

    Messages:
    4,673
    Likes Received:
    0
    GPU:
    Toaster
    Until PhysX becomes a standard, which it won't, game devs will continue to write their own physics engines, and those will only get better once DX11 starts to take off.

    A 550 watt PSU will struggle, at best, with a 5970 in there.
     
  4. Liranan

    Liranan Ancient Guru

    Messages:
    2,466
    Likes Received:
    0
    GPU:
    MSI 6870 1000/1200
    PhysX is dead; MS just hammered the last nail into its coffin with DX11. It was never a success because it was proprietary and nVidia were overcharging for it.

    I hate manufacturers who lock things to their hardware or software. I don't care whether it's nVidia, AMD, MS, Intel, Apple or any other third-rate manufacturer. They greatly frustrate me because there's no need for it, and their greed only hurts the customer.

    For example, Apple locking their hardware to iTunes only, and the stupidity of synchronising their music players and phones to one PC only. It's pure greed and it hurts their customers. I don't want this to turn into a pro- or anti-Apple discussion, but they're just famous for it.

    Fortunately, MS have made it easier for developers to create software that takes advantage of both nVidia and AMD hardware and will allow Havok and PhysX to run on either vendor's video cards. This is a great step forward for us all and it should be applauded.
     

  5. sajibjoarder

    sajibjoarder Master Guru

    Messages:
    607
    Likes Received:
    1
    GPU:
    gtx 1080 FE
    We need physics processing on the GPU, but that doesn't mean we need to use PhysX (it could be anything). That's actually what I meant in my previous post: "we need physics processing on the GPU". Tweak-2 said that CPUs can do it, so we don't need physics processing on the GPU. But there are a lot of things the CPU already has to do (like artificial intelligence), so if the GPU handles the physics we get a lot of extra processing power back, and near-future games will take advantage of that.

    But why are we stuck on PhysX? Haven't you heard about DirectCompute? Using that technology (or API) we can do physics processing on the GPU, and with the launch of DX11 it has already become a standard.
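    To make that concrete, here is a minimal sketch (my own illustration, not code from DX11, DirectCompute or this thread) of the kind of embarrassingly parallel per-particle update that GPU compute APIs are built to offload. It is written with NumPy on the CPU purely for readability; in a compute shader each particle would be handled by its own thread.

    ```python
    # Illustrative only: a data-parallel physics step of the kind that maps
    # naturally onto GPU compute APIs. One "thread" of work per particle.
    import numpy as np

    def integrate(pos, vel, dt=1.0 / 60.0, gravity=(0.0, -9.81, 0.0)):
        """Semi-implicit Euler step for a batch of independent particles."""
        vel = vel + np.asarray(gravity) * dt   # update velocities first
        pos = pos + vel * dt                   # then advance positions
        return pos, vel

    # Example: 10,000 particles updated in one vectorised call; offloading this
    # kind of work to the GPU is what frees the CPU for things like AI.
    pos = np.zeros((10_000, 3))
    vel = np.random.uniform(-1.0, 1.0, size=(10_000, 3))
    pos, vel = integrate(pos, vel)
    ```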
     
  6. Goliath182

    Goliath182 Master Guru

    Messages:
    342
    Likes Received:
    0
    GPU:
    Sapphire 5970 Redline
    More like we need to move to CGPUs. IMO, PhysX is just coded so it runs terribly on CPUs. Remember the first PhysX demo from Ageia? When it was hacked, the demo ran completely fine on any CPU, even with the supposedly "advanced" physics. Then a couple of months later the demo looked the same, but if you used the CPU you would only get ~2 FPS.
     
  7. Liranan

    Liranan Ancient Guru

    Messages:
    2,466
    Likes Received:
    0
    GPU:
    MSI 6870 1000/1200
    I have no idea where sajibjoarder is getting his information from, but his fanboy-tainted posts are irritating. He obviously doesn't know much about the evolution of hardware and definitely doesn't remember the PhysX cards.

    While I don't like DX, and would prefer developers to switch to OGL/OCL, the fact that Direct Compute is now available to all is a great thing.
     
    Last edited: Dec 19, 2009
  8. Sever

    Sever Ancient Guru

    Messages:
    4,825
    Likes Received:
    0
    GPU:
    Galaxy 3GB 660TI
    IIRC, the main differences between DX10 and DX10.1 were that DX10.1 had compulsory 2x AA, compulsory anisotropic filtering, and a bunch of other things. They make games look prettier but can cause some slowdown on low-end systems, which means it wasn't much of an improvement, since you could turn those options on in most games anyway.

    Hopefully when AMD starts launching its Fusion products (I believe they said 2011?) things will start getting interesting, with no GPU-CPU latency since both are integrated on the same chip, and more processing power, so there's no need for a dedicated PhysX card. But something tells me it's going to be one expensive chip and we'll end up CrossFiring it anyway. Either that, or it will be canned like Intel's Larrabee.

    Should be interesting to see how developers will take advantage of the extras that DX11 has to offer.
     
  9. sajibjoarder

    sajibjoarder Master Guru

    Messages:
    607
    Likes Received:
    1
    GPU:
    gtx 1080 FE
    I won't say I know everything, but brother, I think you are the one who doesn't know about the evolution of hardware. Before GeForce, even before the Riva TNT, nVidia tried to bypass DirectX and OpenGL (with the NV1 and NV2) and was unsuccessful. If you want to stay in the industry, you either have to dominate more than 60% of the market or you have to follow the standards. PhysX is not a standard. ATI has Havok. But sooner or later there will be a standard for physics processing, and GPGPU (which is supported in DX11) is just the start.

    By the way, try to take the wider view. Don't get stuck in a three-to-four-year timeframe. I've been following the industry since 1996; I've seen a lot.
     
  10. sajibjoarder

    sajibjoarder Master Guru

    Messages:
    607
    Likes Received:
    1
    GPU:
    gtx 1080 FE
    In DX10, can't you use 2x anti-aliasing and 2x anisotropic filtering anyway? Making an optional setting compulsory doesn't make a big difference. The main difference is that 10.1-capable hardware supports 32-bit floating-point operations (and that isn't a big difference either).
     

  11. DementeD

    DementeD Master Guru

    Messages:
    851
    Likes Received:
    0
    GPU:
    AMD "6970" 1000/1500
    I didn't read the thread... but I say you should go buy Fermi... TODAY!!!
     
  12. Athlonite

    Athlonite Maha Guru

    Messages:
    1,358
    Likes Received:
    52
    GPU:
    Pulse RX5700 8GB
    DX10.1's 2x AA and 4x aniso without a hit to FPS is what was supposed to happen. In reality, game makers just upped the size and number of complex textures, which pretty much cancelled out the "no hit" for 2x AA / 4x aniso. As for PhysX, nVidia did offer it to ATI for free, but ATI being ATI they turned it down in favour of Havok (which they promised would be GPU-accelerated, but as yet that hasn't happened).
    As for which team I support, well, that would be ATI, seeing as I just replaced my 2x HD2600XTs with a single HD5770, and I've never had any problems with ATI drivers installing and running, except when they messed up their atiogl.dll: GL_EXT_texture_buffer_obj was set to use 0MB of video RAM and all textures were cached in system memory. But that was back in the 8.xx drivers and was simply resolved by using an earlier version from the 7.xx Cats; since the 9.03 driver there hasn't been a problem.

    Cards I've owned: 1: S3 ViRGE 4MB + Voodoo2 12MB, 2: nVidia TNT2 32MB, 3: nVidia GeForce4 MX440 64MB, 4: ATI Radeon HD2600 Pro 256MB, 5: 2x ATI Radeon HD2600XT 256MB (hmm, I was going to list a Trident 2MB as the first video card, but it didn't do 3D, so I decided against it, LOL).
     
    Last edited: Dec 19, 2009
  13. sajibjoarder

    sajibjoarder Master Guru

    Messages:
    607
    Likes Received:
    1
    GPU:
    gtx 1080 FE
    Don't take it as a competition.

    Cards I've owned: Voodoo Rush, Voodoo Banshee, Riva TNT2 Ultra (Creative 3D Blaster), Asus V6600 (GeForce 256), MSI GeForce4 Ti 4600, ATI Radeon 9700 (built by ATI), Asus Radeon 9800 (A9800XT), HIS Radeon X800GTO (Ice Cool), XFX GeForce 9600GSO (G92) 768MB DDR2 (the worst card I ever had), and 2x XFX GeForce GTX 295 (quad SLI).
     
  14. Koniakki

    Koniakki Guest

    Messages:
    2,843
    Likes Received:
    452
    GPU:
    ZOTAC GTX 1080Ti FE
    An immediate jump from that lousy 9600GSO to a 295 QUAD SLI setup..
    Damn, that must have been a huge jump in visual and gaming experience.. :)
     
    Last edited: Dec 19, 2009
  15. Goliath182

    Goliath182 Master Guru

    Messages:
    342
    Likes Received:
    0
    GPU:
    Sapphire 5970 Redline
    :funny:

    So you're telling me there are no graphics card makers able to stay in the market? FYI, no graphics company has 60% of the market; Intel has the most at ~52%. ATI doesn't really have Havok, it's more of a partnership. Nvidia bought PhysX and so far hasn't been able to penetrate the market with it.

    Also, please make your grammar show that; you write worse than my cousin who's in 3rd grade...

    I don't see any physics system becoming the standard. Some will continue to use PhysX, others will use Havok, and the majority will probably use their own in-house physics.

    You're not kidding, lol.
     
    Last edited: Dec 19, 2009

  16. BlueDragon

    BlueDragon Member Guru

    Messages:
    160
    Likes Received:
    0
    GPU:
    Sapphire 5870
    I now have 2x 5870s, but before that I was all NVIDIA. I will say the 9800GX2 was fairly powerful if you could put up with the micro-stuttering. Then I got myself two NVIDIA 260-216s, great cards, and I was going to get NVIDIA's next GPU, the GTX380, but the 5870 looked really powerful and I couldn't wait, so I got that instead. And yes, one card is more powerful than my 2x 260-216s in SLI, which is essentially a GTX295; maybe slightly less powerful, but not by much.

    Now, playing games is where it counts, and the 5870 is really smooth at 1080p where my NVIDIA cards were not, so to me that's a win-win. The only game I can say was about 50% better on my 260s was Far Cry 2; other than that the 5870 flies. In DX11, though, a single 5870 with tessellation enabled in DiRT 2 at 1080p drops frame rates to a little under 50fps, because of the compute shaders as well, with half the load being taken off the CPU. So I added a second 5870 for CrossFire, and now frame rates are well over 70fps, which is great.

    So I think if NVIDIA can bring out a single GTX380 that can do DX11 with tessellation enabled and keep a constant frame rate above 60fps at 1080p, then I'd say NVIDIA's new card is better. Not some benchmarks with RE5 or Devil May Cry 4, etc. (basically any DX9 game) saying the GTX380 gets 10fps more than the 5870 when we're already in the hundreds of frames per second; I couldn't give a monkey's about that. If they can beat ATI at DX11, that is where it will count.
     
  17. sajibjoarder

    sajibjoarder Master Guru

    Messages:
    607
    Likes Received:
    1
    GPU:
    gtx 1080 FE
    Sorry for my poor English.

    Bro, the main thing is that we need a standard. PhysX, Havok or anything else is not a standard. As you can see, DirectX and OpenGL are standards for graphics cards, and we need an API like that for physics processing; otherwise different manufacturers will bring out their own standards and we won't benefit.

    And I know that no manufacturer has ever owned 60% of the market. That's why they follow the standards, DirectX and OpenGL.
     
    Last edited: Dec 20, 2009
  18. sajibjoarder

    sajibjoarder Master Guru

    Messages:
    607
    Likes Received:
    1
    GPU:
    gtx 1080 FE
    I was busy with my study and research for about two years, so I didn't buy any upgrades for my PC. Then I bought the 9600GSO because I was short on cash at the time. After getting some money from my dad, I bought the new rig.
     
  19. Koniakki

    Koniakki Guest

    Messages:
    2,843
    Likes Received:
    452
    GPU:
    ZOTAC GTX 1080Ti FE
    Yeah, mostly the same here too.. Saving money at the moment and studying..
    I was gonna build an i7 system with quad 295s or 4870X2s, but then the ATI 5000 series and NVIDIA's GT300 were announced, so I waited..
    But I decided to wait a bit more for AMD's and Intel's 6-core (or more) CPUs.. Around the 3rd quarter of 2010 I will build the best bang-for-buck system I can..
    But the current i7 quads aren't ruled out.. Prices will have dropped quite a bit by then, and the 920/950 @ 4GHz+ are still the best choices (the 920 especially)..
    The CPU will probably be from Intel, but I have high hopes for AMD too.. Whichever offers the best performance at a reasonable price..
    And please don't say anything about the 6-core build, because I know we're a long way from using them,
    but I won't be upgrading that rig for some years to come, and core-usage optimisation is getting better and better..
    As for which card to get, well, like the CPUs, I can't tell from here..
    Technology is advancing so fast that even guessing just 7-8 months out is still a long shot.. :p
     
    Last edited: Dec 20, 2009
  20. LinkDrive

    LinkDrive Ancient Guru

    Messages:
    4,673
    Likes Received:
    0
    GPU:
    Toaster
    Haha, I jumped from an Athlon X2 4200+ and an X1600XT to this quad core and two HD4870s in CrossFire. The words "massive high" come to mind for people like me and sajibjoarder :p
     
