AMD Talks GPU Gaming Physics

Discussion in 'Frontpage news' started by SlazsH, Feb 26, 2011.

  1. TheHunter

    TheHunter Banned

    Messages:
    13,404
    Likes Received:
    1
    GPU:
    MSi N570GTX TFIII [OC|PE]
    Bullet is GPU/CPU based (OpenCL, x86), just like PhysX is with CUDA and ancient x87 CPU code.
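
    For the CPU side, the plain C++ API looks roughly like this. Just a minimal falling-box sketch, not anything AMD-specific, and the OpenCL-accelerated solvers sit in separate backends on top of it:

        #include <btBulletDynamicsCommon.h>

        int main() {
            // Standard CPU pipeline: collision config + dispatcher + broadphase + solver + world.
            btDefaultCollisionConfiguration config;
            btCollisionDispatcher dispatcher(&config);
            btDbvtBroadphase broadphase;
            btSequentialImpulseConstraintSolver solver;
            btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
            world.setGravity(btVector3(0, -9.81f, 0));

            // A 1 kg box dropped from 10 m.
            btBoxShape box(btVector3(0.5f, 0.5f, 0.5f));
            btVector3 inertia(0, 0, 0);
            box.calculateLocalInertia(1.0f, inertia);
            btDefaultMotionState motion(btTransform(btQuaternion::getIdentity(), btVector3(0, 10, 0)));
            btRigidBody body(btRigidBody::btRigidBodyConstructionInfo(1.0f, &motion, &box, inertia));
            world.addRigidBody(&body);

            for (int i = 0; i < 120; ++i)            // two seconds at 60 Hz
                world.stepSimulation(1.0f / 60.0f);

            world.removeRigidBody(&body);
            return 0;
        }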


    Havok is Intel's thing now; we don't know what's going on there.. IMO they bought it for Larrabee GPGPU, but that turned out to be a distant dream for now..
     
    Last edited: Feb 27, 2011
  2. Espair

    Espair Active Member

    Messages:
    83
    Likes Received:
    0
    GPU:
    HD4870 1gb@2048x1152
    Ah ok, thanks. In that case, I hope this thing takes off.
     
  3. davetheshrew

    davetheshrew Guest

    Messages:
    4,089
    Likes Received:
    1
    GPU:
    some green some red
    Aye man, forget about Havok ever being GPU hardware driven. I can see a much bigger future for Bullet than for PhysX. Has anyone here seen that x87 code used in films? Nah, didn't think so. There is a lot of money there for Bullet; PhysX only has NV's dollars and whoever backs them.

    EDIT: I'm thinking Crysis 2 will be the last big spend by NV on PhysX. Anyone with me? Here's my reasoning.

    TWIMTBP titles these days work just as well on AMD hardware, since AMD finally has the grunt, so TWIMTBP is moot as far as advertising in the PC gaming community goes.

    NV hasn't got much more push as far as CUDA is concerned, as the software market is moving towards an open standard for very good reason. Why would, say, Photoshop go with NV when there are lots of AMD users who would be pissed off if they ran slower due to the blocks/restrictions seen many times in TWIMTBP games? News travels fast among those users.

    PhysX is.. well, PhysX. Any other game developer can write code worthy of smashing PhysX's efforts, so what's the point again? May as well go open source, right, since you'd get more custom? We are in hard times for cash, NV can't keep doing this, and software/hardware developers will take note when they see OpenCL adopters getting a heap more cash than CUDA.

    How am I doing here?
     
    Last edited: Feb 27, 2011
  4. dchalf10

    dchalf10 Banned

    Messages:
    4,032
    Likes Received:
    0
    GPU:
    GTX670 1293/6800
    lol

    "# Toy_Story_3 published by Disney Interactive Studios.[2]
    # Grand Theft Auto IV and Red Dead Redemption by Rockstar Games.[3]
    # Trials HD by RedLynx.[4]
    # Free Realms by Sony Online Entertainment.[5]
    # HotWheels: Battle Force 5.[6]
    # Gravitronix.[7]
    # Madagascar Kartz published by Activision.[8]
    # Regnum Online by ngd Studios. An MMORPG which in its latest major update its physics engine was replaced by Bullet.
    # 3D Mark 2011 by Futuremark.[9]
    # Blood_Drive_(video_game) "

    Yep, Bullet Physics, pushing the boundaries of what physics in gaming is.


    /s


    Let me rephrase: until I see Bullet physics that is BETTER than PhysX, it is just vapourware.

    As far as I'm concerned it is just a buzzword, because nothing in those games is even better than CPU Havok physics, let alone PhysX.
     

  5. UnrealGaming

    UnrealGaming Ancient Guru

    Messages:
    3,454
    Likes Received:
    495
    GPU:
    -
    No, because Crysis 2 uses an in-house physics engine, not PhysX. :p
     
  6. davetheshrew

    davetheshrew Guest

    Messages:
    4,089
    Likes Received:
    1
    GPU:
    some green some red
    I know that, read my edit. Crytek has had 2 mil from NV for *insert CUDA/PhysX here*. Remember JC2 with CUDA water? Or is it PhysX water? What's in a name, huh.
     
    Last edited: Feb 27, 2011
  7. Lane

    Lane Guest

    Messages:
    6,361
    Likes Received:
    3
    GPU:
    2x HD7970 - EK Waterblock
    Lol, you need to read the article again and see what the ex-director of the PhysX and CUDA development team says about this "rumored offer"... Just because a guy at Nvidia said it to a journalist doesn't mean there is anything more to read into it than marketing...
    Even he says it was either bull**** or absolutely nothing serious.

    For a simple reason: PhysX is coded for CUDA, and Nvidia's architecture is designed around CUDA... it would have been nearly impossible for AMD to port PhysX to their processor architecture, because they don't use those instructions. Plus the performance would have been as bad as PhysX on the CPU.

    He used a simple example and compared it to x86 instructions: how do you make software that uses specific SSE instructions run well if your "hardware" doesn't support those instructions? And AMD can't license the instructions used in CUDA, and can't copy them either.. or they would need to completely redesign their GPUs for it and then pay an incredibly high licence fee to Nvidia... Do you really think AMD engineers want to depend on Nvidia to develop their GPUs?


    PhysX = CUDA.. saying PhysX can just be offered and used like that on AMD GPUs is pure bull****. You can't, if you don't get CUDA at the same time.

    Plus, AMD would never have any control over it. It would have been a real hell for AMD, because Nvidia cross-develops the CUDA GPUs and the CUDA software.
    The hardware is tied to CUDA, just as CUDA is tied to the hardware, and AMD would never get the ability to do that. So in the end you would have a new standard in game engines controlled by your only competitor, who could keep accelerating it, including with new instruction sets you can't get your hands on. It would have been better for AMD to shut down the GPU department, because it would be dead and gone in less than 2 years.

    I can imagine how the reviews would look: AMD + PhysX = 21 fps, Nvidia + PhysX = 62 fps..
    ... lol ... what, the performance is bad? Ooh, why? So strange...

    Seriously, why would AMD walk into this trap?

    Then you will ask: why doesn't Nvidia just give CUDA to AMD too?

    Do you really think Nvidia would give CUDA to AMD, when it would completely destroy Nvidia's aggressive position in the computing, HPC and professional sectors by letting AMD be compatible with CUDA computing and HPC software? A segment where Nvidia has made the difference with CUDA, and which is surely more than 80% of their revenue?

    This bull**** about offering PhysX to AMD was a pure marketing thing.
    " - Look, our PhysX engine is so good, we want to offer it to our competitor. "
    " - Yes, we keep it proprietary, but that's not our fault, it's AMD who doesn't want it, so if you want it, buy Nvidia cards. "

    Please don't be so naive.
     
    Last edited: Feb 27, 2011
  8. TheHunter

    TheHunter Banned

    Messages:
    13,404
    Likes Received:
    1
    GPU:
    MSi N570GTX TFIII [OC|PE]
    Lol dude, don't be so pessimistic, give it a chance, it's only been 1-2 years under AMD's hood.


    And FYI, PhysX is nothing, it never really deserved to be called physics.



    Besides, NV is crippling it on purpose. Yeah, really great stuff right there.. :rolleyes:
     
  9. Lane

    Lane Guest

    Messages:
    6,361
    Likes Received:
    3
    GPU:
    2x HD7970 - EK Waterblock

    Sorry, but PhysX, in terms of capability, is not nothing... though I think that was your point: it can't be used the way it should be... and I'll add it's a physics engine, so it works like other physics engines, but the issue is how Nvidia tries to use it... again, by keeping it proprietary to their hardware and coding it in a way only Nvidia hardware can use, they have made it impossible to use the full potential of PhysX. (One more reason why the director of the Ageia / Nvidia CUDA and PhysX team left the boat.)

    Marketing, market share and finance have taken over. It's sad, because it could have been a nice solution for games if Nvidia hadn't tried to use it just to sell more GPUs. They could have sold it as a new engine to game developers, and with the competition that would have brought, we would surely have seen all the physics engine players start pushing each other.

    ------------------------

    Maybe to add to what I said in my previous post, so you don't misunderstand me... PhysX is a really capable engine, it is really good at what it does. Hard to say if it is better than the others, but it at least equals them. But sadly Nvidia has chosen a path where this engine can't be used widely.. They have chosen to use it as a marketing point instead of selling it on the quality it has. They could surely have made a lot of money with it by selling it simply as a physics engine to game developers; instead of that, they need to pay developers to use it... completely absurd.

    Why can Intel sell Havok to game developers (and it's not cheap at all), while Nvidia needs to pay developers to use PhysX? It's not because the engine is bad (again, it's really good), but because the terms for using it don't match what game developers want... The problem here is not PhysX, it's sadly Nvidia.. (I like Nvidia, and I like their products, but their marketing and management are really bad. As so often in this type of industry, they think consumers are chickens and we will eat whatever *** they throw at us.. sorry, I'm not OK with that..)
     
    Last edited: Feb 27, 2011
  10. mohiuddin

    mohiuddin Maha Guru

    Messages:
    1,008
    Likes Received:
    206
    GPU:
    GTX670 4gb ll RX480 8gb
    The only cause of PhysX's poor growth is that Nvidia made it proprietary and uses it to sell their GPUs,
    whereas AMD is trying to develop an open platform...
    I appreciate that, and you all should too.


    Well said, @Lane and @TheHunter....

    Are you sure Crysis 2 isn't using PhysX, or any CUDA exclusives? Just asking....

    When can we expect a heavily GPU-accelerated, gameplay-changing implementation of Bullet physics? :(
     

  11. CronoGraal

    CronoGraal Ancient Guru

    Messages:
    4,194
    Likes Received:
    20
    GPU:
    XFX 6900XT Merc 319
    We'll probably have to wait for the Xbox 1080/PS4 before such games are released.
     
  12. Death_Lord

    Death_Lord Guest

    Messages:
    722
    Likes Received:
    9
    GPU:
    Aorus 1080 GTX 8GB
    He just explained that PhysX, like much other CUDA-based software, will end up getting ported to OpenCL. That means any hardware compatible with OpenCL would be able to run PhysX. Now, that doesn't mean an ATI card would perform better than an Nvidia card at PhysX.
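
    (That's the whole point of OpenCL being vendor-neutral: the same host code just enumerates whatever platforms and GPUs are installed, AMD, Nvidia or otherwise. A minimal sketch:)

        #include <CL/cl.h>
        #include <stdio.h>

        int main() {
            cl_platform_id platforms[8];
            cl_uint numPlatforms = 0;
            clGetPlatformIDs(8, platforms, &numPlatforms);   // AMD, Nvidia, Intel... whatever is installed

            for (cl_uint p = 0; p < numPlatforms; ++p) {
                char name[256] = {0};
                clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);

                cl_device_id devices[8];
                cl_uint numDevices = 0;
                clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices, &numDevices);
                printf("%s: %u GPU device(s) can run the same kernels\n", name, numDevices);
            }
            return 0;
        }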
     
  13. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    This is the way I see it.

    In a time when there were no platforms for GPU development, Nvidia created CUDA. It was a proprietary platform for application development on Nvidia hardware (really no different than AMD's proprietary CTM platform). Nvidia had always been at the forefront of programmable GPUs; starting with Cg/HLSL, CUDA was the next step forward. Nvidia bought Ageia because they saw a killer product in the idea of having physics accelerated by their products on their new CUDA platform.

    You have to remember, Nvidia bought Ageia and began the port of PhysX at the same time it was developing OpenCL with Apple/AMD/etc. When Nvidia bought PhysX, it didn't rewrite the API, it simply ported it to the CUDA architecture. OpenCL wasn't ratified until a couple of months later, and it wasn't even supported on GPUs until the following year.

    It's not like Nvidia said "we're buying PhysX and porting it to CUDA, not OpenCL"; it's more that they didn't have a choice between the two.

    Now Nvidia is rewriting the entire PhysX API from the ground up. They haven't specifically stated whether they are going to target CUDA or OpenCL, but I wouldn't rule out the latter. For the last several years, really since the development of the G80 core, Nvidia has been targeting its hardware towards parallel computing rather than focusing specifically on gaming. So if PhysX were ported to OpenCL, Nvidia would most likely still have a performance advantage over AMD.

    As for your point, Deltatux: if Nvidia did port PhysX over to OpenCL, who would have the advantage in the mobile market? A market where AMD has absolutely no products and just basically sacked their CEO for not having any plans whatsoever for it? A market where Nvidia is winning designs left and right and is multiple iterations ahead of the competition? But I think you're right. I've always felt that for OpenCL/OpenGL to catch on in a major way, there would need to be some killer reason to get developers who are embedded in DirectX development to switch; maybe the killer reason is the mobile market itself.

    Don't get me wrong, I would absolutely love to see Bullet Physics catch on and be awesome, just so there would be more competition and, in the end, better games. But it really looks to me like Nvidia has a leg up on the competition, and if they play their cards right with the rewrite of PhysX they can be in a dominant position.

    A lot of people also make it sound like Nvidia is being evil with its proprietary designs and whatnot, when they were the ones actually pushing the envelope for the longest time and doing the only thing they could do. It's not like Nvidia had the option to port PhysX to OpenCL when they bought Ageia, and with a complete rewrite coming, why would they port the old code now?
     
    Last edited: Feb 27, 2011
  14. TheHunter

    TheHunter Banned

    Messages:
    13,404
    Likes Received:
    1
    GPU:
    MSi N570GTX TFIII [OC|PE]
    They're not gonna rewrite it, they're trying to make it SSE2 compatible. But if they do make it 100% SSE2 and multi-core aware, then there is no need for GPU PhysX anymore, is there?

    It could run on the CPU without an issue, including Cryostasis (GPU lowest 35 fps vs CPU 18-23 fps all the time - as it stands now, with x87 code)..


    Like I said, they're "crippling" it on purpose so they can sell more GPUs for PhysX. Greed fails in so many ways, no wonder it's becoming obsolete..



    And this: NV, good luck with this dead-end standard, you're gonna need it.
     
    Last edited: Feb 27, 2011
  15. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    Oh boy.

    In no way did Nvidia cripple anything. The original codebase was written in x87; Nvidia bought it that way from Ageia. Nvidia simply didn't rewrite it to use SSE instructions until 3.0, which is a rewrite, considering most of the codebase is x87 and needs to be changed. I mean really, it's only been two and a half years since Nvidia got the code, and most of that time was spent devoting their resources to making it usable on consoles.
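
    Just to illustrate what that rewrite actually means (this is not PhysX code, just a generic before/after sketch): x87 does one scalar operation at a time, while SSE2 intrinsics process four floats per instruction.

        #include <emmintrin.h>   // SSE2 (pulls in the SSE float intrinsics as well)

        // Scalar path, roughly what an x87-era build boils down to: one multiply-add per element.
        void scale_add_scalar(float* out, const float* a, const float* b, float s, int n) {
            for (int i = 0; i < n; ++i)
                out[i] = a[i] * s + b[i];
        }

        // SIMD path: the same math, four floats per instruction.
        void scale_add_sse(float* out, const float* a, const float* b, float s, int n) {
            __m128 vs = _mm_set1_ps(s);
            int i = 0;
            for (; i + 4 <= n; i += 4) {
                __m128 va = _mm_loadu_ps(a + i);
                __m128 vb = _mm_loadu_ps(b + i);
                _mm_storeu_ps(out + i, _mm_add_ps(_mm_mul_ps(va, vs), vb));
            }
            for (; i < n; ++i)                       // leftover tail elements
                out[i] = a[i] * s + b[i];
        }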

    This isn't even to mention the fact that developers themselves have the ability to make the code multicore-aware; they just haven't chosen to, because most developers simply don't care about optimization on the PC, and probably rightfully so, same as Nvidia. Why would a company spend money rewriting code to make it more efficient on the platform with the fewest sales? Especially when there are much larger bottlenecks in other areas of the code that affect the larger base, like memory issues on consoles?

    You need to take your tinfoil hat off.
     

  16. Lycronis

    Lycronis Maha Guru

    Messages:
    1,046
    Likes Received:
    0
    GPU:
    Gigabyte GTX 970 G1
    They ARE rewriting it from the ground up and it will be many times more optimized than it is now. You are correct in stating that it will work better with multi-core CPUs, but it will also work magnitudes better on the GPU side of things. And why blame Nvidia (or Havok, etc. for that matter) for lame physics acceleration in games when it's actually the game developers who are not using any of this tech like they should/could? My guess is that the reason developers aren't using any of it for anything substantial is that most game development is now led by the consoles instead of the PC, which pretty much negates any possible hardware advantage the PC may have, including hardware-accelerated physics (from Nvidia or otherwise).
     
    Last edited: Feb 27, 2011
  17. TheHunter

    TheHunter Banned

    Messages:
    13,404
    Likes Received:
    1
    GPU:
    MSi N570GTX TFIII [OC|PE]
    ^
    I doubt it will be that much faster; from what I've seen it should be around 20-30% more than it is now.. And those devs that used HW PhysX only used it because Nvidia bribed them with tons of money, otherwise no one would take a sniff at it.. it's even mentioned in this article once or twice..

    Lol yeah right, and you need to take off yours..

    The point is it's still inefficient even when running on the GPU, not to mention the multi-core path of the code - yeah, there is none.. And what, only 2 years until it went obsolete? Yes, you're right, they took their time alright..


    Mafia II: I blew up a car, which is actually the first PhysX interaction in this game, and when I blew up the car the fps dropped from 60 to 40-45 fps (it was rendering car smoke etc..)

    But here is the "good" part: I drove away across the city to another part of town and, would you look at that, the fps didn't change, as it was still rendering that blown-up car. YES, across town, nowhere near that car. Wow, amazing indeed. :bang:

    The $hit doesn't even know when to stop rendering it when it needs to. Just like in the UT3 PhysX map, where all the stones keep moving even though they don't have to, thus slowing down the scene..

    But Bullet has this thing covered: it stops simulating an object as soon as it stops moving, or it predicts the impact and moves only the objects that might be affected.

    See here how it colours objects blue when it detects a contact, or predicts the impacts and colours the ones that might move..
    http://www.youtube.com/watch?v=Yr4ejzt-z4w

    http://www.youtube.com/watch?v=ejWIi1KZ-Ng

    This is efficiency and perfect symmetry, not PhysX.
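
    (For reference, that behaviour is Bullet's sleeping/deactivation system, and the knobs are exposed per body. Rough sketch, threshold values just illustrative:)

        #include <btBulletDynamicsCommon.h>

        // Below these linear/angular speeds a body becomes a candidate for deactivation,
        // and once its whole island is calm it stops being simulated until something hits it.
        void configureSleeping(btRigidBody* body) {
            body->setSleepingThresholds(0.5f, 0.5f);           // m/s, rad/s
            body->setActivationState(ACTIVE_TAG);              // normal: allowed to fall asleep
            // body->setActivationState(DISABLE_DEACTIVATION); // opposite: keep it awake forever
        }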


    // Anyway, I'm out of this thread, I have a feeling it's gonna turn into another PhysX fanboy thread.
     
    Last edited: Feb 27, 2011
  18. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    From what you've seen? What have you seen? Everything I've read about 3.0 in the last hour or so points to the fact that no one has seen anything.

    Please show me proof that Nvidia bribed a company with money. My friend is a developer at an independent game development company that recently integrated PhysX into their SDK for an upcoming title they're working on. He's a designer, so he doesn't work directly with it, but from what I've heard it's been a godsend for them over their previous in-house system. The SDK for PhysX is supposedly excellent. So I personally think you're full of it.



    PhysX is capable of multicore; it just doesn't automatically scale, the developer needs to set it up themselves.
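
    For example, with the newer SDK-style API it basically comes down to handing the scene a CPU dispatcher with however many worker threads you want (rough sketch from memory, exact names may differ):

        #include <PxPhysicsAPI.h>
        using namespace physx;

        static PxDefaultAllocator     gAllocator;
        static PxDefaultErrorCallback gErrorCallback;

        int main() {
            PxFoundation* foundation = PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
            PxPhysics*    physics    = PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

            PxSceneDesc sceneDesc(physics->getTolerancesScale());
            sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
            sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
            // The multicore part: a dispatcher with 4 worker threads; the engine doesn't pick this for you.
            sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(4);
            PxScene* scene = physics->createScene(sceneDesc);

            for (int i = 0; i < 600; ++i) {          // simulation work is split across the workers
                scene->simulate(1.0f / 60.0f);
                scene->fetchResults(true);
            }

            scene->release();
            physics->release();
            foundation->release();
            return 0;
        }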

    Your example from Mafia doesn't even make sense. If it stopped rendering the car mid-explosion when you left the scene, or turned around, what the hell would it do when you turned back around or came back to the scene of the car explosion? Would the particles still be floating in the air? Would they be on the ground? How would they be on the ground if there were no physics calculations telling them how to fall, because the calculation to do that had stopped? If you're talking about rendering of the actual scene, PhysX has absolutely nothing to do with scene rendering, period. A developer can very well keep the simulation of the car running while stopping the actual scene render when the explosion is not in view of the camera. It has nothing to do with PhysX and everything to do with the developer knowing how to optimize their engine.

    You can also set up the same thing in the PhysX SDK: you can stack a **** ton of particles into the scene and blow them up, and the scene only starts lagging while the direction of the particles is being calculated; as soon as they reach their final destination the scene begins to speed up again, because calculations are no longer taking place on the affected particles. Even Havok has this, it's nothing special.

    Speaking of Havok, I personally believe it's the best system. I haven't seen anything PhysX or the demos of Bullet could do that Havok couldn't.

    Hell, this from 2006: http://www.youtube.com/watch?v=x3CKRQJFsrQ&playnext=1&list=PL6B908C634DDA978F looked way better than anything I've seen from PhysX/Bullet now. Even though Havok FX got scrapped, it was super impressive for its time.

    Even the Meteor demo shown off on the Project Offset engine was awesome.

    http://www.youtube.com/watch?v=5cLOLE9Tn-g&feature=fvwrel
     
    Last edited: Feb 27, 2011
  19. Xendance

    Xendance Guest

    Messages:
    5,555
    Likes Received:
    12
    GPU:
    Nvidia Geforce 570
    UE 3 doesn't render anything that isn't inside your view frustum.
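
    (Standard frustum culling, in other words. Roughly, an object is skipped when its bounding sphere lies fully behind any of the six camera planes; illustrative sketch only:)

        struct Plane { float nx, ny, nz, d; };        // plane: n.p + d >= 0 means "in front"

        // Returns false when the sphere is completely outside one plane, i.e. safe to cull.
        bool sphereInFrustum(const Plane planes[6], float cx, float cy, float cz, float r) {
            for (int i = 0; i < 6; ++i) {
                float dist = planes[i].nx * cx + planes[i].ny * cy + planes[i].nz * cz + planes[i].d;
                if (dist < -r)
                    return false;
            }
            return true;                              // inside or intersecting: draw it
        }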
     
  20. zer0_c0ol

    zer0_c0ol Ancient Guru

    Messages:
    2,976
    Likes Received:
    0
    GPU:
    FuryX cf
    Can't they all just get along and make one single standard that everyone will use, and all be happy..

    Just take the best from all of them :)
     