Guru3D.com Forums

Guru3D.com Forums (http://forums.guru3d.com/index.php)
-   Videocards - AMD - ATI Drivers Section (http://forums.guru3d.com/forumdisplay.php?f=46)
-   -   PhysX for ATI (http://forums.guru3d.com/showthread.php?t=362219)

AdmiralJanovsky 04-29-2012 14:33

PhysX for ATI
 
Hi,

i have a problem with Nvidia PhysX. i have an ATI 6770M, and the problem is that whenever a game uses PhysX, the lag when particles start flying around is enormous.
i read about modified drivers but can't seem to find any newer ones that work.
i played Mirror's Edge and started noticing this problem there, but that game lets you switch PhysX off and then it's fine. now i'm playing Binary Domain, there is no option for this, and when big fights start the game lags tremendously.

can anyone help me with this problem?

thanks.

SaiBork 04-29-2012 14:42

PhysX is nVidia only (or a dedicated PhysX card, but those are old and you won't have one).

You will need to play with PhysX off if the only card you have is the ATI 6770M.

The setups you have been reading about are most likely people who have both an ATI card (as the main card) and an extra nVidia card alongside it. That way the ATI card handles everything apart from PhysX, and the nVidia card is used purely for PhysX.

It's sad, but true. Hopefully someday AMD (ATI) will get proper PhysX support and we can stop this annoyance of different vendors pushing different options...

thatguy91 04-29-2012 14:51

It's not quite as simple as that. AMD would have to pay Nvidia for PhysX, and that's not going to happen!

The real reason PhysX runs so slowly if you don't have an Nvidia card is that it's running on the CPU, and the code is deliberately written to run badly there. Until very recently it was pure x86 code, with no SSE or later instruction sets. Not only that, the code wasn't very optimised either. Practically all games that use PhysX use this poor code path. The very latest PhysX does have some SSE2 code, but it is still by no means highly optimised - far from it. It's been suggested that if PhysX were written properly for the CPU, it would outperform the GPU version. Sure, the GPU is potentially better suited for this, but the GPU is also doing the graphical side of things! So you either take performance away from the graphics, or you have specialised hardware for it - which goes unutilised most of the time.

XBEAST 04-29-2012 15:33

Yep, it's slow because it's running on the CPU, and you can't run it on an AMD GPU. But some games can be tweaked to give better results - Mafia II and its cloth crap, for example :P.

AdmiralJanovsky 04-29-2012 16:21

well, i have switchable graphics with an Intel HD 3000, but i can't use both for playing. how do you turn it off?
is performance worse if you uninstall PhysX, or will games crash with an unknown error? some games have the option to turn it off, but Binary Domain doesn't have it, and it's annoying that it lags even when just 2 objects start flying around.

do you have any advice on how to disable it manually or something?

thanks again.

bighead147 04-29-2012 16:47

you need PhysX installed for games that use PhysX. without it, you will get an error ;)

teleguy 04-29-2012 16:51

Are you sure Binary Domain actually uses Physx? I can't find any information that it does on the web.

XBEAST 04-29-2012 16:52

Binary Domain doesn't even use PhysX. The problem is elsewhere. You can try to force Binary Domain to use the 6770M by disabling the Intel GFX via CCC (I think it's in CCC, but I'm not sure - it should be called Switchable Graphics or something).

GhostXL 04-29-2012 16:56

You can use Nvidia Physx with AMD/ATI.

Google ATI Physx hack.

teleguy 04-29-2012 16:59

Quote:

Originally Posted by GhostXL (Post 4305086)
You can use Nvidia Physx with AMD/ATI.

Google ATI Physx hack.

He's got a laptop so that's probably not an option.

sykozis 04-29-2012 17:02

Quote:

Originally Posted by thatguy91 (Post 4305022)
It's not quite as simple as that. AMD would have to pay Nvidia for PhysX, and that's not going to happen!

The real reason PhysX runs so slowly if you don't have an Nvidia card is that it's running on the CPU, and the code is deliberately written to run badly there. Until very recently it was pure x86 code, with no SSE or later instruction sets. Not only that, the code wasn't very optimised either. Practically all games that use PhysX use this poor code path. The very latest PhysX does have some SSE2 code, but it is still by no means highly optimised - far from it. It's been suggested that if PhysX were written properly for the CPU, it would outperform the GPU version. Sure, the GPU is potentially better suited for this, but the GPU is also doing the graphical side of things! So you either take performance away from the graphics, or you have specialised hardware for it - which goes unutilised most of the time.

Until recently, PhysX was written purely in x87 code - completely different from the x86 instruction set (which isn't capable of the floating point operations necessary for PhysX).

Rich_Guy 04-29-2012 17:59

Even Nvidia's own cards take a hit when running PhysX.

AdmiralJanovsky 04-29-2012 21:53

does it really not use PhysX? i swear it wanted to install PhysX at the end of the installation. and the other thing is that it's really laggy whenever anything gets destroyed, and only then, so i guess PhysX is involved somehow. i don't know. the same problem occurred when trying to play Mirror's Edge: it lagged ONLY when particles started to fly through the air, and when i turned PhysX off, it all played smoothly.

thx anyway

GhostXL 04-29-2012 22:00

Some games need physx installed no matter if you use it or not.

Valagard 04-29-2012 22:03

Any Unreal Engine 3.0 and up game uses PhysX, it's built into the engine code.

That said, the amount the Unreal engine uses is next to nil - typically it only animates 20-30 objects at a time, which any CPU can handle, even old 1.8 GHz dual cores.

The problem is when running older games with heavy PhysX use on a version of PhysX older than 3.0, which was written in x87 code and only allows one thread. When you have 4000-8000 objects on screen, this causes severe slowdown.

Lower the PhysX effects in the options if this is the case, or buy an Nvidia card if you desperately need it.

And Nvidia said they wouldn't charge AMD to use PhysX - it's just that Nvidia also said they wouldn't give AMD the source code to PhysX, and that "they would write the PhysX drivers" for AMD. Given Nvidia's track record of absolutely crippling CPU PhysX relative to GPU PhysX just to sell videocards, AMD wisely told them no.

teleguy 04-29-2012 22:31

Quote:

Originally Posted by Valagard (Post 4305347)

The problem is when running older games with heavy PhysX use on a version of PhysX older than 3.0, which was written in x87 code and only allows one thread. When you have 4000-8000 objects on screen, this causes severe slowdown.

Even older PhysX versions have multithreading support; however, it's up to game developers to implement it.

thatguy91 04-30-2012 10:36

Quote:

Originally Posted by sykozis (Post 4305093)
Until recently, PhysX was written purely in x87 code. Completely different from the x86 instruction set (which isn't capable of floating point operations necessary for PhysX).

Yes, my mistake! x86 is integer, x87 is floating point. Still, it means poor performance on the CPU whichever way you look at it!

Valagard 04-30-2012 12:54

Quote:

Originally Posted by teleguy (Post 4305372)
Even older PhysX versions have multithreading support however it's up to game developers to implement it.

Older x87 PhysX could only be threaded across as many logical cores as you had, so a 2600K, for example, could only run 8 threads; this resulted in low performance.

PhysX from 3.0 onwards can be split into thousands of tasks across however many logical cores are available, because it's written with SSE2 and scales with load, so the 8 logical cores of a 2600K could be working through 10K tasks at the same time.

vejn 04-30-2012 13:28

Why doesn't ATI develop similar GPU software like Nvidia does?
Also, why isn't there an SSAO option for ATI cards like Nvidia has?

Valagard 04-30-2012 14:08

Quote:

Originally Posted by vejn (Post 4305788)
Why doesn't ATI develop similar GPU software like Nvidia does?
Also, why isn't there an SSAO option for ATI cards like Nvidia has?

SSAO is a game option and has to be supported by the engine.

Both AMD and Nvidia can do it.

thatguy91 04-30-2012 14:22

Quote:

Originally Posted by vejn (Post 4305788)
Why doesn't ATI develop similar GPU software like Nvidia does?
Also, why isn't there an SSAO option for ATI cards like Nvidia has?

The ATI Stream SDK :)

OpenCL makes more sense in the long term than CUDA/PhysX, but is much less developed. The same goes for DirectCompute (Microsoft). Nvidia, AMD, Intel etc. all support OpenCL and DirectCompute.

Valagard 04-30-2012 14:27

Quote:

Originally Posted by thatguy91 (Post 4305806)
The ATI Stream SDK :)

OpenCL makes more sense in the long term than CUDA/PhysX, but is much less developed. The same goes for DirectCompute (Microsoft). Nvidia, AMD, Intel etc. all support OpenCL and DirectCompute.

OpenCL is damn well developed; it's faster than CUDA by around 15%.

But the downside is that Nvidia openly sponsors/pays developers to use CUDA, and companies are more likely to write software only for what they're getting paid for.

kn00tcn 05-01-2012 05:10

Quote:

Originally Posted by Valagard (Post 4305796)
SSAO is a game option and has to be supported by the engine.

Both AMD and Nvidia can do it.

AO can be done by anyone on any platform; it's just a post process that calculates occlusion based on the depth buffer. it doesn't require 'hardware features' or anything - it's not like tessellation.

in fact, check this out: AO in DOS, on the CPU, in 256 BYTES: http://www.pouet.net/prod.php?which=53816

which means... if the game doesn't have such a feature, nvidia has to read the depth buffer value & inject some AO onto the image (a bit of work that the amd team probably doesn't have the manpower to set aside)

unless there's a simpler way... with injectors - didn't ENB or ICE add AO to GTA4?

sykozis 05-01-2012 05:37

Quote:

Originally Posted by thatguy91 (Post 4305725)
Yes, my mistake! x86 is integer, x87 is floating point. Still, it means poor performance on the CPU whichever way you look at it!

Figured it was a typo, but wanted to make sure people have the right info.

Quote:

Originally Posted by Valagard (Post 4305781)
Older x87 PhysX could only be threaded across as many logical cores as you had, so a 2600K, for example, could only run 8 threads; this resulted in low performance.

PhysX from 3.0 onwards can be split into thousands of tasks across however many logical cores are available, because it's written with SSE2 and scales with load, so the 8 logical cores of a 2600K could be working through 10K tasks at the same time.

Older PhysX is limited to a single CPU core, as it's bound by the restrictions Intel placed on the x87 instruction set....which only allow a single thread using x87 instructions to run at a time.

x87 was actually moved to "legacy support" before Ageia even came into existence. They chose x87 for PhysX to give their PPU an advantage, since they intended to sell PhysX once it was established. nVidia actually had no part in gimping PhysX on the CPU....that was done by Ageia. nVidia was just in no hurry to correct the situation.

kn00tcn 05-01-2012 06:01

just cuz something uses physx doesn't mean it's designed for gpu physx with a ton of particles - that's only a (relatively) small number of games.

most physx usage in games is on the cpu for both nv & ati, identical to the console version of the game (just like havok, etc)

