
TressFX super demanding! Optimise?



cakefish
5th Mar 2013, 02:29
Hope it can be optimised in future patches and driver updates. Hair looks great (apart from when it's in contact with her shoulders, where it floats). Eats up to 30fps, though. Nasty. It literally halves the framerate at any graphics setting. NVIDIA with the latest driver. Please optimise as much as possible! :)

Martelol
5th Mar 2013, 02:54
There might be a little room for optimization, but hair simulation like this is extremely demanding by its very nature. It's there for those with very high-end systems. Not much can be done about it.
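To get a feel for why, here's a minimal sketch in CUDA of the kind of work a strand-based simulation repeats every frame (illustrative names and strand counts only; the real TressFX implementation is a DirectCompute shader with several extra constraint passes):

```cuda
// Sketch of per-frame strand physics: every vertex of every strand is
// integrated each frame. Names and counts are assumptions for illustration.
#include <cuda_runtime.h>

__global__ void integrateVerlet(float3* pos, float3* prevPos,
                                int numVertices, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numVertices) return;

    // Verlet step: next = 2*current - previous + gravity * dt^2
    float3 p = pos[i];
    float3 q = prevPos[i];
    float3 next = make_float3(2.0f * p.x - q.x,
                              2.0f * p.y - q.y - 9.8f * dt * dt,
                              2.0f * p.z - q.z);
    prevPos[i] = p;
    pos[i] = next;
}

int main()
{
    // Assumed scale: tens of thousands of strands, each a chain of vertices.
    const int numVertices = 20000 * 16;
    float3 *pos, *prev;
    cudaMalloc(&pos,  numVertices * sizeof(float3));
    cudaMalloc(&prev, numVertices * sizeof(float3));
    cudaMemset(pos,  0, numVertices * sizeof(float3));
    cudaMemset(prev, 0, numVertices * sizeof(float3));

    integrateVerlet<<<(numVertices + 255) / 256, 256>>>(
        pos, prev, numVertices, 1.0f / 60.0f);
    cudaDeviceSynchronize();

    cudaFree(pos);
    cudaFree(prev);
    return 0;
}
```

And the simulation is only half of it; rendering that many thin, translucent strands with proper blending is expensive in its own right.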

beavermatic
5th Mar 2013, 06:03
The reason is TressFX is optimized for the DirectCompute technology found in AMD graphics cards. It's their version of CUDA.

All Nvidia users are experiencing fps drops with TressFX enabled. Nvidia either needs to add better DirectCompute-to-CUDA emulation in their drivers, or the game developers need to add a CUDA support mode for TressFX.

If TressFX were optimized for CUDA, it would be AMD users complaining. Both CUDA and DirectCompute perform similar advanced GPU computing functions, but they have different APIs (programming languages).

We should push both Nvidia and the developer to add better TressFX support for Nvidia cards. In the meantime... switch hair mode to the next lower setting.

Also... SLI mode causes TressFX hair physics to act strangely vs hair on a single GPU running TressFX. SLI also appears to have some issues if tessellation is enabled, and the game crashes or locks up on the latest beta driver.

namitokiwa
5th Mar 2013, 07:32
Hope it can be optimised in future patches and driver updates. Hair looks great (apart from when it's in contact with her shoulders, where it floats). Eats up to 30fps, though. Nasty. It literally halves the framerate at any graphics setting. NVIDIA with the latest driver. Please optimise as much as possible! :)

I have found out that the DLC outfit for Lara is the reason why the hair is floating. :D :D
[Screenshot attachment: jpogit_2013-03-07_00045.jpg]

namitokiwa
5th Mar 2013, 07:38
The reason is TressFX is optimized for the DirectCompute technology found in AMD graphics cards. It's their version of CUDA.

All Nvidia users are experiencing fps drops with TressFX enabled. Nvidia either needs to add better DirectCompute-to-CUDA emulation in their drivers, or the game developers need to add a CUDA support mode for TressFX.

If TressFX were optimized for CUDA, it would be AMD users complaining. Both CUDA and DirectCompute perform similar advanced GPU computing functions, but they have different APIs (programming languages).

We should push both Nvidia and the developer to add better TressFX support for Nvidia cards. In the meantime... switch hair mode to the next lower setting.

Also... SLI mode causes TressFX hair physics to act strangely vs hair on a single GPU running TressFX. SLI also appears to have some issues if tessellation is enabled, and the game crashes or locks up on the latest beta driver.

I am using an Nvidia EVGA GTX 660 Ti SC without any problems with TressFX.
Lara is so beautiful:
[Screenshot attachment: jfidbb_2013-03-07_00040.jpg]

Martelol
5th Mar 2013, 10:38
The reason is TressFX is optimized for the DirectCompute technology found in AMD graphics cards. It's their version of CUDA.

All Nvidia users are experiencing fps drops with TressFX enabled. Nvidia either needs to add better DirectCompute-to-CUDA emulation in their drivers, or the game developers need to add a CUDA support mode for TressFX.

If TressFX were optimized for CUDA, it would be AMD users complaining. Both CUDA and DirectCompute perform similar advanced GPU computing functions, but they have different APIs (programming languages).

We should push both Nvidia and the developer to add better TressFX support for Nvidia cards. In the meantime... switch hair mode to the next lower setting.

Also... SLI mode causes TressFX hair physics to act strangely vs hair on a single GPU running TressFX. SLI also appears to have some issues if tessellation is enabled, and the game crashes or locks up on the latest beta driver.

You're a bit off the mark here. DirectCompute is not an AMD thing, nor is it "their version of CUDA"; it's a DirectX thing. CUDA is an Nvidia thing. There's a large performance drop on both manufacturers' cards. Think of DirectCompute as a vendor-agnostic version of CUDA.
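To make the distinction concrete, here's the same trivial GPU job in both APIs (a sketch with made-up names, just to illustrate the point): the CUDA version compiles with nvcc and runs only on Nvidia hardware, while the HLSL compute shader shown in the comment is DirectCompute, part of DirectX 11, and runs on any vendor's DX11 GPU.

```cuda
// Same GPU work, two APIs. Everything here is illustrative.
#include <cuda_runtime.h>

// CUDA kernel: Nvidia-only.
__global__ void scaleArray(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

/* DirectCompute equivalent: an HLSL compute shader dispatched through
   D3D11, vendor-agnostic; it runs on AMD and Nvidia alike.

       RWStructuredBuffer<float> data : register(u0);

       [numthreads(256, 1, 1)]
       void CSMain(uint3 id : SV_DispatchThreadID)
       {
           data[id.x] *= 2.0f;
       }
*/

int main()
{
    const int n = 1 << 20;
    float* d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));
    scaleArray<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
```

So the heavy cost on Nvidia cards isn't an API mismatch; DirectCompute is the common path both vendors already support, and the workload itself is just big.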