Xade said:
And your explanation for the clear reversal of benchmark results in Doom 3?
DOOM3 is fixed-point.
me said:
nVidia's FP16 isn't much faster than FP32 (the only real gain is in register usage!), while fixed-point (DX8 and lower) math is VERY fast. FP24 has nothing to do with it... NV3x has a broken floating-point shader, plain and simple.
It's literally the beefiest DX7-class game ever.
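To make the fixed-point vs floating-point distinction concrete, here's a tiny C++ sketch (not real shader code, just the numeric formats involved; the function names are made up) of the same old-school texture-times-light modulate done both ways:

#include <cstdint>

// DX7/DX8-class "fixed-point" shading: colours are small integers
// (roughly 0..255 per channel), and a modulate is integer math rescaled back.
uint8_t modulate_fixed(uint8_t texel, uint8_t light)
{
    return static_cast<uint8_t>((texel * light) / 255);
}

// DX9-class floating-point shading does the same thing with values in 0.0..1.0;
// on NV3x this is the slow path whether it's FP16 or FP32.
float modulate_float(float texel, float light)
{
    return texel * light;
}

DOOM3's per-pixel math fits comfortably in the first style, which is why it misses NV3x's weak spot entirely.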
flow`` said:
i love your close mindedness when it comes to games tag. quake>doom doom>quake who cares. people play what people play. you can play the sims and i wouldn't really care.
I was responding to
this is a valid point because ONLY ONE of those 4 titles is out right now, and quake is just fine in its original incarnation, and really great in FUHQUAKE, thank you very much.
because he mentioned Tenebrae.
Then again, I use Tomb Raider AOD as an example of GFFX's horrid floating-point shader performance. ^^;
oh, intel/amd.. well yeah, intel is doing a pretty good job at beating amd in most benchmarks. (and usually by with x86 chips). and that's been found on a lot of respected sites
Usually with x86 chips? Yay, so occasionally they aren't even x86...
your ati fanboy.. er.. girl attitude comes through nicely. oh, fuck hl2

gg april '04.
i could care less about that extremely hyped up the ass game.
So? It shows how awful GFFX floating-point shader performance is.
I don't like HL myself, nor do I like Tomb Raider, but both of the new games clearly show the FX line sucking horribly with real DX9-class feature usage.
they used the old 45.23 drivers in those comparisons. why not read yourself a review of the newer 51.xx's and 52.xx's on _released_ games they've benchmarked. the 5900 is right up there on almost all benchmarks and slightly behind ati's 9800xt. nvidia is right up there now with their drivers and have only been getting better the past few revisions.
Nice to know people don't read my posts anymore. From my PREVIOUS POST IN THIS THREAD:
me said:
wrt the 50 series, did you see Valve's slide on what was wrong with them? It had about 10 separate bullet points on the cheats the 50 series introduced to boost performance, including adding clipping planes to timedemos (which doesn't benefit in-game performance AT ALL and never can without potentially destroying the visual quality).
And according to Driverheaven, they STILL HAVE THAT STUPID UT2003 AF cheat... and in fact, it now applies to all Direct3D games, not just UT2003. Wow, the det50's are so amazing...
If I had the time right now, I could find that slide for you. It's really quite damning.
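About those clipping planes, in case it isn't obvious why that's pure cheating: a hard-coded clip plane throws away geometry the driver "knows" the fixed demo camera will never see. Rough OpenGL sketch (the plane values are made up):

#include <GL/gl.h>

// Anything on the wrong side of this plane is discarded before rasterization,
// which saves fill rate -- but only because the timedemo camera path is known
// in advance. In actual gameplay the player can look anywhere, so the culled
// geometry would simply be missing.
void set_timedemo_clip_plane()
{
    const GLdouble plane[4] = { 0.0, 0.0, -1.0, 500.0 }; // made-up values
    glClipPlane(GL_CLIP_PLANE0, plane);
    glEnable(GL_CLIP_PLANE0);
}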
also, for the record. ati paid 8 million to bundle hl2 with the 9800xt cards. maybe that's why nvidia got blasted the way they did? if nvidia had put out the money i don't think you would've seen those slides released, and instead some nice nvidia-favored benchmarks and anti-ati slides.
Mmmhmm. Please look up WHEN ATi paid that 8 million. Hint: It didn't happen until after most of the serious engine work was already done. In fact IIRC it was even after most of those big HL2 performance tests appeared.
doom 3 is opengl with a little dx7? in the mix i believe. nvidia, having superior ogl drivers should come out ahead.
Look up.
Also, wrt DOOM3, there's another factor: Stencil shadows.
If you'll remember, the GFFX line basically has a 4-pixel, 2-texture-per-pixel architecture just like GeForce4Ti... but when doing only Z/stencil work (no colour), it acts like an 8-pixel, 1-texture-per-pixel part (like the DX9-class Radeons other than the 9600 line and the 9500 non-Pro).
DOOM3 does several entire passes that draw no colour at all (the depth fill and the stencil shadow volumes).
Basically the GFFX line is a DOOM3 Accelerator. =)
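For anyone who wants to see what those colour-less passes look like, here's a very rough sketch (not id's actual code; the draw* helpers are hypothetical):

#include <GL/gl.h>

void draw_shadowed_frame()
{
    // Pass 1: fill the depth buffer only -- no colour writes at all.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    // drawSceneGeometry();   // hypothetical helper

    // Pass 2: render the shadow volumes into the stencil buffer -- still no colour.
    glDepthMask(GL_FALSE);
    glEnable(GL_STENCIL_TEST);
    // drawShadowVolumes();   // hypothetical helper

    // Pass 3: only now draw lit colour where the stencil test says "not in shadow".
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 0, ~0u);
    // drawLitSurfaces();     // hypothetical helper
}

Passes 1 and 2 are exactly the colour-less work where the GFFX gets its doubled pixel rate.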
But um... it isn't the OpenGL driver that's doing the trick, it's the massive time and effort Carmack put in.
Carmack HIMSELF (quote shouldn't be hard to find) said that nVidia's cores were about HALF as fast as the equivalent ATi parts when using the 'standard' OpenGL floating-point shader path ("ARB2"). The FX line is only faster when it uses Carmack's vendor-specific path.
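Roughly what that path selection amounts to, if you've never looked at it (a sketch, not Carmack's actual code; the extension names are real, the rest is made up):

#include <GL/gl.h>
#include <cstring>

static bool has_extension(const char* name)
{
    const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return exts != nullptr && std::strstr(exts, name) != nullptr;
}

const char* pick_render_path()
{
    if (has_extension("GL_NV_fragment_program"))
        return "NV30";   // vendor-specific path: lower precision, the only one fast on GFFX
    if (has_extension("GL_ARB_fragment_program"))
        return "ARB2";   // the 'standard' floating-point path -- roughly half speed on GFFX
    return "ARB";        // DX7-class fallback
}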