
GeforceFX is no more?

Slougi

New member
There will only be a very limited number of the higher-speed GFFX cards with the dustbuster. The slower GFFX will still come out as planned.
This means Nvidia has just voluntarily handed the performance crown to ATI.
 

Lachp30

Guest
Meh...It's not the end of the world. Nvidia will come out with a new product soon.
 

Cyberman

Moderator
If they are on allocation, that likely means there aren't enough chips available; it says nothing about whether they are discontinuing the GFX. What it does mean is that their yields on this processor must be terrible. Allocation works like this: "you ordered so many of these last time, we'll let you have this many ONLY this time". It's a method of controlling availability to the sources that NEED them. Anyone who isn't already a player in making Nvidia chipset based cards can't get a GFX part.

The GFX suffers from a few things, going by the Anandtech review I read. Namely, its power draw is so great it requires you to use one of your hard disk power plugs to feed its own power supply. From some computations it looks to use as much power as a P4 processor. That is way too much, if you get the point.

If they could get the power consumption to half that of a P4, they wouldn't need a 1200 RPM fan on it to keep it cool. In other words, it's not very efficient in its use of power.
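For a sense of the arithmetic, here is a rough back-of-the-envelope sketch in Python; every wattage figure in it is an illustrative assumption, not a measured number:

Code:
# All wattages below are illustrative assumptions, not measurements.
AGP_SLOT_LIMIT_W = 25    # assumed power an AGP slot alone can supply
P4_TDP_W = 60            # ballpark P4 heat output of the era
CARD_POWER_W = 70        # assumed GFX board draw, comparable to a P4

extra_needed = CARD_POWER_W - AGP_SLOT_LIMIT_W
print(f"Card draw:          {CARD_POWER_W} W (a P4 is ~{P4_TDP_W} W)")
print(f"AGP slot supplies:  {AGP_SLOT_LIMIT_W} W")
print(f"Molex must make up: {extra_needed} W -> hence the hard disk plug")

Whatever the real figures are, the shortfall between what the slot supplies and what the board draws is what the hard disk plug is covering.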

I'm sure they will iron out the problems with the chipset in about 6 months and it will be fine at that point. Right now the chip runs WAY too hot to be useful, in my opinion. I haven't seen the new Radeon toy from ATI, but I understand it doesn't need a booster to get started. (Sorry, couldn't resist.)

Perhaps ATI will keep the coveted high-end position for a little while longer, hmm? It's nice to see Nvidia trying, though :)

Cyb
 

Slougi

New member
Ah, but it doesn't matter at all whether it gets fixed in 6 months or not. It is already 6 months late.
 

AlphaWolf

I prey, not pray.
Why not just replace the dustbuster with a more efficient Peltier system or something? 'Course, nobody can say that won't be what they end up doing, although it might raise the price of each unit $10 or so *shrug*.

I don't get why people spend more than $200 on a video card, though. I mean, damn, do you really need a stable 240 fps instead of a stable 90? Even though your eyes can't see anything past 60? Dude, my Dell laptop has a GeForce 2 and I can still run UT2k3 at max graphics options at 800x600 with a stable 40 fps. It's two years old, for Christ's sake, and cost $140 extra.
 

gokuss4

Meh...
Well, we make faster graphics cards to:

1. keep the competition alive

2. make money

3. get better anti-aliasing or anisotropic filtering methods, and more speed for the new features.

The main reason the GFFX is so loud is probably the clock speed. And the cooling system too, probably...

I mean, ATM the Radeon 9700 Pro is the fastest card out there right now, even though at its FULL maximum capabilities a game can't run at a steady 60 FPS. That's basically what they're doing nowadays with video cards: building faster cards with more anti-aliasing and anisotropic filtering features to improve image quality, using the best methods to avoid much of a performance hit. I think video card makers now should JUST rely on creating new methods of AA or anisotropic filtering that have less of a performance hit, and make the new cards a little faster while they're at it. They shouldn't worry too much about the core clock speed; they should rely mostly on:

-Making faster GPUs; 512-bit is the fastest right now (Parhelia)
-Better/more pipelines and texture pipelines
-Faster and more video RAM
-Better/more vertex shaders
-Better/more pixel shaders
-Creating new versions of their APIs, for more and better options and better performance for the shaders and other things
-New methods of AA and anisotropic filtering with less of a performance hit, and/or supporting stronger AA or anisotropy quality settings
-Other new features the video card companies can come up with

DirectX 9, I think, is sort of ahead of today's hardware by a couple of years. I don't think DX 10 needs to come out until about 2 or 3 years from now.
 

Doomulation

?????????????????????????
AlphaWolf said:
I don't get why people spend more than $200 on a video card, though. I mean, damn, do you really need a stable 240 fps instead of a stable 90? Even though your eyes can't see anything past 60? Dude, my Dell laptop has a GeForce 2 and I can still run UT2k3 at max graphics options at 800x600 with a stable 40 fps. It's two years old, for Christ's sake, and cost $140 extra.
Maybe because that's kind of OK for a good card nowadays, and they wish to be up-to-date for a while. When it sinks down into the mud, that's when they upgrade. A newer card takes longer to sink down into the mud than an old one.

Getting the newest available, however, is a real mistake, as their prices drop pretty quickly.
 

Slougi

New member
AlphaWolf said:
Why not just replace the dustbuster with a more efficient Peltier system or something? 'Course, nobody can say that won't be what they end up doing, although it might raise the price of each unit $10 or so *shrug*.

I don't get why people spend more than $200 on a video card, though. I mean, damn, do you really need a stable 240 fps instead of a stable 90? Even though your eyes can't see anything past 60? Dude, my Dell laptop has a GeForce 2 and I can still run UT2k3 at max graphics options at 800x600 with a stable 40 fps. It's two years old, for Christ's sake, and cost $140 extra.
Believe it or not, I can _easily_ see the difference between 30, 60, and 120 fps. Just like you _cannot_ see more than 16 million colors but still sometimes notice banding even in 32-bit mode. And just like you cannot hear beyond 44,100 Hz ;)
That said, I am running a GF2 MX :p
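The banding point is really just quantization arithmetic. A tiny Python sketch (the screen width is an assumed figure):

Code:
# 32-bit colour is 8 bits per channel: only 256 shades of any one primary.
WIDTH = 1600     # assumed: pixels a slow gradient is stretched across
LEVELS = 256     # 8 bits per channel

# Quantise an ideal 0.0..1.0 ramp to 8-bit channel values.
ramp = [round(x / (WIDTH - 1) * (LEVELS - 1)) for x in range(WIDTH)]
bands = len(set(ramp))

print(f"{bands} distinct shades over {WIDTH} px")
print(f"-> each band is ~{WIDTH / bands:.1f} px wide, wide enough to see")
print(f"Total colours: {LEVELS ** 3:,} (the '16 million')")

So "16 million colors" is 256 cubed, but any single-hue gradient only has 256 steps to work with, which is why the bands show up.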
 

AlphaWolf

I prey, not pray.
gokuss4 said:

DirectX 9, I think, is sort of ahead of today's hardware by a couple of years. I don't think DX 10 needs to come out until about 2 or 3 years from now.

Hehehe... OpenGL is already years ahead of DirectX 9, and DirectX 10 still won't catch up with OpenGL either. If every OpenGL feature were supported in hardware right now, we would be way beyond DirectX 10.

Doomulation said:
Maybe because that's kind of OK for a good card nowadays, and they wish to be up-to-date for a while. When it sinks down into the mud, that's when they upgrade. A newer card takes longer to sink down into the mud than an old one.

Getting the newest available, however, is a real mistake, as their prices drop pretty quickly.

In conclusion, your best bet is to get a card that isn't necessarily the latest and greatest. You'll get a lot more for your money if you do.

Slougi said:
Believe it or not, I can _easily_ see the difference between 30, 60, and 120 fps. Just like you _cannot_ see more than 16 million colors but still sometimes notice banding even in 32-bit mode. And just like you cannot hear beyond 44,100 Hz ;)
That said, I am running a GF2 MX :p

Well, speak for yourself :p I always complain about how crap MP3 quality is, yet everybody else insists to me that they sound perfect. With color depth, I see banding pretty much no matter what. With fps, I can notice the jump from 30 fps to 60 fps, but beyond 60 I can't notice any difference. Anything above 30 is acceptable to me, however, because I feel comfortable with that framerate; theatrical films only run at 24 fps.
 

thine_impalor

Local spammer

Hey, I thought the 0.13 micron manufacturing process would actually REDUCE heat? I mean, that's why the Athlon needs more cooling than the P4, right?

AlphaWolf said:
In conclusion, your best bet is to get a card that isn't necessarily the latest and greatest. You'll get a lot more for your money if you do.

Yeah, I think it's better to get a DECENT card with a good price/performance ratio. Instead of spending $300-400 on a video card, spend $150-200 or maybe less, so when new features come with the next-gen cards, you can just get another decent card at a down-to-earth price. Anyway, you won't really notice the difference except at insanely high levels of AA, which we don't really need! Of course, I still notice a significant difference in image quality at 4X when playing Splinter Cell...
 

Slougi

New member
0.13 micron reduces the overall heat output. However, the problem is that the die becomes so small that the heat is concentrated in a smaller area and is hard to remove. That's why Durons are actually harder to cool than Athlons! Also, TSMC's low-k (low resistance) process was not ready, and Nvidia had to go with the high-k process.
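To put that point in numbers, here is a minimal sketch; the die areas and wattages are made-up illustrative values, not datasheet figures:

Code:
# Shrinking the die concentrates the heat: what matters to the heatsink
# is W/cm^2, not total watts. Numbers below are illustrative only.

def power_density(watts, die_mm2):
    """Heat flux the cooler has to pull off the die, in W/cm^2."""
    return watts / (die_mm2 / 100.0)

athlon = power_density(watts=65, die_mm2=120)  # bigger die, more total heat
duron  = power_density(watts=45, die_mm2=70)   # smaller die, less total heat

print(f"Athlon: {athlon:.0f} W/cm^2")
print(f"Duron:  {duron:.0f} W/cm^2  <- cooler chip, but a harder cooling job")

The smaller part puts out fewer total watts but pushes them through less silicon, so the flux the heatsink sees can actually be higher.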
 

Tagrineth

Dragony thingy
AlphaWolf said:
Hehehe... OpenGL is already years ahead of DirectX 9, and DirectX 10 still won't catch up with OpenGL either. If every OpenGL feature were supported in hardware right now, we would be way beyond DirectX 10.

Ha.

Officially? OpenGL isn't anywhere remotely close to THINKING about being as feature-rich as Direct3D. :)

It's only because of IHV-specific extensions that OpenGL supports shaders. :)
 

The Khan Artist

Warrior for God
Tagrineth said:
Ha.

Officially? OpenGL isn't anywhere remotely close to THINKING about being as feature-rich as Direct3D. :)

It's only because of IHV-specific extensions that OpenGL supports shaders. :)

How well does the Radeon 9700 support nVidia's OpenGL extensions?
 

Cyberman

Moderator
Slougi said:
0.13 micron reduces the overall heat output. However, the problem is that the die becomes so small that the heat is concentrated in a smaller area and is hard to remove. That's why Durons are actually harder to cool than Athlons! Also, TSMC's low-k (low resistance) process was not ready, and Nvidia had to go with the high-k process.

Err.. no.. hehehe, let me explain what they mean by k. k is the dielectric constant of a material; when they say low-k, you will see the word DIELECTRIC associated with it. Dielectrics are materials used as INSULATORS. CMOS is Complementary Metal Oxide Semiconductor, the most common form of transistor in things today. A CMOS transistor has a GATE that is actually a capacitor. This is important because gate capacitance influences how much current is needed to drive a transistor at a given frequency: the higher the gate capacitance, the greater the current and the greater the power. Low-k materials are needed to reduce the gate capacitance and thus increase the speed.

High-k (standard silicon oxide) materials have worked just fine for the last 20 years because switching speeds were lower and the THICKNESS of the gate was high. Now that gate thickness has decreased so much (0.13 micron features give a gate thickness of something like 20 nm), the capacitance increases dramatically, along with the leakage current of the gate. So it's a double-damnation problem, with the gate being the limiting factor on switching speeds.

Low-k materials are VERY difficult to work with, and in fact give a much lower yield of useful parts. At the rate Intel has been ratcheting up the speed of their parts, they are likely dealing with huge leakage currents and gate capacitance problems. They've offset this with a few tricks, however. It will be interesting to see what happens when silicon-on-insulator processes become more affordable.
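The relation being invoked here is the parallel-plate capacitor, C = k * e0 * A / t, feeding the classic switching-power formula P = a * C * V^2 * f. A rough sketch, with every process number an order-of-magnitude guess rather than real fab data:

Code:
E0 = 8.854e-12   # permittivity of free space, F/m

def gate_cap(k, area_m2, thickness_m):
    """Gate treated as a parallel-plate capacitor: C = k * E0 * A / t."""
    return k * E0 * area_m2 / thickness_m

def switching_power(c_total, volts, hertz, activity=0.1):
    """Classic dynamic power: P = a * C * V^2 * f."""
    return activity * c_total * volts ** 2 * hertz

# One gate: 0.13 um x 0.13 um, ~2 nm dielectric (all rough guesses).
area = (130e-9) ** 2
c_hi = gate_cap(k=3.9, area_m2=area, thickness_m=2e-9)  # SiO2, k ~ 3.9
c_lo = gate_cap(k=2.7, area_m2=area, thickness_m=2e-9)  # a low-k material

# Scale to a whole GPU's worth of transistors at 500 MHz, 1.5 V.
N = 125e6
for label, c in (("high-k", c_hi), ("low-k", c_lo)):
    watts = switching_power(c * N, volts=1.5, hertz=500e6)
    print(f"{label}: {watts:.1f} W of gate switching power")

Since C scales linearly with k, dropping k cuts the switching term proportionally, which is the gist of the "wait for the low-k process" argument.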

They should have waited before releasing the GFX. Getting something working and having a working product are two different things. :)

High k = high leakage current and high switching losses; I would say half the power is going into that right there. So... I guess they'll wait till the low-k process is up and running ;)

Cyb
 
