I rarely post, but this was so damn funny...

Eagle

aka Alshain
Moderator
generalplot said:
As for the architecture, Intel has ALWAYS had the same basic design: the CPU gets improved, of course, and this makes it necessary to change socket designs. But the need for an Intel chipset onboard, and how that chipset and the CPU interact, has always been the same. And this means the basic flow of instructions has always been the same; no matter how many times the apparent architecture gets changed, the basic design is always the same, and again, it's one I have known for a long time.

No offense man, but that doesn't make a whole lot of sense. I think you may be using the wrong terms or something. The architecture refers to the type of chip it is (x86, amd64, sparc, ppc, ppc64, hppa, etc.). All of AMD's current consumer processors are x86 or amd64. All of Intel's current processors are x86. With the exception of 64-bit, this hasn't really changed since the 386, the first 32-bit x86 chip, came on the market.
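For what it's worth, here's a minimal sketch of the distinction (assuming a box with Python on it): the standard platform module reports the architecture string the OS sees, which is about the instruction-set family, not the chipset or socket.

# Minimal sketch: query the CPU architecture with Python's standard
# "platform" module.  A 32-bit Intel/AMD chip typically reports
# "i386"/"i686", an Athlon 64 running a 64-bit OS reports "x86_64"
# (i.e. amd64), and other hardware reports "sparc64", "ppc64", and so on.
import platform

print("machine   :", platform.machine())    # architecture string, e.g. "x86_64"
print("processor :", platform.processor())  # vendor/model string; may be empty on some OSes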
 

General Plot

Britchie Crazy
Yeah, I'm using the wrong words. :p I'm basically describing the difference in the instruction sets that each chip has, and not the architecture. Talking too fast without thinking of the words being used. It's a bad habit.
 

Eagle

aka Alshain
Moderator
clowns789 said:
I'm just sticking with my 2.4 gig 4600C, waiting for Cell processors.

You're gonna be waiting a long time before they become mainstream. Cell is designed to be a terminal controller. The idea behind it is to allow large servers to run a business: instead of everyone having their own computer, everyone will have a dumb terminal and run a new instance of the operating system on the server. I don't expect home users to see even a glimpse of Cell for at least 10 years.
 

Eagle

aka Alshain
Moderator
generalplot said:
Yeah, I'm using the wrong words. :p I'm basically describing the difference in the instruction sets that each chip has, and not the architecture. Talking too fast without thinking of the words being used. It's a bad habit.

Well, still, the instruction sets that are used by Intel are for the most part also used by AMD. AMD even has a patented one that Intel doesn't. The main instruction sets to come out so far (that I know of) are MMX, SSE, SSE2, SSE3, and 3DNow!. Intel and AMD both use all of those except 3DNow!, which only AMD uses.
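If you're curious which of those your own chip actually reports, here's a rough sketch (Linux only, assuming /proc/cpuinfo is available) that checks the kernel's CPU flags. Note the flag names are kernel conventions: SSE3 usually shows up as "pni", and the 3DNow! family as "3dnow" / "3dnowext".

# Rough sketch: list which of the SIMD extensions mentioned above the
# Linux kernel reports for this CPU, by reading the "flags" line.
WANTED = {"mmx", "sse", "sse2", "pni", "3dnow", "3dnowext"}

flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

print("supported:", sorted(WANTED & flags))
print("missing  :", sorted(WANTED - flags))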
 

Doomulation

?????????????????????????
Eagle said:
Well, still, the instruction sets that are used by Intel are for the most part also used by AMD. AMD even has a patented one that Intel doesn't. The main instruction sets to come out so far (that I know of) are MMX, SSE, SSE2, SSE3, and 3DNow!. Intel and AMD both use all of those except 3DNow!, which only AMD uses.
Does there not exist 3DNow! Pro or something? I think I've seen that instruction set somewhere. It's supported by the newest AMD processors... Don't know where from, though.
 

cooliscool

Nintendo Zealot
3DNow! Pro and the other evolutions of 3DNow! are to the original what SSE2 is to SSE: small evolutions that add new instructions on top of everything that came before.

3DNow! isn't nearly as efficient as SSE/SSE2/SSE3, however.
 

jdsony

New member
While I have been an AMD guy for quite some time (more bang for the buck), I can acknowledge the fact that generalplot has proven his P4 2.4C is a force to be reckoned with. Since AMD uses made-up numbers to state their performance, comparison isn't always an exact science. The original 3200+ rating came out before the Intel 800 MHz FSB, I believe. It also seems that AMD was misrating their higher-clocked processors in comparison with Intel (the P4s generally went up in 200 MHz steps, and AMD's ratio of increase wasn't always even with that of the P4). The clock-speed differences between models were so small that it has always been a lot more cost-effective to get a lower-clocked model. Comparisons will still be made and Intel will still win a number of them, but really it's partly AMD's fault for overestimating some of their ratings (this isn't saying AMD is better; it's just that since AMD's model numbers are based on theoretical figures there is room for error, while with Intel the performance increase is more predictable).

For me, buying AMD has been more about getting something fast for very little money. I'm not poor, but I don't want to go spending double for a PC that might only be 20% faster when I have too many other interests that cost money as well (this extra cost isn't just from buying Intel CPUs, of course; I'm also talking about buying higher-end video cards, RAM, etc.). The price difference between Intel and AMD has closed quite a bit recently, so it's not as much of an issue. Low-end Athlon 64 CPUs are quite cheap, though, and are definitely one of the best choices for a midrange system at the moment.

If you have been stuck using one brand for a while, it's really hard to give an unbiased opinion or even to imagine what using the other would be like. I use AMDs at home but only Intels at work. I find the Intels at work to sometimes be slower than I would expect, but I can't claim they are slow, because they are IBM machines whose RAM, video cards, etc. are all probably slower than what an enthusiast-built system would have. The main reason that AMD outperforms Intel clock for clock is the inefficient P4 architecture (slower, hotter, power-hungry, but it gets the job done). If you look at the Pentium Ms (based on the P3), you will see that they are very efficient processors.
 

Eagle

aka Alshain
Moderator
jdsony said:
While I have been an AMD guy for quite some time (more bang for the buck), I can acknowledge the fact that generalplot has proven his P4 2.4C is a force to be reckoned with. Since AMD uses made-up numbers to state their performance, comparison isn't always an exact science. The original 3200+ rating came out before the Intel 800 MHz FSB, I believe. It also seems that AMD was misrating their higher-clocked processors in comparison with Intel (the P4s generally went up in 200 MHz steps, and AMD's ratio of increase wasn't always even with that of the P4). The clock-speed differences between models were so small that it has always been a lot more cost-effective to get a lower-clocked model. Comparisons will still be made and Intel will still win a number of them, but really it's partly AMD's fault for overestimating some of their ratings (this isn't saying AMD is better; it's just that since AMD's model numbers are based on theoretical figures there is room for error, while with Intel the performance increase is more predictable).

I think you have that backward. Intel's been known to adjust their clock speeds. In order to compete, AMD had to find a way to work around it, thus the 3200+ numbering style. The 3200+ is the rough equivalent of Intel's 3.2 GHz, even though the 3200+ is only 2.0 GHz. They had to do this; otherwise it would look as though you were comparing AMD's $165 2 GHz processor to an Intel $95 2 GHz processor. AMD would be out of business in no time.
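To put the idea in numbers: roughly speaking, the model number stands for an estimated P4-equivalent clock rather than the chip's real clock. The back-of-the-envelope sketch below is purely illustrative; the 1.6x work-per-clock factor is an assumption for the example, not AMD's actual (unpublished) formula.

# Hypothetical illustration of what a "3200+" rating claims:
# real clock x assumed work-per-clock advantage ~= P4-equivalent clock.
athlon64_clock_ghz = 2.0      # real clock of the Athlon 64 3200+
work_per_clock_vs_p4 = 1.6    # assumed relative advantage, for illustration only

p4_equivalent_ghz = athlon64_clock_ghz * work_per_clock_vs_p4
print(f"claimed P4 equivalent: ~{p4_equivalent_ghz:.1f} GHz -> '{round(p4_equivalent_ghz * 1000)}+' rating")
# -> claimed P4 equivalent: ~3.2 GHz -> '3200+' rating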
 

General Plot

Britchie Crazy
But lately, I'd say it seems like AMD has gotten to be more "liberal," if you will, with the numbering style on their chips. And this always makes the Intel scientists pull their hair out. If, going by the math, a 3000+ is equivalent to a 3 GHz P4, then how come I came so close to it? It would seem to me that the difference should have been at least 3 seconds, and not just 1. I'm willing to bet that if I had some Corsair XMS memory (at a latency of 2 clocks, where mine is plain ol' Wintec Industries at a latency of 3 clocks), I could probably shave off a second or two and possibly beat that time. BTW jdsony, thanks for the comment. I'll admit that I've used some AMD systems (though not in a while) that quite simply screamed.
Edit: yeah Jaz, who saw this much length coming? o_O :p
 

jdsony

New member
Eagle said:
I think you have that backward. Intel's been known to adjust their clock speeds. In order to compete, AMD had to find a way to work around it, thus the 3200+ numbering style. The 3200+ is the rough equivalent of Intel's 3.2 GHz, even though the 3200+ is only 2.0 GHz. They had to do this; otherwise it would look as though you were comparing AMD's $165 2 GHz processor to an Intel $95 2 GHz processor. AMD would be out of business in no time.

Not quite sure what you're saying. All I was saying is that AMD changed their numbering system because their processors were at lower clocks yet could compare in speed to what Intel was offering at higher speeds. In doing that, they are making up a number that maps onto the Intel MHz model. They could really give us any processor at any speed and say, "This is the AMD equivalent of the 3.2 GHz P4." For example, if AMD had released a 2.6 GHz processor and called it a 3200+, it probably would have blown away the 3.2 GHz P4, yet according to AMD it would have been the equivalent (just much better). Now, if AMD's prices were the same as Intel's they might have been able to do that, but they are providing a processor that is sometimes, though not always, equal and that costs less. Does price maybe provide a better rating? Probably for some people.

Really, there is no winner. Intel definitely has stronger ground in content creation, but I'm not sure if that's due to the software (Photoshop, Lightwave, etc.) being optimized specifically for Intel or Intel CPUs actually being better at that sort of thing (someone who knows a lot about the technical architecture probably knows the answer to that). AMD has been known to be slightly better in gaming (other than Quake 3), but that's not always the case.
Intel was the clear winner before the AMD K7 processors, and then AMD was leading a bit up until and during the early P4s. The later P4s had the edge against the XPs, and now the Athlon 64 and Athlon 64 X2s seem to be better in general. It's a cycle, much like what happens with ATI and Nvidia.

Whatever you're happy with is fine. Benchmarks between a lot of these processors may look like there is one clear winner, but in actual use you may know you have the faster processor, yet you likely won't find happiness in the extra 3 fps or 1000 FPU points over the competition. So unless someone is still using a Cyrix CPU, I can't think of any reason to mock them :p
 

General Plot

Britchie Crazy
From that link come a few interesting shots. I will admit that the FX did win some of the benchmark tests, although not by much. So it shows that Intel can still keep pretty close in performance. And in some cases, they win.
 

DOGG

New member
Yeah, but you must remember that the majority of apps are optimised for Intel chips. Also, Intel won in only a few cases, usually by a tiny margin. But they did lose the majority of tests, often by quite significant amounts.
 

General Plot

Britchie Crazy
DOGG said:
Yeah, but you must remember that the majority of apps are optimised for Intel chips. Also, Intel won in only a few cases, usually by a tiny margin. But they did lose the majority of tests, often by quite significant amounts.
Nonetheless, in every benchmark they lost (to 64-bit CPUs, let us not forget), the difference in most cases was not much. Just review it again and you'll see. Again, as I stated before: I expect that the AMD 64s will take the crown for now, but nobody can take away from Intel what it has accomplished thus far.
 

omnislash124

New member
Meh, I have a little bit of beef with Intel (or maybe it's Dell, but it's really annoying right now). I just ordered a P4 2.8 GHz computer from Dell to replace my Athlon XP 1800+. I must say, HyperThreading is really not all it's hyped up to be. Maybe it's the 256 MB of PC2-3200 RAM sucking, but my computer actually seems slower compared to my AMD 1800+. Here's a pic I took when my computer seemed to have trouble closing down a few apps... Mind you, I was sitting there for about 4 or 5 minutes before I finally had to close down AIM to do something...
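Before blaming HyperThreading, a quick sanity check is worth doing. This is just a sketch (Linux/Unix, assuming Python is installed) to confirm the OS actually sees the two logical processors HT exposes and to see how much physical RAM is there; with only 256 MB, swapping is a far more likely cause of the sluggishness than HT itself.

# Quick sanity-check sketch: logical CPU count and physical RAM.
import os

print("logical CPUs:", os.cpu_count())    # a single HT-enabled P4 should report 2

page_size = os.sysconf("SC_PAGE_SIZE")    # bytes per page
page_count = os.sysconf("SC_PHYS_PAGES")  # total physical pages
print("physical RAM: %.0f MB" % (page_size * page_count / 1024**2))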
 
