
Nvidia marketing takes money from idiots

30 April 2007


Letters: R600 rocks, you have no clue


Nvidia is just taking money from the idiots now, while they still can. There's nothing fancy behind the price. Nvidia has done it before and they know someone will buy those cards. So it doesn't make any sense to you, but for Nvidia's marketing department it does. ;)

Geforce 8800 Ultra on the 2nd


________________

You say "...R6xx GPUs can easily be faster in future games like Crysis or Unreal Tournament 2007. This might only happen if these companies decides to go for the complex Shaders all of the time. So far no one made such a Shader so far..."

Of course, the whole point of the R600 is DX10 and games coming "in the future".

So, yes R600 looks less good when tested on old DX8 and DX9 stuff. Who cares? The R600 is plenty fast enough on those old games and benchmarks. Heck, the X1950XT for a lot less money is fast enough for the old games and benchmarks.

I would buy the R600 for the games being released the next two years -- not the games I am already playing adequately with my X1900XT.

Unfortunately, you only have existing DX9 games and benchmarks to use in comparing the R600 with the G8800. Nvidia knows that and always makes a top-end card that does well in "yesterday's benchmarks" -- at the expense of doing well in the games that will be released during the next year.

To put it another way, the R600 may be a bit slower than G8800 in DX9 games, but it is plenty fast enough that I could never SEE the difference (only measure it). However, when the most demanding DX10 games hit -- one will want to have an R600, I think.

The articles I read seem to emphasize the wrong benchmarks (DX9) -- get some DX10 benchmarks and games (that have not been optimized by or for Nvidia, as most seem to be thanks to the "The Way It's Meant to Be Played" program).

It isn't because of the 16 TMUs that the R600 seems slower than the Geforce 8800 GTX. Still wondering why an X1950 XTX can beat a Geforce 7900 GTX that works at the same core frequency but has eight more TMUs, and the same number of shaders?!

http://www.computerbase.de/artikel/hardware/grafikkarten/2006/test_nvidia_geforce_8800_gtx/13/#abschnitt_3dmark06

http://www.computerbase.de/artikel/hardware/grafikkarten/2006/test_nvidia_geforce_8800_gtx/2/#abschnitt_technische_daten

We know the R600 has 64 shader units, but each one is capable of doing two shader operations at the same time, while the competitor has 128 shader units. So what about the TMUs?

There is another problem: it isn't the core frequency, it's the shader frequency. The Geforce's shaders work at 1,350 MHz, while the WHOLE R600 works at 750 MHz, so the R600 gets only about 55 percent of the competitor's shader clock?!
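Just to put rough numbers on that, here is a back-of-the-envelope sketch using only the figures quoted in this letter (64 units doing two operations per clock versus 128 units at the higher clock); it is a toy calculation, not the measured behaviour of either chip:

```python
# Toy calculation with the letter's own figures: assumes every unit
# issues its maximum number of shader operations on every clock,
# which neither chip actually does in practice.

r600_units, r600_ops_per_clock, r600_clock_hz = 64, 2, 750e6
g80_units,  g80_ops_per_clock,  g80_clock_hz  = 128, 1, 1350e6

r600_ops = r600_units * r600_ops_per_clock * r600_clock_hz
g80_ops  = g80_units  * g80_ops_per_clock  * g80_clock_hz

print(f"R600 shader clock vs Geforce: {r600_clock_hz / g80_clock_hz:.0%}")  # ~56%
print(f"R600 peak shader ops/s: {r600_ops / 1e9:.1f} billion")              # 96.0
print(f"G80  peak shader ops/s: {g80_ops / 1e9:.1f} billion")               # 172.8
```

On these figures both chips issue the same 128 operations per clock, so the whole theoretical gap comes down to the shader clock ratio.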

That's about it.

http://www.dailytech.com/ATI+Radeon+HD+2900+XTX+Doomed+from+the+Start/article7052.htm

Do you see?

An advantage of 55 percent in shader frequency would show a much more competitive result: Company of Heroes at 151 fps against Nvidia's 128.6 fps. That's it.

Ever wonder why the X1950 XTX has so many transistors? About 100 million more than the competitor, but why?

Maybe it's the ultra-threaded dispatch processor?!

It allows ATI to do the same thing Intel did with the later Pentium 4s and Hyper-Threading, only with parallel processing it makes much more sense. Are the Geforce 8800 GTX cards always working at full power, or are some shaders sleeping?

I think the Geforce 8 cards need much more driver optimization per game than the ATI cards, because ATI has this ultra-threaded dispatch processor that gives every shader something to do.
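To illustrate the point about keeping shaders busy, here is a purely hypothetical toy scheduler (made-up thread counts and latencies, not the actual R600 or G80 dispatch logic): idle units simply grab the next waiting thread, so utilization only drops when there is not enough independent work queued up.

```python
from collections import deque

# Toy model of a dispatcher that hands pending shader threads to idle units.
# Purely illustrative; the numbers and the scheduling policy are made up.

def utilization(num_units, num_threads, cycles_per_thread=4, total_cycles=100):
    pending = deque(range(num_threads))   # threads waiting to be issued
    busy = [0] * num_units                # remaining cycles per unit (0 = idle)
    busy_cycles = 0
    for _ in range(total_cycles):
        for u in range(num_units):
            if busy[u] == 0 and pending:  # an idle unit grabs new work
                pending.popleft()
                busy[u] = cycles_per_thread
            if busy[u] > 0:
                busy[u] -= 1
                busy_cycles += 1
    return busy_cycles / (num_units * total_cycles)

print(f"Plenty of threads queued: {utilization(64, 10000):.0%}")  # ~100%
print(f"Too little work supplied: {utilization(64, 500):.0%}")    # ~31%
```

However the real cards schedule work, the general idea is the same: a hardware dispatcher only helps if there are enough threads in flight to fill the idle units.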

Another thing: what about power consumption?

Why does the R600 need so much power?

The answer to this problem is easy.

Look at the Intel Polaris:

http://www.computerbase.de/news/hardware/prozessoren/intel/2007/april/idf_intel_626_ghz_polaris_2_teraflops/

They save about 40 percent of the power by using some know-how they call mesochronous clocking. It allows every processor element to have its own frequency, and it makes wiring the components in the chip much easier. The varying frequencies make it a little bit difficult for the elements to communicate with each other, because they aren't synchronous, but Intel solves this problem with small buffer memories.

Apply this to the R600 (including memory power consumption, so in reality it would need a bit more power) and, even without shrinking to 65 nm, you get this result (a rough calculation follows the link below):

R600              => 144 W

Geforce 8800 GTX  => 140 W

Radeon HD 2600 XT => 48 W instead of 80 W (this card is already at 65 nm; at 1,450 MHz it should be double the speed of a Geforce 8600 GTS with a bit more power consumption, at least)

Geforce 8600 GT   => 43 W, Geforce 8600 GTS => 71 W

http://www.3dcenter.org/artikel/2007/04-17.php
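To reproduce the arithmetic behind those figures: the roughly 40 percent saving is the Polaris claim from the article linked above, and the baseline wattages are the letter's own rumoured numbers; the letter never states a baseline for the R600, but 144 W implies a starting point of about 240 W.

```python
# Reproducing the letter's arithmetic: apply the ~40 % power saving claimed
# for Intel's mesochronous clocking to the quoted (rumoured) baseline figures.

SAVING = 0.40

cards = {
    "Radeon HD 2600 XT": 80,   # letter: "instead of 80 Watt only 48 Watt"
    "R600 (implied)":    240,  # 144 W after the saving implies roughly 240 W before
}

for name, watts in cards.items():
    print(f"{name}: {watts} W -> {watts * (1 - SAVING):.0f} W")
# prints 80 W -> 48 W and 240 W -> 144 W, matching the letter's figures
```

Whether that saving, or those baselines, would actually hold for a GPU is of course pure speculation.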

It would be competitive!!!

The Geforce 8800 GTX already has something like what Intel is doing, because its shader and core frequencies are different!!

But ATI doesn't care about it, and that's why an R600 can't work at 1,350 MHz. So the R600 still rocks in its own way; that is the only possibility, because the 512-bit memory controller (or 1,024-bit internally) isn't a limitation, and it isn't a plus to performance at low settings either.

Also, we know nothing yet about the image quality of the R600.

Time will tell... hope for better days...

Thanks for reading.

R600 is intended for future games


________________

My god, the complexity. I love it! Joygasm!

Unified Shaders? Stream Processors? TMUs?

With all I've been reading, it doesn't look good for ATI. I wonder what the benchmarks will say.

Any release date? I wanna read benchmark articles.

:D

~The Dude
________________
