ATI RV770 to have 900 stream processors

Discussion in 'Frontpage news' started by Guru3D News, Jan 3, 2009.

  1. !H'Guru

    !H'Guru Master Guru

    Messages:
    507
    Likes Received:
    0
    GPU:
    Gigabyte RTX 3090OC
    Yeah, that's why a 256mm^2 chip beats yours. ROFLMAO @ Nvidia you mean? Quantity = quality @ AMD camp? Nvidia has a HUGE chip: 572mm^2! But anyway, the companies use different architectures, and both have different strategies. AMD = multiple midrange GPUs on one PCB for high-end; Nvidia = one monolithic chip to beat the rest. That works, but with a chip this size the yields may suffer = high prices/costs. AMD, on the other hand, uses a much smaller chip, so they can get many chips from a wafer with reasonable yields.

    Edit:

    The extra shaders are obviously a backup = better yields. AMD/ATi is smart enough NOT to release a card with those shaders unlocked. That would be pointless, right? The increase would be minimal. We saw the same thing happen @ Nvidia a while back: after the 8800 GTX came the 8800 GTS (almost as fast as the GTX), and then the 9800 GTX = an overclocked 8800 GTS. Weird, right!?
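    The die-size economics argued above can be sketched with the classic die-per-wafer approximation (a toy illustration: the 300 mm wafer size and the edge-loss formula are assumptions, and defect-driven yield differences are ignored entirely):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """Classic approximation: wafer area divided by die area,
    minus an edge-loss term for partial dies around the rim."""
    r = wafer_diameter_mm / 2.0
    gross = math.pi * r * r / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2)
    return int(gross - edge_loss)

# RV770-class die (~256 mm^2) vs. the big monolithic chip (~572 mm^2 as quoted above)
small = dies_per_wafer(256.0)
large = dies_per_wafer(572.0)
```

    Even before defects are counted, the smaller die yields roughly 2.4x as many candidate chips per wafer, which is the cost argument being made here; larger dies also lose proportionally more chips to any given defect density.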
     
    Last edited: Jan 4, 2009
  2. zdrapetr

    zdrapetr Member Guru

    Messages:
    110
    Likes Received:
    0
    GPU:
    Evga GTX 260 55nm
    Last time I checked, GTX 260 >> 4870 1GB.

    Edit: "AMD/ATi is smart enough NOT TO release a card with those unlocked shaders." That's the thing, they're not so smart.

    "Technically the activation of the 100 shaders would not provide any advantages in practice except for GPGPU computing."
    Why the hell do they want to do GPGPU computing?? I thought they don't need PhysX.
     
    Last edited: Jan 4, 2009
  3. !H'Guru

    !H'Guru Master Guru

    Messages:
    507
    Likes Received:
    0
    GPU:
    Gigabyte RTX 3090OC
    Yep, that's right, there's no denying that. But it wasn't twice as fast :), and the chip was more than twice as big (one disabled cluster for the Core 216 compared to the GTX 280). Where I live, all the 260 Core 216s are 30 euros more expensive, so the 4870 1GB remains the best buy.

    Edit:

    Not so smart? Please clarify that for me.
     
    Last edited: Jan 4, 2009
  4. Dupoint

    Dupoint Master Guru

    Messages:
    201
    Likes Received:
    0
    GPU:
    Sapphire 4870 X2 2GB
    That statement alone shows who's the "FANBOY".

    Even with the 180 series driver, performance advantages are marginal on most titles, with the exceptions of Crysis Warhead and Dead Space.

    http://www.anandtech.com/video/showdoc.aspx?i=3462&p=4
     

  5. zdrapetr

    zdrapetr Member Guru

    Messages:
    110
    Likes Received:
    0
    GPU:
    Evga GTX 260 55nm
    Because they'll release a card with no boost in games. They don't support PhysX, so why do they need the extra SPs? Maybe to look even worse than they're looking now in the high end. To beat one card they need to use two GPUs on one PCB, and now they add 100 SPs that do nothing. It's their loss.

    Edit: AnandTech? Are you serious? I hope not. Guru3D is your friend, why not use it?
     
  6. !H'Guru

    !H'Guru Master Guru

    Messages:
    507
    Likes Received:
    0
    GPU:
    Gigabyte RTX 3090OC
    PhysX as we know it is dying. OpenCL is the way!
     
  7. zdrapetr

    zdrapetr Member Guru

    Messages:
    110
    Likes Received:
    0
    GPU:
    Evga GTX 260 55nm
    Last edited: Jan 4, 2009
  8. !H'Guru

    !H'Guru Master Guru

    Messages:
    507
    Likes Received:
    0
    GPU:
    Gigabyte RTX 3090OC
    Oh really? Only advertisement. Usable, but marketing. Just like ATi Stream, btw. DirectX 10.1 AA! That's nice! Tessellation? The new DirectX version supports it! It is a STANDARD. PhysX isn't; OpenCL IS, because it is a STANDARD. So everyone can use them! But that's some time off, Q2 2009 maybe.

    Edit:

    Link
    http://www.anandtech.com/video/showdoc.aspx?i=3488

    Believe me, a standard just works better. Everyone will benefit from standards like DirectX because everyone is able to use them!
     
    Last edited: Jan 4, 2009
  9. Major Melchett

    Major Melchett Ancient Guru

    Messages:
    2,456
    Likes Received:
    0
    GPU:
    R9 280X @ 1100/1500
    You can't beat him, H (it's impossible to beat a fanboy). No point in joining his one-man crusade, so simply ignore him, or at least take it to PMs or something.
     
  10. Anarion

    Anarion Ancient Guru

    Messages:
    13,605
    Likes Received:
    384
    GPU:
    GeForce RTX 3060 Ti
    PhysX itself is definitely not dying. I bet that not many developers are willing to write their own physics engine from scratch (that's quite obvious, otherwise they wouldn't use Havok or PhysX). It's up to NVIDIA whether they want to accelerate PhysX using OpenCL; it shouldn't be a big problem for them. In the end there's zero difference for users. PhysX is not the same thing as CUDA and OpenCL.
     

  11. !H'Guru

    !H'Guru Master Guru

    Messages:
    507
    Likes Received:
    0
    GPU:
    Gigabyte RTX 3090OC
    Yea, you are right :). But it was funny while it lasted.
     
  12. zdrapetr

    zdrapetr Member Guru

    Messages:
    110
    Likes Received:
    0
    GPU:
    Evga GTX 260 55nm
    Only advertisement? OMG, you have never ever played any PhysX game with PhysX enabled. Mirror's Edge, Bionic Commando, Cryostasis, anyone?? I played Gears of War with more effects and a stable framerate because of PhysX; try it, then talk about it... I tried ATI cards and I say no thanks. Price/performance is good, but I'd rather pay more for quality, better driver support and PhysX than for DX 10.1, which is useless. Assassin's Creed used it, then it got patched, so no game supports it. Why do you need it then? OpenCL is supported by Nvidia too, so lol.
     
  13. The Monkey Frog

    The Monkey Frog Member

    Messages:
    47
    Likes Received:
    0
    GPU:
    Leadtek 9600GT 512MB
    Well said.
    But I find PhysX to be hype atm (all the games that use it are spruced up by Nvidia), and the best physics engine I have seen has been in Crysis.
     
  14. zdrapetr

    zdrapetr Member Guru

    Messages:
    110
    Likes Received:
    0
    GPU:
    Evga GTX 260 55nm
    "You can't beat him H"
    Yep, you can't beat the truth.
     
  15. !H'Guru

    !H'Guru Master Guru

    Messages:
    507
    Likes Received:
    0
    GPU:
    Gigabyte RTX 3090OC
    But as explained in the article, OpenCL is basically an expansion of the "tracks" for the train (the GPU). That doesn't mean developers have to build their own physics engine from scratch. OpenCL is a helping hand for developing parallel computing code, which means it becomes possible to divert traffic from the PhysX or Havok engines to the GPU.
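    The pattern being described, offloading a physics step as data-parallel work, can be sketched in plain Python (a toy illustration of the kernel/work-item model, not actual OpenCL code; the function names and numbers are made up):

```python
# Toy illustration of the data-parallel pattern OpenCL exposes:
# one small "kernel" function is applied independently to every
# element, so a runtime can map it onto thousands of GPU threads.

def integrate(pos, vel, dt, gravity=-9.81):
    """Kernel-style per-particle update: advance one particle by dt."""
    vx, vy = vel
    vy += gravity * dt                      # apply gravity to velocity
    x, y = pos
    return (x + vx * dt, y + vy * dt), (vx, vy)

def step(particles, dt):
    """Host-side dispatch: run the kernel over the whole buffer.
    On a GPU, each iteration would be an independent work-item."""
    return [integrate(p, v, dt) for p, v in particles]

# Two particles as (position, velocity) pairs.
particles = [((0.0, 10.0), (1.0, 0.0)),
             ((5.0, 20.0), (0.0, 2.0))]
particles = step(particles, dt=0.1)
```

    Because each particle's update depends only on its own state, a physics engine can hand this loop to any OpenCL-capable device, which is the vendor-neutral "tracks" idea above.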
     

  16. The Monkey Frog

    The Monkey Frog Member

    Messages:
    47
    Likes Received:
    0
    GPU:
    Leadtek 9600GT 512MB
    :3eyes:
    On the dx10.1 part ATI are trying to be in the future and keep up with software and API changes.
     
  17. zdrapetr

    zdrapetr Member Guru

    Messages:
    110
    Likes Received:
    0
    GPU:
    Evga GTX 260 55nm
    But Nvidia does support OpenCL through CUDA too, so what's the point?

    SIGGRAPH ASIA 2008—SINGAPORE—DECEMBER 9 2008—NVIDIA Corporation today announced its full support for the newly released OpenCL 1.0 specification from the Khronos Group. OpenCL (Open Computing Language) is a new compute API that allows developers to harness the massive parallel computing power of the GPU. The addition of OpenCL is another major milestone in the GPU revolution that gives NVIDIA developers another powerful programming option.

    NVIDIA kicked off the GPU computing revolution with the introduction of NVIDIA® CUDA™, its massively parallel computing ISA and hardware architecture. CUDA was designed to natively support all parallel computing interfaces and will seamlessly run OpenCL. Enabled on over 100 million NVIDIA GPUs, CUDA has unleashed unprecedented performance boosts across a wide range of applications and provides a huge installed base for the deployment of compute applications using OpenCL. With support for other industry standard languages such as C, Java, Fortran and Python, only the CUDA architecture provides developers with a choice of programming environments to aid the rapid development of compute applications on the GPU.

    First introduced with the NVIDIA® GeForce® 8800 GPU and standard across all NVIDIA’s modern GPUs, CUDA is the foundation of NVIDIA’s parallel computing strategy. CUDA has had a tremendous reception from the world’s research community with scientists seeing up to a 20-200x speed-up in their applications with CUDA over a CPU. The CUDA architecture is being built into a wide range of computing systems from supercomputers and workstations to consumer PCs, enabling more than 25,000 developers to actively develop on CUDA today.

    “The arrival of OpenCL is fabulous news for the computing industry and NVIDIA is delighted to be playing a highly active role in the establishment of a new standard that promotes computing on the GPU,” said Manju Hegde, general manager of CUDA at NVIDIA. “We are delighted that Apple has helped spearhead OpenCL. Their recognition that the GPU will now play an essential role in consumer applications is a significant milestone in the history of computing.”

    Vice president of embedded content at NVIDIA, Neil Trevett also holds the position of chair of the OpenCL working group at Khronos.

    “The OpenCL specification is a result of a clearly recognized opportunity from leaders like NVIDIA to grow the total market for heterogeneous parallel computing through an open, cross-platform standard,” said Trevett. “NVIDIA will continue to be very active in the OpenCL working group to drive the evolution of the specification and will support OpenCL on all its platforms, providing developers an additional way to tap into the awesome computational power of our GPUs.”
     
  18. zdrapetr

    zdrapetr Member Guru

    Messages:
    110
    Likes Received:
    0
    GPU:
    Evga GTX 260 55nm
    Yeah, like they did with the 2900 XT and its awesome 320 SPs and tessellator, which ended up way slower than the 8800 GTX.
    They just can't compete with Nvidia on performance. That's why they keep lowering prices, so someone can buy them.
     
  19. The Monkey Frog

    The Monkey Frog Member

    Messages:
    47
    Likes Received:
    0
    GPU:
    Leadtek 9600GT 512MB
  20. !H'Guru

    !H'Guru Master Guru

    Messages:
    507
    Likes Received:
    0
    GPU:
    Gigabyte RTX 3090OC
    That's the whole thing: OPEN Computing Language. Everyone can use it, even with a Chrome 400 series card in your PC. But I didn't say Nvidia didn't support it; I said everything supports it! And that is exactly what consumers want. That Nvidia is pushing it is a good thing! But GPGPU was seen as early as the ATi 5xx cores; ATi was the first to demonstrate GPGPU computing. All this is a nice development.
     
