Quadro FX 5500 1GB

Discussion in 'Videocards - NVIDIA GeForce' started by godling, Dec 29, 2006.

  1. godling

    godling Banned

    Messages:
    17
    Likes Received:
    0
    GPU:
    Dual XFX 8800GTX 768MB (x2)
    Hi,

    I'm wondering if an nVidia Quadro FX 5500 is a better card than the 8800 GTX.

    What are Quadros used for, and why aren't they benchmarked as frequently as GeForce cards?

    I want to buy a Quadro card if it's the best. How's the performance?

    Thanks.
     
  2. Insomniac34

    Insomniac34 Ancient Guru

    Messages:
    1,903
    Likes Received:
    0
    GPU:
    GTX 460 768mb (800/1900)
    "The NVIDIA Quadro is a series of AGP and PCI Express graphics cards created by the NVIDIA Corporation. The cards are designed to accelerate CAD (computer-aided design) and DCC (digital content creation) and are usually found in workstations. This is in contrast to the GeForce line which is specifically targeted at computer gaming." - Wikipedia

    In other words, stick with the 8800 GTX. I'd actually be interested in seeing how well these do in games.
     
  3. DaddyD302

    DaddyD302 Guest

    Messages:
    1,574
    Likes Received:
    6
    GPU:
    Aorus Master 4090
    You must be kidding me, right? The 8800GTX will smoke that card. Quadro cards are meant for professionals, not gamers; that's why you don't see too many reviews of them. They also cost more.
     
  4. Decane

    Decane Ancient Guru

    Messages:
    5,195
    Likes Received:
    21
    GPU:
    GTX 1060 6GB
    Quadros are professional graphics cards. They are not consumer graphics cards, and they generally perform much worse in games than their corresponding consumer cards [that is, the consumer cards built on the same architecture - for example, the G71-based Quadros vs. the "real" G71 GPUs, the 7900 series].
     

  5. godling

    godling Banned

    Messages:
    17
    Likes Received:
    0
    GPU:
    Dual XFX 8800GTX 768MB (x2)
    But how come they cost so much more, then? More bucks = sweeter performance, right?
     
  6. Clements

    Clements Master Guru

    Messages:
    903
    Likes Received:
    0
    GPU:
    Geforce GTX 670
    You are paying a price premium for the cherry-picked parts, the certifications, and the specialised workstation-grade drivers with optimisations for CAD/DCC applications. These cards are targeted at professionals and businesses, not the average Joe.

    From a gaming-only perspective, a Quadro will deliver almost the same performance as its desktop equivalent, but at a much greater price. There is no G80-based Quadro card out yet, so obviously the 8800GTX will be about twice as fast as the G71-based Quadro FX 5500 in games.
     
  7. Recrofne

    Recrofne Ancient Guru

    Messages:
    2,907
    Likes Received:
    1
    GPU:
    GeForce GTX 560 Ti
    They are geared more toward professional rendering and have several unique features. Here are a few from the Leadtek site.

    Anti-aliased points and lines for wire frame display

    A unique feature of Quadro GPUs is hardware support for anti-aliased lines, which has nothing in common with the GeForce's full-scene anti-aliasing. It works for lines (but not for shaded polygons) without sacrificing system performance or consuming extra video memory for over-sampling. Since this feature is standardized by OpenGL, it is supported by most professional applications.
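
    For reference, this is roughly what asking for anti-aliased lines looks like in plain OpenGL 1.x. It's my own sketch, not code from the Leadtek page; the calls are standard OpenGL, and the Quadro simply runs them in hardware.

    /* Minimal sketch: request hardware line anti-aliasing for a wireframe pass. */
    #include <GL/gl.h>

    static void draw_wireframe_edge(void)
    {
        glEnable(GL_LINE_SMOOTH);                /* ask for anti-aliased lines */
        glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);  /* prefer quality if the driver allows it */
        glEnable(GL_BLEND);                      /* line smoothing needs blending */
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        glBegin(GL_LINES);                       /* one wireframe edge as an example */
        glVertex3f(0.0f, 0.0f, 0.0f);
        glVertex3f(1.0f, 1.0f, 0.0f);
        glEnd();
    }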


    OpenGL logic operations

    Another unique feature of Quadro GPUs is hardware support for OpenGL logical operations, which are applied as the last step of the rendering pipeline before the contents are written to the frame buffer. For example, workstation applications can use this functionality to mark a selection with a simple XOR operation. When this operation is done in hardware, the significant performance loss that a GeForce adapter would cause does not happen. OpenGL itself can be used on either consumer or workstation adapters.
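
    A rough sketch of that XOR selection trick in standard OpenGL (my own example, not from the Leadtek page) - the same calls work on a GeForce, they just tend to hit a slower path there:

    /* Minimal sketch: draw a rubber-band selection rectangle with an XOR logic op,
       so drawing it a second time restores the original pixels underneath. */
    #include <GL/gl.h>

    static void draw_xor_selection(float x0, float y0, float x1, float y1)
    {
        glEnable(GL_COLOR_LOGIC_OP);   /* enable per-fragment logic operations */
        glLogicOp(GL_XOR);             /* XOR the incoming color with the frame buffer */

        glBegin(GL_LINE_LOOP);         /* the selection rectangle outline */
        glVertex2f(x0, y0);
        glVertex2f(x1, y0);
        glVertex2f(x1, y1);
        glVertex2f(x0, y1);
        glEnd();

        glDisable(GL_COLOR_LOGIC_OP);  /* back to normal color writes */
    }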

    The most common applications for GeForce adapters are full-screen OpenGL games. CAD applications work with OpenGL windows in combination with 2D elements.


    Up to eight clip regions (GeForce supports one)

    A typical workstation application contains 3D and 2D elements. While the viewports display windowed OpenGL content, menus, rollups and frames are still 2D elements, and they often overlap each other. Depending on how they are handled by the graphics hardware, overlapping windows may noticeably affect visual quality and graphics performance. When windows are not overlapped, the entire contents of the color buffer can be transferred to the frame buffer in a single, continuous rectangular region. However, if windows do overlap, the transfer of data from the color buffer to the frame buffer must be broken into a series of smaller, discontinuous rectangular regions. These rectangular regions are referred to as "clip" regions.

    GeForce hardware supports only one clip region, which is sufficient for displaying menus in OpenGL. Quadro GPUs support up to 8 clip regions in hardware, maintaining performance in the normal workflow of CAD/DCC applications.

    Note: I personally notice this all of the time when doing things with my Geforce.


    Hardware accelerated clip planes

    Clip planes allow specific sections of 3D objects to be displayed so that users can look inside solid objects when visualizing assemblies. For this reason, many professional CAD/DCC applications provide clip planes. The GPUs of the Quadro family support clip-plane acceleration in hardware - a significant improvement in performance when clip planes are used in professional applications.
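
    Again, a rough sketch of what the application-side code looks like in standard OpenGL (my own example); it's identical on GeForce and Quadro, the difference is whether the driver accelerates it:

    /* Minimal sketch: a section cut that clips away everything with x > 0
       so the interior of an assembly becomes visible. */
    #include <GL/gl.h>

    static void enable_section_cut(void)
    {
        /* Plane equation a*x + b*y + c*z + d = 0 in eye coordinates; points where
           the expression is >= 0 are kept, so {-1, 0, 0, 0} keeps x <= 0. */
        GLdouble plane[4] = { -1.0, 0.0, 0.0, 0.0 };

        glClipPlane(GL_CLIP_PLANE0, plane);
        glEnable(GL_CLIP_PLANE0);
        /* ... draw the assembly here; geometry with x > 0 is clipped away ... */
    }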


    Optimization on Memory usage for multiple graphics windows

    Another feature offered by the GPUs of the Quadro family is Quadro memory management optimization, which efficiently allocates and shares memory resources between concurrent graphics windows and applications. In many situations, this feature directly affects application performance and offers considerable benefits over the consumer-oriented GeForce GPU family.

    The graphics memory is used for the frame buffer, textures, caching and data. NVIDIA's unified memory architecture allocates memory resources dynamically instead of reserving a fixed size for the frame buffer. Instead of wasting unused frame buffer memory, UMA (Unified Memory Architecture) allows it to be used for other buffers and textures. When applications require more memory for quad-buffered stereo or full-scene anti-aliasing, managing resources efficiently becomes an even more important issue.

    See the Leadtek site for the source and more information.
     
