
Old interview I once did with Scott Sellers, one of the founders of 3Dfx

Discussion in 'Videocards - 3dfx' started by Christian Klass, Sep 5, 2019.

  1. Christian Klass

    Christian Klass New Member

    Messages:
    5
    Likes Received:
    10
    GPU:
    GeForce GTX 1070
    'When can we expect accelerated ray tracing with a reasonable iteration depth capable of delivering sustained 30fps?' some young nerd had asked him.

    His answer was: 'Don’t hold your breath on this one. Ray tracing algorithms are orders of magnitude more complex than even today’s most advanced hardware. Plus, the ray tracing algorithms have numerous problems themselves that make it very difficult to integrate into an entertainment focused product.'

    It was September 1998. Wait, that was 21 years ago. Wow... It was my younger journalistic self asking Scott Sellers, one of the founders of 3Dfx Interactive (whose assets were later bought by Nvidia). His answers to my many questions were very friendly - and very good. He really made an effort. Even today I still feel honored. Strange, how these small things still have an influence.

    Looking back at it I can only say: Man, how time has passed! What the industry had achieved back then was already amazing, and look where we are now! Can't wait to see what will happen over the next few years. :)

    If anyone is interested I can post a longer excerpt of the interview. It was like 10 Word pages full of answers; it's been a while since I read through them. They are no longer online at https://www.golem.de/9809/1595.html, but I still have the file.

    All the best
    Christian
    (I once founded Golem.de, now I'm working in a GPU-Benchmarking company)
     
  2. insp1re2600

    insp1re2600 Master Guru

    Messages:
    721
    Likes Received:
    233
    GPU:
    RTX 2080TI OC H20
    I'd enjoy reading it
     
    Christian Klass likes this.
  3. CrunchyBiscuit

    CrunchyBiscuit Master Guru

    Messages:
    255
    Likes Received:
    33
    GPU:
    AMD Radeon HD6950 2GB
    Me too!
     
    Christian Klass likes this.
  4. Christian Klass

    Christian Klass New Member

    Messages:
    5
    Likes Received:
    10
    GPU:
    GeForce GTX 1070
    OK. Here you go. Beware, it's a lot :) I have to split it into several postings, as the limit is 20k characters per posting.

    Interview with Scott Sellers, 3Dfx Interactive, as it came from the inbox. It was published in a more polished form during September 1998 on GNN (before it became Golem.de).

    ----- Start Part 1/3 -----

    > > - What's your personal background and what do you do at 3Dfx? How’s the mood at 3Dfx?

    I was originally from SGI working on midrange and entry products. I then moved to a spinoff of SGI called Pellucid. Our goal was to bring cost effective graphics and RISC techniques to the PC market. I believe we were a little ahead of our time. Pellucid was purchased by Media Vision where we continued to develop high performance graphics products. But Media Vision had a different vision of graphics, so Gary Tarolli, Ross Smith and I created 3Dfx to focus on 3D products for entertainment.

    The mood and attitude at 3Dfx is one of the most dynamic I have ever been in. Our goal is to create very high value products that do what we say they do. The graphics market is so full of hype and misdirection that we have set out to create a solid series of products. There is a lot of satisfaction with our current success and the acceptance by consumers of our product that is tempered with the need to move our product line forward.


    > > - Micron/Rendition recently unveiled the "Socket X". Is this also an
    > > option for 3Dfx?

    Socket X is a proposal from Rendition (recently purchased by the memory vendor Micron) that standardizes the pinout of graphics accelerators that incorporate integrated DRAM. While it’s a worthy goal to standardize the device footprint for PC manufacturers, our great concern is that it hinders innovation in graphics. We are now forced into a specific package where there is no flexibility. Also, the memory requirements of advanced functions like 3D or MPEG (at HDTV resolutions) require ever greater amounts of memory that make integrated memory prohibitive. I expect the required amount of frame buffer memory for performance graphics to be 16MB late this year moving to 32MB next year. Even non-integrated commodity memory is limited to 16Mbit density (or 2MB per chip) at this point. So we need 8 times the memory in a chip that also needs a tremendous number of graphics gates. Our OEMs have not required a Socket X solution at this point and we will continue to monitor demand from OEMs. It seems there is a tremendous mismatch between the goals of Socket X and the available technology.

    > > - With quickly increasing texel and triangle throughputs, memory
    > > bandwidth is becoming more and more of a serious bottleneck. What do you
    > > think about future memory technologies? What about on-chip memory?

    Memory bandwidth is by far the most important determinant of performance. For instance, the original Riva128 supported a 128-bit memory interface running at 100MHz. Voodoo Graphics also supports a 128-bit memory interface, running at 50MHz. Yet in all important performance measurements, Voodoo Graphics comes out ahead in the performance race. It's how you use memory, not just the clock rate and how many bits wide the bus is. The future path of memory is now becoming clear. Synchronous DRAM and SGRAM have now taken over the mainstream graphics market and that will continue into next year. It's not clear at this point when Direct RDRAM will phase into the market and whether it will have a compelling advantage over standard commodity memory types. Again, this is an area we continue to follow closely.
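    To put rough numbers on the comparison Sellers makes: the bus widths and clocks are from his answer above, while the peak-bandwidth arithmetic is my own illustrative sketch (real-world throughput is lower and, as he says, depends on how the memory is used):

```python
# Peak theoretical memory bandwidth = bus width (in bytes) * clock rate.
# Clock rates and bus widths are those quoted in the answer above.

def peak_bandwidth_mb_s(bus_bits: int, clock_mhz: float) -> float:
    """Peak bandwidth in MB/s for a simple synchronous memory bus."""
    return (bus_bits / 8) * clock_mhz

riva128 = peak_bandwidth_mb_s(128, 100)  # 1600.0 MB/s peak
voodoo = peak_bandwidth_mb_s(128, 50)    #  800.0 MB/s peak

# Voodoo Graphics won benchmarks despite half the peak rate,
# which is Sellers' point about usage mattering more than raw numbers.
print(riva128, voodoo)
```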


    On-chip memory is one of those technologies that has incredible potential if only the cost could be brought down. With internal memory, we can control the organization of memory and extract the highest performance possible. But with memory requirements dramatically increasing and commodity DRAM prices falling, the cost gap between external and integrated memory has increased. If you look at a standard mid-range graphics board, it has 8MB of frame buffer memory and a graphics chip with about 400K gates. That’s 64Mbits of memory and the equivalent of another 1 million transistors. A die of this size would be impossible to manufacture as a cost effective product with today’s process geometry. And that’s just today. 3D will consume an increasing number of gates and with multiple, high resolution buffers, 3D data will easily consume 16MB or even 32MB of memory. On-chip memory has proven to be the most valuable at this point in specialized applications like portable computers and probably soon in very basic systems with simple 3D and no more than 4MB of frame buffer.
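    A quick sketch of why 3D buffers eat memory so fast; the resolution and bit depths below are illustrative examples of the era's hardware, not figures from the interview:

```python
def framebuffer_bytes(width: int, height: int,
                      bytes_per_pixel: int, num_buffers: int) -> int:
    """Total bytes for num_buffers full-screen buffers (e.g. front, back, Z)."""
    return width * height * bytes_per_pixel * num_buffers

# 800x600, 16-bit color, double buffered, plus a 16-bit Z buffer:
total = framebuffer_bytes(800, 600, 2, 3)
print(round(total / (1024 * 1024), 2))  # ~2.75 MB before a single texture is stored
```

    Move to 32-bit color and a higher resolution and the totals quickly reach the 16MB to 32MB Sellers predicts.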


    > > - How do you estimate the importance of the AGP bus ? Is it more than a marketing hype?

    The best answer to this question is another question. After more than a year on the market, what game has shown the promise of AGP? The answer is none. The fastest board on the market is still a PCI-based Voodoo2 board. The reason for this is that most games are designed to use local texturing and the bus is simply not the bottleneck in most games. With relatively low triangle rates and simple textures, the PCI bus is fast enough for most current games. The true bottleneck is the ability of the processor to produce triangles and pass these triangles to the graphics chip. The current bus speed is more than adequate. This doesn’t mean that AGP is a bad technology, it's just that games that require the bandwidth of AGP have not been designed yet. Also, the hype around texturing out of main memory is definitely marketing hype. If you do the bandwidth calculations for the entire system, the current PC architecture cannot possibly support full performance games with many megabytes of textures per frame. In fact, games that use AGP texturing put more load on the CPU since the CPU and texture traffic collide at main memory. This contention slows the processor.
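    The bandwidth calculation Sellers alludes to can be made concrete. The bus peak rates below are the standard published figures for PCI (32-bit/33MHz) and AGP 1x/2x; the per-frame arithmetic and the 30fps target are my own illustration:

```python
# How many MB of texture data each bus could move per frame at 30 fps,
# assuming (unrealistically) the whole bus is free for texture traffic.
BUS_PEAK_MB_S = {"PCI": 132, "AGP 1x": 266, "AGP 2x": 533}
FPS = 30

for bus, peak in BUS_PEAK_MB_S.items():
    print(f"{bus}: {peak / FPS:.1f} MB/frame")

# Even AGP 2x tops out below ~18 MB/frame in theory, and in practice the
# CPU competes for the same main-memory bandwidth, as Sellers points out.
```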

    > >
    > > - Today there are quite a few companies out there that produce 3D
    > > hardware for the consumer market. Do you think that this number
    > > will decrease in the future? And why do you think you will start to
    > > dominate the market next year?

    There are many companies that produce 3D graphics chips but there are only a handful that have received any degree of attention from game developers. 3Dfx Interactive is the most popular development platform for new games and at E3 (Electronic Entertainment Exposition) in Atlanta we showed over 160 new titles on Voodoo2. Our core business is serving the game playing consumer in both our retail focused products and our new OEM product, Voodoo Banshee. That is what makes our company and our product special. All the other chip companies produce Windows accelerators that support the basic functions of 3D but many games will not run on this hardware. The best example is Final Fantasy VII. This is one of the most popular titles of all time yet it won’t run on nVidia’s new TnT chip because it lacks support for palettized textures. There are several companies having difficulty in the graphics market and it does point to the fact that if you don’t have a special skill or advantage, the current market is going to be very difficult.

    > >

    > > - Today, consumer 3D accelerators are mainly used for games. Do you

    > > think that will change in the future? Is 3Dfx also working with

    > > developers outside the game industry?

    We are working with developers outside the game industry but the main application for 3D is 3D games. Applications like Microsoft’s Chrome will drive more mainstream use of 3D but the real “killer” application outside games has yet to emerge. 3Dfx produced a technology demo showing our Web site implemented as a 3D environment. Navigating data or the Web is a very good application for 3D but Web browsers and Web data that support 3D navigation are going to take some time to develop.

    > >
    > > - Currently the worst problem of high end 3D accelerators are slow
    > > CPUs which can't feed the accelerators with data fast enough. Do you
    > > expect the CPUs to catch up to the accelerators sooner or later or
    > > do you think the chip manufacturers have to solve the problem
    > > themselves?

    Our goal is to always introduce graphics accelerators that are ahead of current CPUs. That way when you buy one of our 3D accelerators, the graphics accelerator will keep pace with the growth in CPU power. The best example is Voodoo Graphics. Voodoo Graphics was introduced when the Pentium 90 was the mainstream CPU. We kept pace with the CPU through the Pentium II. As the CPU got faster, the game got faster because the graphics engine did not limit performance. Some part of the system is going to be the bottleneck since each has its performance level. Our strategy has been to produce very fast graphics hardware that runs a large library of games. This strategy allows us to produce one graphics engine that will offer competitive performance over a long time period. Voodoo Graphics is still selling in the stores after two years on the market and it is still one of the best game cards on the market.


    > > How much of a performance gain do you get from supporting 3D specific instructions like 3DNow or KNI?

    The specific instructions are a great advance in the development of the CPU. Rather than ignore an important fundamental application like 3D, CPU developers are recognizing that the CPU must adapt to the needs of the application. 3DNow in particular has shown performance gains in the 25% range over applications running on a general purpose floating point unit. The added benefit is that these CPUs tend to be less expensive than the Intel equivalent at the same clock speed.

    > > - What about geometry acceleration?

    This is an interesting area that we continue to track. The problem with geometry acceleration is pointed out by your previous question. Intel, AMD and others have very aggressively pushed the state of the art in CPUs including designing in specific support for 3D geometry calculations. We would have to keep pace with the CPU designers. You will see support for more of the 3D pipeline but we’ll have to wait to talk about the specifics.

    > >
    > > - What do you think about supporting procedural textures and texture
    > > manipulation (like spherical projection or noise functions before
    > > applying the texture to a flat surface) in hardware? How about
    > > 'procedural fog' for a better simulation of rain, snow and similar
    > > 'noisy' effects? The Savage 3D already seems to have hardware
    > > support for these features, BTW.

    I am not aware of any hardware providing direct support for the functions you describe. There are many texturing effects, particularly multipass effects, that make these effects easier. For instance, our fog table allows fog zones and increasing and decreasing fog. The table can also be reprogrammed on a frame by frame basis with very little overhead. Fast hardware and a good connection to the CPU can produce these effects. Our goal in designing hardware is to not make too many decisions for the developer. We give the developer a set of basic tools upon which they build graphics effects. It's always sexier in the press to state that so and so chip has integrated blocks for doing every high level function but you then force the developer into one development model. 3D graphics and 3D games are all about creating new and wonderful effects, not showing off hardware integration.

    I urge you to cast a very skeptical eye on many of the features that Windows accelerator companies claim to have “integrated” into their chip. In most cases, this integration is actually minimal support or a “redefinition” of the feature. Trilinear filtering is the best case. Any chip doing LOD dithering claims trilinear filtering. We used to call it LOD dithering since there is a difference between the two. But a well known chip company “redefined” trilinear to include the LOD dithering which is a poor approximation of trilinear.
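    The distinction Sellers draws can be sketched in a few lines. True trilinear filtering blends two adjacent mip levels per pixel; LOD dithering instead picks a single mip level per pixel via a dither pattern, so the levels only average out over an area. The toy `sample_mip` texture and the 2x2 dither pattern below are my own illustration, not from the interview:

```python
def trilinear(sample_mip, lod):
    """True trilinear: blend adjacent mip levels by the fractional LOD."""
    lo, frac = int(lod), lod - int(lod)
    return sample_mip(lo) * (1 - frac) + sample_mip(lo + 1) * frac

def lod_dither(sample_mip, lod, x, y):
    """LOD dithering: pick ONE mip level per pixel using a 2x2 dither pattern."""
    threshold = ((x & 1) * 2 + (y & 1) + 0.5) / 4  # 0.125 .. 0.875
    lo, frac = int(lod), lod - int(lod)
    return sample_mip(lo + (1 if frac > threshold else 0))

# Toy "texture" where mip level n samples to the value n.
mip = lambda level: float(level)

print(trilinear(mip, 1.5))  # exact blend: 1.5
# Dithering returns 1.0 or 2.0 per pixel; only the 2x2 average is 1.5:
print([lod_dither(mip, 1.5, x, y) for x in (0, 1) for y in (0, 1)])
```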

    > > - A lot of people seem to have temperature related problems even
    > > with one Voodoo 2 card (not overclocked). When Voodoo 2 was released
    > > you stated the chips wouldn't run as hot as the original Voodoo chips
    > > and so there wouldn't be as many heat related problems. What kind of
    > > reliability tests did you perform prior to the release of the reference
    > > boards? Did you test the cards in some of today's cheap mini and
    > > midi tower cases which seem to be designed to look cool rather than
    > > keep their insides cool?

    I am not aware of the problems you describe. The Voodoo2 components are tested at temperatures that exceed those possible inside a standard PC and we test specially manufactured Voodoo2’s that span the range of manufacturing variations. As we have always stated, boards with Voodoo2 must use high quality memory and if there is a problem with the memory components on the board, it will appear as a problem with the graphics.

    > >
    > > - You stated that you expect the next big technological advancements
    > > on the pixel level, esp. anti aliasing without performance loss and
    > > high quality lighting in hardware. You also seem to expect the triangle
    > > count to increase dramatically in the near future.
    > > - All publicly known algorithms require either a lot of accelerator
    > > power and memory (e.g. the super-sampling method the PowerVR SG
    > > uses)
    > > or a lot of CPU power and a fitting application design (like the edge
    > > anti aliasing supported by Voodoo and Voodoo 2 which often causes
    > > ugly visual artifacts if used on transparent/translucent polygons). So,
    > > if your goal is anti aliasing without performance penalty you
    > > either have to develop a new algorithm without the current drawbacks or
    > > you'd have to put a lot of transistors on a chip which are useless
    > > if you turn off AA but at least don't get in the way (not very
    > > economical). Could you please explain the current algorithms
    > > used for AA to our readers who are not familiar with the term and can
    > > you give us a more detailed hint what you are really up to?

    There are two main methods used in standard graphics hardware. Supersampling, which you mentioned earlier, involves rendering a scene at very high resolutions and then filtering the result to a lower resolution. I also hesitate to give the PowerVR chip credit since NEC and Videologic have once again promised hardware and then failed to deliver. This method is also not limited to the PowerVR architecture. The drawback with this method is the tremendous memory requirement of the supersampling method; at minimum 4 times the memory. The minimum method, for instance, of rendering at 1600x1200 and then filtering to 800x600 gives only 4 supersamples, which is not very high quality. The other common method involves drawing the entire scene normally without doing any special graphics processing until the very last step. After the scene is fully rendered, the software detects edges in the scene and performs a blend to remove the aliasing artifacts. The drawback to this method is that it requires additional CPU overhead and can cause artifacts on alpha blended triangles. The benefit is that this technique works at any resolution without the high memory penalty of supersampling. In Voodoo Banshee, we can do both methods but at this point, frame rate is the goal and each causes a slowdown in game performance. As for what we are really up to, we are going to provide true full scene antialiasing. The details are going to have to wait. Sorry.
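    A minimal sketch of the supersampling method described above: render at 2x the resolution in each axis (hence 4x the memory), then box-filter down. The "framebuffer" here is just a list of lists of brightness values; the scene and filter are my own toy example:

```python
def downsample_2x(hi_res):
    """Box-filter a 2x-supersampled framebuffer: each output pixel is the
    average of a 2x2 block, which smooths jagged edges into gray steps."""
    h, w = len(hi_res), len(hi_res[0])
    return [
        [(hi_res[2 * y][2 * x] + hi_res[2 * y][2 * x + 1] +
          hi_res[2 * y + 1][2 * x] + hi_res[2 * y + 1][2 * x + 1]) / 4
         for x in range(w // 2)]
        for y in range(h // 2)
    ]

# A hard black/white diagonal edge rendered at 2x resolution...
hi = [[1, 1, 1, 0],
      [1, 1, 0, 0],
      [1, 0, 0, 0],
      [0, 0, 0, 0]]
# ...filters down to intermediate grays along the edge:
print(downsample_2x(hi))  # [[1.0, 0.25], [0.25, 0.0]]
```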

    > >
    > > - You said the currently supported shading methods are not very useful
    > > for creating realistic images forcing the developers to implement
    > > light map based approximations. There are more accurate methods
    > > for this like Phong shading but these are costly to implement in
    > > hardware and require additional steps in the geometry setup stage which is
    > > not accelerated by the current crop of 3D cards (except the Conspiracy).
    > > Can you give us a more detailed description of what you meant when
    > > you were talking about pixel oriented lighting? Are you planning to
    > > address this topic with your next generation chipset? And even if
    > > you release a chip with photorealistic lighting capabilities do you
    > > expect the developers to use it if they have to implement light map based
    > > lighting to support other accelerators? Since you expect the triangle
    > > counts to increase a lot don't you think the primitive shading
    > > methods are sufficient once triangles are small enough (a 640x480 game at
    > > 60fps with a pixel coverage of 5 per triangle would require 3.7 million
    > > triangles per second which is below the peak rate of the Voodoo 2
    > > and other upcoming accelerators)? Wouldn't it make more sense to move
    > > the geometry setup to the 3D chips and to start supporting algorithmic
    > > object and surface descriptions?

    The problem with the current lighting models present in Direct3D and OpenGL is that they were developed primarily for the professional CAD markets, and are not necessarily appropriate for lighting in games which are more focused on “special” effects. Lighting is certainly the most compute intensive of the top part of the geometry pipeline, and we’re always investigating ways to offload more and more of the CPU requirements down to the graphics chip.
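    The triangle-rate figure in the question above checks out; the arithmetic is simply pixels per frame divided by pixels per triangle, times the frame rate (all numbers from the question itself):

```python
pixels_per_frame = 640 * 480   # 307,200 pixels at 640x480
pixels_per_triangle = 5        # average coverage assumed in the question
fps = 60

triangles_per_second = pixels_per_frame / pixels_per_triangle * fps
print(triangles_per_second)  # 3686400.0, i.e. the ~3.7 million/s cited
```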

    > > - When can we expect accelerated ray tracing with a reasonable
    > > iteration depth capable of delivering sustained 30fps? 8)

    Don’t hold your breath on this one. Ray tracing algorithms are orders of magnitude more complex than even today’s most advanced hardware. Plus, the ray tracing algorithms have numerous problems themselves that make it very difficult to integrate into an entertainment focused product.

    > > - Would it have been possible to integrate the Voodoo 2 chipset into one
    > > chip? Since the Banshee integrates the Voodoo 2 3D core with a
    > > sophisticated 2D solution wouldn't it be possible to replace the 2D
    > > part with a second texture unit? Wouldn't this allow much cheaper
    > > and smaller Voodoo 2 and affordable single card Voodoo 2 SLI boards?

    The problem with integration is that cost curves in semiconductor manufacturing are not linear. A chip that is twice the size is not simply double the cost. It is 3, 4 or 5 times the cost. The other challenge with integration in this product is the number of pins. With Voodoo Banshee we moved to a BGA package to accommodate all the pins. When Voodoo2 was introduced, BGA packages were not as economical and, even today, there is a cost difference between a PQFP and BGA. While it's true that single package solutions can be cheaper, again you can look at Voodoo Graphics. This board is now priced below $100US even though the graphics chipset is two chips. The most important factor is not how many packages, but what is the most cost effective technology choice at the time the product is introduced and through its life.

    ---- End of Part 1/3 ----
     
    CrunchyBiscuit likes this.

  5. Christian Klass

    Christian Klass New Member

    Messages:
    5
    Likes Received:
    10
    GPU:
    GeForce GTX 1070
    Interview with Scott Sellers, 3Dfx Interactive, in September 1998.

    ---- Start Part 2/3 ----
    > > - Do you plan to support anisotropic filtering in the future? Unreal
    > > for example shows a lot of texture aliasing on the ground of large
    > > outdoor areas even if you force Glide to use its semi-trilinear filtering.
    > > - You (3Dfx) seem to believe that the quite popular region based
    > > rendering (tile based like the PowerVR family and the Savage 3D or scanline
    > > based like the Oak Warp 5 (RIP)) are not a good idea because of the
    > > setup overhead introduced by having to sort the polygons into the
    > > regions. At least one upcoming chip (according to its more or less rumored
    > > specs) has moved the sorting to the accelerator, making the tile based
    > > architecture transparent for the CPU. Regarding this, why do you
    > > still think the region based architecture is inferior to the conventional
    > > method?

    Let’s address some competitive mis-information first. To our knowledge, the Savage3D is not a tile renderer in the same sense as PowerVR. Savage3D is a standard immediate mode renderer that has a tile based organization of frame buffer memory. We have supported this since the beginning in Voodoo Graphics and it’s a common technique used in high end 3D hardware. Second, the tile based rendering technique is not that popular since it's only used in two architectures, the PowerVR and Oak chips, both of which have failed to keep up with immediate mode renderers. PCX2 has never managed to catch Voodoo Graphics in performance. All the other mainstream graphics chips in the PC graphics world are immediate mode triangle renderers. I would say that tile based schemes are fairly unpopular. As triangle counts increase, tile based renderers, which have very high fill rate once the triangles are sorted, encounter corner cases that don’t take advantage of their strength in fill rate. One of the best tests for the viability of various rendering methods is to look at high end hardware. Vendors with tile based renderers have predicted that this method of rendering will eventually dominate 3D graphics. Yet, no high end graphics hardware uses this method. Why? For general purpose use over a wide range of applications, immediate mode triangle-based rendering has proven to be the most effective technology.

    > > - Chips like the Voodoo (2) render a triangle as soon as they receive
    > > the data describing it. Others have to accumulate the whole scene data
    > > and can start rendering only after receiving the according command.
    > > Which method will be better in the future?

    This is the method required by tile based renderers in particular that need all the information in a scene before starting the first triangle. Whether this is used or not depends on the underlying rendering technology so I think the previous answer covers this.

    > > - There is a way to achieve an anisotropic filtering effect using
    > > multipass texturing.
    > > The Voodoo 2 is designed to do multipass texturing in a single pass
    > > but you don't advertise anisotropic filtering as a feature. Why? The embossing
    > > bump mapping technique is advertised.

    You are now seeing the effect of the PC market on truth in advertising. Most features that Windows accelerator companies claim are in many cases only minimally supported by the hardware. Antialiasing is probably the best example. According to the competitive rules these days, if you have an alpha blender then your chip does antialiasing. Our early marketing and chip documentation only advertised features that were actually integrated into the chip or made possible by something special inside the chip. Although many features were possible, some features could only be approximated by Voodoo Graphics or Voodoo2, so they weren’t listed. Since the rules have changed in the market, we list all features that are possible. Anisotropic filtering is listed under the features for Voodoo Banshee even though the degree of support is the same in all our chips. It's actually a real shame because consumers are the losers in these cases. A consumer makes a decision based on features advertised by a vendor or listed on a box only to find out later that the feature is a “marketing” feature that is only minimally supported or not even useful. The most common deception is to list a feature that is too slow to use, yet it is listed because it's supported. We take great pains to point out what is a single pass versus two pass operation. Others don’t.

    > >
    > > - Is it true that you are going to sell 'naked' Banshee boards? If
    > > yes, what is the reasoning behind this?

    3Dfx does not supply board level product. This would conflict with our strategy of using board manufacturers to supply boards. We currently do not have plans to enter this type of business although with the rapid change in the market, anything is possible.
    > >
    > > - Are you going to continue to release evolutionary products derived
    > > from the Voodoo architecture or is there going to be a true next
    > > generation designed from the ground up?

    You will see us deliver both types of products although evolutionary products will be concentrated in OEM-style products. Discussion of our next generation is going to have to wait until we are closer to completion.

    > > - Will there be another 3D only product from 3Dfx after the Voodoo 2
    > > or will it be 2D/3D only from now on?

    At this point, the overhead of including the 2D engine is minimal so you will see fully integrated products in the future. But our 2D/3D products will work in 3D-only mode with the VGA disabled. Voodoo Banshee is capable of being a 3D only, or more accurately, a non-VGA device. The 2D accelerated functions are still available. Game developers have also told us that 2D functions are still very useful.

    > > - It looks like Banshee will be your first successful part for the OEM
    > > market. According to the conference call transcript on www.agn3d.com
    > > you are going to release an enhanced version of Banshee at the end of Q1 and
    > > a Banshee successor sometime after that which is going to be OEM only.
    > > Does this mean that you are splitting your product line into Voodoo type
    > > high end game accelerators for the retail market and Banshee type
    > > middle/upper end accelerators for the OEM market? If so, are you
    > > going to release a new product for each product line every six months or will it be
    > > only one product every six months? Or did you replace the six month
    > > schedule with an even tighter one?

    Our product plan has always been the same. Voodoo Graphics and its upgrade, Voodoo2, are the foundation products that provided the highest performance game solution in their times. We broke the OEM rules on these products in order to attain the highest performance. These products were not for Compaq or Gateway, they were for the gamer. Voodoo Graphics and Voodoo2 are primarily sold through retail and through system integrators (sometimes called dealers or build to order in Europe). The goal of these products was to give the developer a high performance standard solution to develop the next generation of content. We then take the high performance core that is now very well supported by game manufacturers into lower and lower cost products. One of the markets is the OEM Windows accelerator market. Our goal at the game accelerator product is to make each product have a 12-18 month lifetime before an upgrade or a brand new pipeline. Voodoo2 content is only just starting to emerge so gamers who bought Voodoo2 have made an investment for these games. We don’t just introduce replacement products to make people buy the newest product, as is standard in the PC market. Gamers who bought Voodoo Graphics as far back as late 1996 are still playing and enjoying games like Unreal or Quake II. They are playing these games faster than any other technology. It was a good investment. Voodoo2 is better, but it's not required. In the OEM market, these vendors require products to refresh about every 6 months or so. These are the rules set down by the market so that’s what you’ll see us deliver.

    > > - Is it true that the main reason for writing Glide was Microsoft's
    > > inability to finish a well working version of Direct3D on time? How
    > > do you see the future of Glide? When do you expect the Glide 3.0 SDK to
    > > be released to the public? Can you comment on DirectX 6 yet? Does it get
    > > closer to Glide performance-wise? What are the benefits of current and upcoming
    > > releases of Glide compared to OpenGL or Direct3D 6.0? Do you think it is
    > > possible that Glide and the need for compatibility with older versions of
    > > the lib and the hardware will become a burden rather than an advantage
    > > some day in the future?

    It is not true that Glide was introduced because of Direct3D’s schedule. Glide was under development and was delivered to developers in 1995. Let me first define Glide. Glide is a very thin software layer that sits between the chip and the game. It is not intended to be portable across other hardware because it makes very detailed assumptions about the behavior of the underlying graphics chip architecture. Glide exposes all of the capabilities of the Voodoo architecture in a very basic fashion. The goal of Glide was to provide at least a minimal insulation layer between the game software and the graphics chip. Rather than have each developer reverse engineer the registers in the chip, we provided a software layer. This is a very different philosophy and purpose than Direct3D, which is intended to work across multiple graphics chips and hide chip specific features that hinder portability of games.

    3Dfx is also not just in the PC market. Our arcade vendors use Glide ported to their base computer as the basis for their graphics interface. Glide is a portable environment and has been implemented on MIPS, PowerPC and Hitachi CPUs.

    Let me wrap up some of the other questions in a simple answer. Our goal is to provide the fastest and most innovative graphics hardware for games. Glide is one strategy to do this. Direct3D and OpenGL are other directions that we also support. But for the developer that wants the most exposure to the hardware and wants to push the edge in the use of 3Dfx graphics, Glide is as close to the hardware as you can get. One burden is compatibility, but we have decided that enabling innovation is worth the cost of supporting compatibility.

    As for the public release of Glide 3.0, we don’t have an exact date for public release. Developers have been provided with the alpha and beta versions.

    > > - Some current or upcoming chips (i740, RivaTNT) determine which
    > > parts of a texture are required to cover a given polygon and
    > > retrieve only these parts from local or AGP memory (kind of a light
    > > version of deferred texturing). What do you think about this
    > > approach? Do you plan to incorporate a similar technique in your
    > > future chipsets?

    To date, this technique has not proven to be faster than chips that pull an entire texture and buffer the texture in off screen memory. There are a number of reasons for this, not the least of which is that main memory bandwidth is about 25% of that available from local texture memory. We will support execute mode in future graphics chips but we really only see this technique being useful with DRDRAM and 4X AGP.

    > > - What do you think about John Carmack's remarks that a 16-bit
    > > framebuffer will cause image quality trouble in upcoming games that
    > > use massive multipass rendering techniques? The multitexture patch
    > > for Unreal already results in better image quality due to the
    > > Voodoo (2)'s internal 32-bit datapath. Do you plan to support full
    > > 32-bit rendering (including the frame buffer) in the future? Also,
    > > do you think that a 16-bit Z-buffer might not be sufficient for an
    > > artifact-free image when geometry complexity goes up over the next
    > > two years?

    No doubt that 3D game technology is going to move beyond 16-bit color and 16-bit Z-buffering. But this won’t happen overnight because game developers made decisions on game support 6 or 12 or even 18 months ago. Some games may look forward, but most use the reference hardware of the day for creating the game. The current reference hardware is Voodoo Graphics or Voodoo2.

    There is one market fact that doesn’t support the assumption in your first question. The overwhelming majority of graphics hardware in the hands of gamers is 16-bit color and 16-bit Z. Your assumption is that the game developers are going to instantly shift to a technology that is not supported by current hardware. This simply can’t happen. The move to 32-bit color and deeper Z-depth is going to take some time but it is coming. And yes, we are moving to 32-bit color and deeper Z-depth. The ability to support these additional bits has more to do with memory bandwidth than the choice of the graphics designer.

    And I urge you to carefully look at the current claims of vendors that are claiming both Voodoo2 performance and 32-bit rendering. You’ll find that the 32-bit rendering is half or less than half the peak rendering speed which they compare against Voodoo2. Again, the PC market is only out to obsolete your old hardware and get you to upgrade. These new features will come, but make sure they are useful in the product you buy. Also, anyone can make a great demo that takes advantage of the features that run fast, but developers don’t write demos. They write real games that use the hardware in very arbitrary ways.

    > > - What can be done to create more lifelike images of organic
    > > materials and landscapes? The current polygon based methods leave a
    > > lot to be desired in both areas. Do you think it is possible to
    > > support some type of voxel based rendering to create more realistic
    > > looking landscapes?

    We are always investigating novel ways to accelerate models which are difficult to model polygonally. That's all we can say for now…

    > > - You have teamed up with a company which offers a simulation
    > > solution for chip development due to the quality of the simulation
    > > software and the offered service. What kind of service does this
    > > company offer? Do some of their employees work at 3Dfx like yours
    > > do with the game developers?

    At this point, I can’t comment on our design methodology. This is an area of intense competition and we have developed one of the best methodologies in the industry.

    > > - What do you think about Intel integrating a 3D solution directly
    > > in an upcoming P II chipset? Do you think this will be a problem
    > > for 3Dfx in the long run?

    This is a logical evolution of the PC. We expected this move and, no, we don’t expect it to affect our current position in the market. Our position is not just one of providing graphics functions. It’s in providing a standard game platform through our high end retail products and then introducing mainstream products that take advantage of the software titles. The low end integration market uses slower CPUs and is designed entirely for cost. To date, this type of solution still can’t match the game performance of Pentiums sold last year.

    > > - Do you expect the 3D APIs to move away from working as close to
    > > the metal as possible (like D3D DrawPrimitives) towards more
    > > abstract object oriented principles (like D3D retained mode with
    > > support for mechanisms like progressive meshes) in the future?

    The answer to this question lies with the developer. There is a myth that you can market your solution to the developer and they will choose your software. In reality, the developer decides on the API during the early stages of design and it’s difficult to change their approach. If developers choose to stay at the metal, then approaches like DrawPrimitive and Glide will remain important factors in game design. If the market moves to higher level primitives, lower level approaches will shrink in importance. My opinion is that leading developers will stay at the metal since it gives the most flexibility and allows them to innovate. So the market is going to stay a mix of approaches determined by the nature of the game and the developer.

    > > - Do you have any explanation regarding what happens to your stock?
    > > You recently announced higher than expected Q2 results and your
    > > stock fell. Looking back at the rollercoaster ride of your stock
    > > value do you still think going public was a good idea?

    This is an area that cannot be discussed in this type of forum. As you stated, 3Dfx has, in the past, met or exceeded expectations. Our focus is on running the company and meeting our goals. Our philosophy is quite simple: execute well and the stock will take care of itself in the long run.
    ---- End Part 2/3 ----
     
  6. Christian Klass

    Christian Klass New Member

    Interview with Scott Sellers, 3Dfx Interactive, in September 1998.

    ---- Start Part 3/3 ----
    > > - Since you seem to switch to rather aggressive marketing (why,
    > > BTW?), do you have any comments regarding the upcoming Savage3D?

    I think you have answered this question in some of your other questions. 3Dfx has consistently delivered on its product promises and provided a very high value product. The Voodoo family of game accelerators is one that we are very proud of since it’s become the de facto standard for PC gaming. Our strategy was to design a quality product, have it well supported by game developers and deliver this to the market. No hype, just a solid product. Enter the Windows accelerator companies. From NEC/Videologic’s revolution campaign to Matrox’s advertising about the Mystique, consumers are now bombarded with messages that position standard Windows accelerators, which lack the performance and game support of Voodoo, as the best game card possible. We must maintain our position.

    Your own questions are a good example. S3 has called Savage3D a tile-based renderer, which it is not in the sense that the industry has used the term in the past. S3 claims support for algorithmic textures mentioned in a previous question and, although we have not seen the chip itself, we doubt that the hardware support is as extensive as claimed. S3 touts the advantage of its texture compression, which is required to meet their performance claims. This texture compression also has serious artifacts in certain cases. Our texture compression has been extensively used in arcade games. The greatest hindrance to texture compression has been the lack of support in Direct3D although this is now available in DirectX 6.0. Our texture compression is equally available now and I believe our technology is much more solid.

    Savage3D is also a new chip with no backward compatibility with previous products so you can guess the degree of game support. No doubt, Windows accelerator products are getting faster 3D, but if all the games don’t run on the chip and 6 months from now the chip is obsolete, is this really a high quality game product? It’s like Sony or Nintendo replacing their console hardware with incompatible hardware every 6 months. They would die. Savage3D is a capable low end OEM chip. It’s a 64-bit chip with a maximum of 8MB frame buffer. The high performance market is 128-bit with 16MB of frame buffer. It’s not a great game chip.

    > > - Aggressive marketing often incorporates the use of arguments
    > > which stretch the truth a bit (like using peak numbers as base for
    > > real world estimates, etc.). Aren't you concerned about losing the
    > > trust of your customers?

    Wherever possible we try to clarify what we mean when using 3D terms. The press will sometimes translate these terms to the current industry standard, but we will not try to intentionally mislead. If you ask for clarification on a term, we use the industry standard terms. Terms like trilinear involve a blend, not just a dither. But we have to use the industry terms at this point. It’s a fine line and we make sure game customers really understand the terms we use. It’s sometimes very difficult since the volume level of misinformation is getting very high.

    One example I wish to point out is the TnT hype. NVidia claimed that they would “leapfrog” Voodoo2. Performance results now coming in indicate that TnT and Banshee are very similar in performance and Voodoo2 is clearly the superior technology. TnT doesn’t even play all the games. Yet the TnT hype was widely reported and widely believed because nVidia created a great technology story. But when it is delivered, these promises haven’t been met. If you bought Voodoo2, you have the best game technology in the PC market. Another example is NEC/Videologic. Where is it? If you waited, you waited in vain because to get the best games for this Christmas, developers need final hardware now. Everyone has ours.

    > > - S3 is currently pushing their texture compression feature. What
    > > do you think about texture compression in general? You also support
    > > a texture compression algorithm but your narrow channel compression
    > > seems to be rather unpopular among developers (at least no one
    > > advertises the use of more texture due to compression). Am I right
    > > about that and if so, why? (And is the description in the Glide
    > > manual correct? It seems that the example yields incorrect results.)

    Texture compression is a great solution to the goal of increasing texture detail without performance loss. S3 is pushing their technology for PR benefit and because it’s required for their architecture to attain the same performance as 128-bit chips like Voodoo Banshee. Savage3D is a 64-bit device with half the bandwidth of Voodoo Banshee. If they use compressed textures, they get the same performance as Voodoo Banshee without compressed textures. Using compressed textures on Banshee again doubles our performance, leaving Savage3D behind. Also, the S3 texture compression method has some serious artifacts. Our texture compression method has been used extensively in the arcade market. Williams’ NFL Blitz, SF Rush and Mace games all used compressed textures. The biggest impediment to widespread use of compressed textures has been lack of support in Direct3D. This is now fixed with DirectX 6.0 and our compression method is equally supported in Direct3D. I think you’ll see increased use of compressed textures and the use of much better algorithms than the S3 algorithm.
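    The bandwidth argument in that answer is easy to sanity-check with some back-of-the-envelope arithmetic. A minimal sketch (a modern aside, not part of the original interview; the memory clock is an arbitrary assumption, and the ~2:1 effective gain simply mirrors the "same performance as Banshee" claim above, not measured compression ratios):

```python
# Back-of-the-envelope check of the texture-compression bandwidth
# argument. All numbers are illustrative assumptions, not vendor specs.

MCLK_MHZ = 100  # assumed memory clock, identical for both chips


def texel_bandwidth(bus_bits, compression=1.0):
    """Effective texel bandwidth in MB/s: raw bus bandwidth at one
    transfer per clock, scaled by the texture compression ratio
    (compressed textures deliver more texels per byte fetched)."""
    return bus_bits / 8 * MCLK_MHZ * compression


banshee_plain = texel_bandwidth(128)      # 128-bit bus, no compression
savage_comp = texel_bandwidth(64, 2.0)    # 64-bit bus, ~2:1 effective gain
banshee_comp = texel_bandwidth(128, 2.0)  # 128-bit bus, same ~2:1 gain

# A 64-bit bus plus compression only catches up to a plain 128-bit bus;
# once the 128-bit chip compresses too, it leads again by 2x -- which is
# exactly the point Sellers makes above.
assert savage_comp == banshee_plain
assert banshee_comp == 2 * savage_comp
```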

    > > - A lot of people expect the RivaTNT to be the new performance
    > > leader when it is released mid/end August. You seem to disagree.
    > > Why? If your judgement is based on the currently released
    > > benchmarks, isn't this a bit unfair regarding the early state of
    > > the hardware?

    TnT was announced before it was even a prototype. The PR hyped two main themes that we believe will not come true as the first product ships. First, TnT is supposed to have 250 Mpixel performance and generate 8 million triangles per second. According to our information, neither is going to come true. Our sources indicate that TnT will be less than 200 Mpixels and support about 1.3 million real, measurable triangles. We have shown 4 million triangles on Voodoo2 and there is no dispute that Voodoo2 with SLI is already a 180 Mpixel engine. Overclocked, it’s higher.

    Additionally, nVidia claimed that TnT is a “leapfrog” over Voodoo2, implying that TnT will have much greater overall performance and set the new performance level. Much as Voodoo2 leapt over all current 3D accelerators. My question to you is: do you believe TnT will have the same gap over Voodoo2 that Voodoo2 had over the rest of its competition? The clear answer is no, so nVidia’s claims are not accurate. And we’ve had indications that TnT is about the same performance as Banshee and less than Voodoo2. I am sure there will be isolated games that show better performance, but overall, Voodoo2 is clearly a better game card with better performance and far better game support.

    > > - Which is the best benchmark to indicate performance for future
    > > games?

    GameGauge is one measure, or just measure the performance of a series of games. That’s the true measure. Winbench is now so hacked and distorted that it’s a very poor measure of the true performance, particularly in the high performance accelerators. Also, look at the degree of game support. Without all the games, performance means nothing.

    > > - What resources does a Banshee based card require (Interrupts,
    > > DMA channels, memory regions, fuel, whiskey, etc)?

    Banshee is a standard PCI or AGP device and requires standard resources. Because it’s a VGA device, it does require an interrupt for legacy compatibility. We do not use DMA for the PCI device, but we do use AGP system memory accesses for the AGP device.

    > > - How is texture memory handled on the Banshee boards? Currently a
    > > programmer only has to check the available texture memory and the
    > > number of TMUs and make sure no texture crosses a 2MB border. With
    > > Banshee the texture memory is no longer fixed. Do you expect this to cause
    > > problems with older Glide applications?

    Absolutely not. We are very serious about backwards compatibility, and Banshee has received significantly more backwards compatibility testing than we’ve ever done for any product released to date.

    > > - Banshee supports single cycle trilinear filtering according to
    > > the spec sheet. Is this true trilinear filtering or the mip map
    > > dithering method you originally called advanced filtering (which
    > > costs some performance on Voodoo and Voodoo 2)?

    This method is MIP map dithering. Again, the market definition of trilinear filtering has been warped to include MIP map dithering. We remain competitive by using this definition. Again, I don’t like it but we have competitive pressures.

    > > - In a recent update on the OpenGL port of Unreal Tim Sweeney
    > > stated that cards that rely heavily on on-chip texture caches like
    > > the Riva128 or i740 might not be able to perform well in upcoming
    > > games using large quantities of hi-res high detail textures.
    > > According to the published specs Banshee also uses an on-chip
    > > texture cache unit. How will this affect Banshee's performance for
    > > games like Unreal? Do you think this will effectively kill the idea
    > > of DME texturing?

    Voodoo Banshee has an on-chip texture cache to smooth the effects of using SGRAM or SDRAM with local texturing. This is not the same use as the Riva128 or i740 that pull textures directly from main memory and use the texture cache as the sole local texture source. We have a much greater pool of local textures so we don’t anticipate that Banshee will be the bottleneck. DMA texturing has not proven itself as a technology so I don’t think it’s even reached the status of a real solution. Maybe with 4X AGP and DRDRAM main memory.

    > > - Do Voodoo and Voodoo 2 texture directly from their on-board texture
    > > memory or do they also use a texture cache?

    Voodoo Graphics and Voodoo2 are built around direct access to textures in high speed EDO memory. With this architecture a texture cache is not necessary because we can pull textures from arbitrary locations at full pipeline speed.

    > > - What color depths does Banshee support in 2D? Only 16 and 32 bit or
    > > can it also do true 24 bit?

    Voodoo Banshee supports packed 24-bit so we can display true color at 1280x1024 in 4MB of frame buffer memory. It also supports true 32-bit color modes up to resolutions only limited by the total available memory (16 Mbytes max.).
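    The frame-buffer numbers in that answer check out. A quick arithmetic sketch (a modern aside, not part of the original interview; the only assumption is 3 bytes per pixel for packed 24-bit versus 4 bytes for 24-bit padded to 32):

```python
# Why packed 24-bit matters: true color at 1280x1024 fits in a 4 MB
# frame buffer only if each pixel takes 3 bytes rather than 4.
W, H = 1280, 1024
MB = 1024 * 1024

packed_24 = W * H * 3  # 24-bit color, packed: 3 bytes per pixel
padded_32 = W * H * 4  # 24-bit color padded to 32 bits: 4 bytes per pixel

assert packed_24 == 3_932_160  # 3.75 MB: fits in a 4 MB frame buffer
assert padded_32 == 5_242_880  # 5.00 MB: would not fit
assert packed_24 < 4 * MB < padded_32
```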

    > > - Do Banshee's advanced 2D capabilities make any difference for
    > > everyday use? In what applications will the difference really show?
    > > Likely most new chips will be able to do vertex based 2D polygon
    > > rendering so the biggest difference will be the full GDI support in
    > > hardware. Where does this make a difference?

    The biggest benefit will occur at higher resolutions. At 1600x1200, standard screen objects consume a huge amount of memory. Creating and moving these objects requires a faster engine. At low resolutions, current 2D accelerators are fast enough. But in the OEM market, the fastest product wins the reviews and earns greater sales.

    > > - How many transistors do you use for Banshee and how many for a
    > > Voodoo 2 chipset?

    We do not publicly disclose our transistor counts or the costs for our chips and chipsets.

    > > - What standard features do you expect to see in 3D accelerators
    > > by the end of 1998? And by the end of 1999? Not emulated ones like
    > > the multi-pass bump mapping but those really in the metal.

    Well, as 1998 is almost at an end anyway, I think you’ll see multi-texturing capabilities as standard, as well as renderers which can achieve close to 100 Mpixels/sec sustained with all the features turned on. For 1999, look forward to significantly higher fill rates and triangle rates, and, perhaps more importantly, significantly better quality of the rendered images.

    > > - Imagine you just had designed the mother of all 3D accelerators that
    > > would make any further development obsolete. What would you do with
    > > it? Would you try to sell it or would you bury the plans as deep as possible?

    We’d sell it in a second. This is actually a pretty outrageous question because one of the things I have learned in technology is that there is no magic. The 3D field has been around for a while with many brilliant people looking at various techniques. Advances will occur, but magic is very rare. The solution you describe sounds beyond magic.

    > > - So far, you have been a very 'open' company (mailing questions
    > > to employees and even the bosses usually resulted in competent
    > > answers). Since you're growing at a fast rate will this style
    > > remain the same or are you going to adopt the 'Please only bother
    > > the PR people who are paid for this' style of most larger
    > > companies?

    I will have to refer this to our PR people. They will answer at their earliest convenience. ☺ Seriously, we make every attempt to stay in close contact with the end users of our product. Obviously, the easiest way to do this for us is to stay very close to the newsgroups and the web pages on the internet.

    > > - In the conference call you said you are preparing a press
    > > release about an exciting new product for the holiday season. If
    > > you supply us with all the necessary details we'll do the work of
    > > writing and publishing the release for you. No charge. Do we have
    > > a deal? 8)

    I’ll have to get back to you on this.

    > > Thank you very much for enduring this really long interview.
    > > We'll inform you when it's translated and published!
    > >
    > > Regards,
    > >
    > > Christian Klass of Golem Network News (GNN)
    > >
    ----- End Part 3/3 -----
     
  7. CrunchyBiscuit

    CrunchyBiscuit Master Guru

    Thanks for posting! Was a good read.
     
  8. Christian Klass

    Christian Klass New Member

    Happy you liked it. Every now and then I stumble across the old file where I saved the interview. Sadly, some other stuff was lost over the years.
     
