‘Vista blows’ says DirectX creator

Discussion in 'Frontpage news' started by iako, Mar 25, 2008.

  1. Lex Luthor

    Lex Luthor Master Guru

    Messages:
    737
    Likes Received:
    308
    GPU:
    MSI GTX275 TwinFrozr
    The REAL point is... IT DOESN'T MATTER!

    XP is almost history and nothing any of us can do will change that.

    So you better get Vista running well, because you will just be hurting yourself if you don't.

    Lex
    (Vista allowed me to kick Superman's butt!)
     
  2. lmimmfn

    lmimmfn Ancient Guru

    Messages:
    10,347
    Likes Received:
    51
    GPU:
    AorusXtreme 1080Ti
    lol, XP won't be history for a long time, much as M$ would love it to just die. If I was buying a new PC I'd get Vista with it instead of XP, but as I have both and a choice, XP is all I need for the moment.
     
  3. Eat The Rich

    Eat The Rich Master Guru

    Messages:
    301
    Likes Received:
    0
    GPU:
    BFG 8800GTX OC2 626/2000
    Meh - I'm sick of these 'XP vs. Vista' threads now... from what I've seen, most people who say Vista sucks haven't even tried it (and no, a quick play at a friend's house doesn't count). I have no reason to favour Vista; I just want whatever is best for my web design and playing some games. With my spec, Vista x64 is by far the most stable and quickest OS I've used :)
     
  4. Dublin_Gunner

    Dublin_Gunner Ancient Guru

    Messages:
    4,642
    Likes Received:
    0
    GPU:
    Gigabyte Rx 460 4GB
    His arguments are all well and good (although not the OS or platform arguments. As a dev, you develop for the platform, not the other way around).

    He goes on about ray-traced rendering as opposed to rasterising (which is what all current gfx hardware does).

    He doesn't, however, address the point that the reason the raster approach was chosen was because of his very own statement: "a supercomputer would take hours to render just 1 frame".

    We just don't have the hardware for ray-traced, real-time graphics - it would be great if we did, but that's nothing short of a miracle.

    He also doesn't address the issue of a technology changeover. Having ray-traced graphics would be superb, and having the hardware to do it would be superb - but where is the cross-over point?

    Do you get the case where a ray-traced game is released, but cannot be played by anyone without a €700 ray-tracing card (which maybe <1% of the gaming population would have)?

    To have such a dramatic change in rendering direction would require a stop-start ethic, i.e. devs would have to stop making raster-based games in favour of RT, and hardware vendors would have to stop selling raster-based hardware for gaming in favour of RT.

    So where is the bridge? Such a dramatic change would inevitably mean that devs & hardware manufacturers would have to take a HUGE drop in revenue for a time, merely to facilitate the changeover.

    Imagine all software companies suddenly decided "hey, x86 is no good, we're now developing solely for platform X".

    All of a sudden, everyone's hardware becomes redundant, no one can run the latest software, devs don't make money, and hardware vendors don't make money while they're trying to convince people that platform X is better and the only thing they'll sell.

    Sorry, it just makes no sense (from a pragmatic point of view).
     

  5. Skiddywinks

    Skiddywinks Ancient Guru

    Messages:
    4,562
    Likes Received:
    0
    GPU:
    Asus Matrix 285
    Oh, I have no doubt that some enthusiasts also do not like it. In the end, it all comes down to balance: for you, do the pros outweigh the cons? Depending on the person, it either will or it won't. Just like everything else that you have to buy.

    But the number of people that end up with gimped systems and so have a terrible experience with Vista cannot be helping the situation at all. This is why so many people see all Vista haters as clueless noobs: because the loudest people that do not like Vista ARE clueless noobs.

    Here, for example, anyone that has an issue with Vista explains why in a sensible manner with good reasons. Most people I know who do not like it say it runs terribly and their laptop is just crap with it. Then I show them they only have a gig of RAM, and they have no idea what they are talking about.

    A lot of companies are to blame for making a quick buck. The Vista hate is definitely not in proportion to the people that actually have genuine complaints with it. Companies should not throw together a system and bundle in Vista unless it meets at least a basic level of specifications.

    For example, how well could this possibly run Vista (the $600 version)?
     
  6. F1refly

    F1refly Ancient Guru

    Messages:
    9,042
    Likes Received:
    0
    GPU:
    970GTX-oc edition
    The bridges are usually the consoles... the platform that seems to drive the majority of devs to the next level. If you recall the Dreamcast release, most PC games didn't exceed 600,000 polys - mostly because devs didn't try, imo - but quickly surpassed 1 million within a year of the DC's release. I noticed something similar with the Xbox and the 360's introductions, where games developed for those platforms initially looked better than 90% of PC games, which in turn seems to jumpstart that next step en masse for all platforms.

    I've said it before and I'll say it again: I think the next "superchips" will be proprietary to a console first, quickly pushing computer hardware in that direction. I think if Sony said right now "hey, our Cell 2 CPU has an integrated GPU", Intel and AMD would say "hmmm" and of course surpass it - at a price, mind you.

    But you're right that for PCs there would have to be a ray-traced game with optional rasterising - kinda like games now having both DX9 and DX10 paths - if that's possible. However, I foresee the winds shifting hard for PC gaming by the time the next-gen consoles are released.
    I'm just surprised no company has said much about even pushing for a CPU/GPU on the same chip; it should push gaming far further forward than what DX10 or Nvidia's stream processing has, that's for sure.
     
  7. Denial

    Denial Ancient Guru

    Messages:
    12,162
    Likes Received:
    1,336
    GPU:
    EVGA 1080Ti
    How? CPUs are designed to execute out-of-order processes, while GPUs are designed to execute in-order processes. Combining the two would be a step back from when we separated them. GPUs can do hundreds of small math problems at the same time, whereas CPUs can only do a few depending on the threading. The only benefit it would have is connectivity latency... but even there the PCI-E 16x bus has yet to be fully tapped, let alone PCI-E 2.0.
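
    To illustrate the throughput point, here is a rough, purely illustrative Python/NumPy sketch of my own - vectorisation stands in for the GPU's many simple ALUs, it is not actual GPU code:

        import time
        import numpy as np

        # A GPU-style workload: the same tiny calculation applied to a huge
        # batch of independent values (think one multiply-add per pixel).
        pixels = np.random.rand(1_000_000)

        # Serial, CPU-style: one small operation at a time in a plain loop.
        start = time.perf_counter()
        out_serial = [p * 0.5 + 0.1 for p in pixels]
        serial_time = time.perf_counter() - start

        # Data-parallel style: the whole batch in one vectorised operation,
        # roughly how a GPU's hundreds of simple units chew through it.
        start = time.perf_counter()
        out_parallel = pixels * 0.5 + 0.1
        parallel_time = time.perf_counter() - start

        print(f"serial loop: {serial_time:.3f}s, vectorised: {parallel_time:.3f}s")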
     
  8. CronoGraal

    CronoGraal Ancient Guru

    Messages:
    4,162
    Likes Received:
    5
    GPU:
    MSI R9 390 8GB
    To be fair the post you're quoting and agreeing with was being quite hypocritical.
     
  9. F1refly

    F1refly Ancient Guru

    Messages:
    9,042
    Likes Received:
    0
    GPU:
    970GTX-oc edition
    Uhm... no.
    For one, having both GPU and CPU cores is way... way faster... way faster.
    Plus I don't think you read the article, where even yapper Saint John mentions the power of both cores together - hence the potential power for ray tracing.

    Maybe you thought I was talking about the same core doing the work of GPU and CPU? lol, no, that's impossible I believe.
    I'm talking about the already long-speculated rumours of multicores where you have multiple CPUs and multiple GPUs, or single GPUs/CPUs, whichever, all on the same die.
    There was much talk of that especially once AMD acquired ATI; I'm sure they're working on it, just no official word that they are.
     
    Last edited: Mar 27, 2008
  10. Denial

    Denial Ancient Guru

    Messages:
    12,162
    Likes Received:
    1,336
    GPU:
    EVGA 1080Ti
    AMD Fusion is official.. it's been on the AMD roadmap, set for a 2009 time frame, for some time. Except AMD mentions the fact that it will only be used for low-end graphics solutions. And like I said, the only real benefit of having a CPU core and a GPU core together would be improved latency. The amount of information the CPU can send to the GPU and back again would definitely be increased, not to mention the speed at which it is transferred. Except this is not an issue today, as the only communication needed between the GPU and CPU is for DX API calls, physics calculations and such, none of which is a bottleneck.

    Of course with raytracing you'd need additional bandwidth, but a single CPU/GPU solution may not be the most effective way of increasing bandwidth.
    Ray tracing works by analyzing every possible lighting collision and plotting those points on geometry. Any type of movement within the game becomes an issue because it can directly affect rays of light that wouldn't be affected in a game like today's. For instance, caustic effects brought on by glass and similar light-transmitting surfaces become a huge performance issue due to the way light shines through those types of surfaces. Bandwidth then would become an issue because every single movement brought on by a physics change or whatever would effectively invalidate the scene - as such the scene would need to be redrawn by the GPU. Raytracing cannot be done faster than a raster-type shader, ever. It's definitely more accurate for light sources and produces higher-quality results with other phenomena, but there is no "potential power" in raytracing, as far as performance goes.
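
    For the curious, a rough Python sketch of the per-pixel work involved - purely illustrative, not from the article or any demo: every pixel fires at least one ray, every ray is tested against the scene geometry, and that whole cost repeats whenever anything moves.

        import math

        def ray_sphere_hit(origin, direction, center, radius):
            # Classic quadratic test: does this ray hit the sphere, and how far along?
            oc = [o - c for o, c in zip(origin, center)]
            b = 2.0 * sum(d * o for d, o in zip(direction, oc))
            c = sum(o * o for o in oc) - radius * radius
            disc = b * b - 4.0 * c      # direction is assumed normalised (a == 1)
            if disc < 0:
                return None
            return (-b - math.sqrt(disc)) / 2.0

        WIDTH, HEIGHT = 256, 256        # matches the Quake RT demo resolution mentioned later in the thread
        sphere_center, sphere_radius = (0.0, 0.0, -5.0), 1.0

        hits = 0
        for y in range(HEIGHT):
            for x in range(WIDTH):
                # One primary ray per pixel; real tracers add reflection,
                # refraction and shadow rays on top of this.
                u, v = (x / WIDTH) * 2 - 1, (y / HEIGHT) * 2 - 1
                length = math.sqrt(u * u + v * v + 1)
                direction = (u / length, v / length, -1 / length)
                if ray_sphere_hit((0, 0, 0), direction, sphere_center, sphere_radius) is not None:
                    hits += 1

        print(f"{WIDTH * HEIGHT} primary rays traced, {hits} hit the sphere")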

    Having Intel release a processor with 4 CPU cores and 2 GPU cores wouldn't do anything for performance that a quad core with 2 SLI Nvidia chips couldn't do. The current bottleneck isn't in the bandwidth, it's in the amount of processing power the actual GPU has.
     

  11. Dublin_Gunner

    Dublin_Gunner Ancient Guru

    Messages:
    4,642
    Likes Received:
    0
    GPU:
    Gigabyte Rx 460 4GB

    CPUs can quite easily be in-order also. In fact, Intel are releasing a new CPU based on in-order execution.

    I think you're confusing it with FP units or something.

    GPUs are basically massively parallel floating-point units, whereas a CPU (contemporary desktop parts) can generally only execute 1 or 2 FP calculations per clock per core.
     
  12. Denial

    Denial Ancient Guru

    Messages:
    12,162
    Likes Received:
    1,336
    GPU:
    EVGA 1080Ti
    Of course they can be, the same way a GPU can do out-of-order.. it's just a design paradigm. But it doesn't change the fact that GPUs are better at in-order execution, because of exactly what you said.. a GPU is a massive number of small "CPUs" doing extremely basic calculations. An advanced calculation wouldn't be able to pass through one of those calculators in a reasonable amount of time. A CPU, on the other hand, is awesome at processing large operations.

    The only way a CPU can be as effective as a GPU is if it had a metric ****ton of cores.. which is exactly what Intel is attempting to do. 80+ advanced cores beat 80+ simple GPU cores any day of the week.
     
  13. quaker3

    quaker3 Master Guru

    Messages:
    652
    Likes Received:
    0
    GPU:
    MSI 6850HD
    Best is.. Linux for everything except games..
    Windows XP for games only - then there's no need to defragment, do other nonsense, and so on.. :)

    Linux doesn't need defragmentation, nor big PC requirements.. :) If only it could have DirectX.. it would be great to compare Linux with DirectX vs Windows with DirectX...
     
  14. Dublin_Gunner

    Dublin_Gunner Ancient Guru

    Messages:
    4,642
    Likes Received:
    0
    GPU:
    Gigabyte Rx 460 4GB

    Definitely - but at what cost? I know Intel's initial 80-core sample was a proof-of-concept type project, mostly showing their inter-core interconnects & bandwidth capabilities, as those cores were relatively simple.

    If we were talking ~80 cores on a single die with current process technology, that would be one massive chip, costing tens of thousands to produce, even if it reached mainstream production.

    The fact remains, we are nowhere near having anything capable of producing real-time raytracing in a manageable package, at a manageable, marketable cost.
     
  15. Denial

    Denial Ancient Guru

    Messages:
    12,162
    Likes Received:
    1,336
    GPU:
    EVGA 1080Ti
    Yea, I was going to mention cost at the end but decided to just leave it out. That's why I posted what I posted in the first place: it just isn't reasonable to combine the GPU and CPU. The only real performance increase would come from bandwidth, which isn't a bottleneck at the moment. XRC6 says "way faster way faster" but that isn't the case. Maybe if bandwidth was an issue we'd see additional speed, but it's not.

    And I agree with you on real-time raytracing. I don't care what this guy's done in the past.. we're just nowhere near that level.
     

  16. Black_Falcon

    Black_Falcon Banned

    Messages:
    10
    Likes Received:
    0
    what is this direct X you speak of all i know is the mighty power of the X rangers my elite squad on sector 7 in dimension 69 join me for the trip to Holy land
     
  17. Dublin_Gunner

    Dublin_Gunner Ancient Guru

    Messages:
    4,642
    Likes Received:
    0
    GPU:
    Gigabyte Rx 460 4GB
    :spam::ban:
     
  18. lmimmfn

    lmimmfn Ancient Guru

    Messages:
    10,347
    Likes Received:
    51
    GPU:
    AorusXtreme 1080Ti
    You do know Intel is currently getting 20FPS on quad cores with the Quake engine ray traced by that German guy, with 4 levels of reflection? 8 cores next year, probably 16 the year after - basically, in 8 years' time we'll probably have CPUs 200 times more powerful than the current ones, if not 1000 times faster.

    Ray tracing is definitely where it's going, and it will be a great boost to PC gaming because of all those retailers selling PCs with great CPUs and crap GPUs.
     
  19. Dublin_Gunner

    Dublin_Gunner Ancient Guru

    Messages:
    4,642
    Likes Received:
    0
    GPU:
    Gigabyte Rx 460 4GB
    http://www.pcper.com/article.php?aid=334&type=expert&pid=4
    16.9Fps, at a resolution of 256x256 (65k rays) w/ QX6700

    So for a more realistic resolution, say 1280x1024 (1,310,720 rays), which is 20x more work, that would be less than 1 FPS (as can be seen, it seems to scale almost linearly).

    So are we far off?? Hell yes we are.

    And remember, that's with Quake4RT - not exactly the nicest-looking engine going.

    Admittedly, a step in the right direction - but to play that same scene at the same FPS at 1280x1024 would take the CPU power of 20 QX6700s - and 16.9FPS isn't what I'd call playable either.
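
    A quick back-of-the-envelope in Python - the 16.9 FPS figure and resolutions are from the pcper article above; the linear-scaling assumption is mine:

        # Rays per frame scale with resolution; assume frame time scales linearly with ray count.
        base_rays = 256 * 256           # 65,536 primary rays
        target_rays = 1280 * 1024       # 1,310,720 primary rays
        base_fps = 16.9                 # QX6700 figure from the article

        work_factor = target_rays / base_rays       # = 20.0
        projected_fps = base_fps / work_factor      # ~0.85 FPS

        print(f"{work_factor:.0f}x the rays -> ~{projected_fps:.2f} FPS, "
              f"or ~{work_factor:.0f} QX6700s to hold {base_fps} FPS")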
     
  20. Denial

    Denial Ancient Guru

    Messages:
    12,162
    Likes Received:
    1,336
    GPU:
    EVGA 1080Ti
    You left out the part where the tracing for those scenes was precalculated. I can pull off 20FPS in 3DSMAX by turning off recalculation on each individual frame. Also, the geometry in that game is trivial. Even at 5K polygons per wall a CPU can easily handle that. Up the polygon count to a million and watch it take an hour to render even one frame - this is where a GPU shines, calculating small stupid crap like polygons. Which is why the guy mentions combining GPUs and CPUs: both are needed for raytraced games. Nothing will change.

    It's also not like you can't test this yourself. Download a demo of 3DSMAX/MAYA/SOFTIMAGE and model an object with a couple thousand polys on it. Apply a raytraced texture plus a couple of light sources with raytraced shadow mapping, and render it using V-Ray or mental ray or any other raytrace renderer. Look at the render times - they are upwards of 5 minutes for a single frame.
     
