45% faster PC, anyone? New 28nm process looking good!

Discussion in 'Frontpage news' started by Mufflore, Nov 4, 2011.

  1. BlackZero

    BlackZero Guest

    It's nostalgia, as games like Zelda/Worms/Final Fantasy were all 2D and low-res, and games weren't even 3D until the late SNES/PlayStation days. PC games like Doom were pixelated to hell and back (pun intended). Another revolutionary 3D game was GoldenEye, and just looking at its blocky polygons makes my eyes hurt.
     
  2. Spets

    Spets Guest

    Messages:
    3,500
    Likes Received:
    670
    GPU:
    RTX 4090
    Tried playing GoldenEye just last week :p Remote mines were always fun for me.

    Back on topic: can't wait to see what both teams bring out, just wish the high-end cards weren't slated for Q2 :nerd:
     
  3. Zboe

    Zboe Guest

    Messages:
    533
    Likes Received:
    0
    GPU:
    GALAX GTX 970
    No, because discounting dual-GPU cards it's the best AMD has. It can't compete with the 580, and that's why AMD had to be careful about how they priced the 6970.

    Cost is so close it's splitting hairs. Performance-wise in games, if you didn't know which card was in the system, you wouldn't have a clue either way.


    The power usage of the 2 cards is so close you would never be able to tell the difference.

    As for VRAM, how many people play at 2560x1600 or higher? And even then 2GB isn't really enough anyway, so it's a moot point. VRAM is a selling point, always has been and always will be. Some people just want more because they think it makes the card better regardless. AMD capitalizes on this.

    As for 6970 CF performance, again it depends on the game. If you didn't know which cards were installed you wouldn't be able to tell a difference, so all that talk about "much better performance" is crap.


    -edit-

    1GB of VRAM is low for HD+ resolutions, but that doesn't mean you need 2GB either.
     
    Last edited: Nov 5, 2011
  4. BlackZero

    BlackZero Guest

    All your points are moot; the 'various' reviews and charts in my last post show that. In fact, at multi-monitor resolutions 570 SLI can't even touch HD 6970 CFX, as it competes with GTX 580 SLI (HardOCP review), and the ~50 watts of extra power usage from 570 SLI is a huge difference, as any reasonable person would see.

    As for discounting dual-GPU cards, why exactly should that be the case when you clearly posted that ATI can't compete with Nvidia in terms of performance, when in fact they have the fastest single card...

    Anyone running high-res/multi-monitor would be able to tell the difference between 570 SLI and HD 6970 CFX, as 6970 CFX is far superior :stewpid:
     

  5. Zboe

    Zboe Guest

    Messages:
    533
    Likes Received:
    0
    GPU:
    GALAX GTX 970
    Superior, ROFL.

    Most light bulbs use more than 50 watts. If I gave you two otherwise identical systems, one with a 590 and one with a 6990, you wouldn't be able to figure out which system had which card.
     
  6. BlackZero

    BlackZero Guest

    Your light bulb seems to have gone out... first you talk about ATI not competing, and when that's shown to be wrong, you try to show that Nvidia can compete... how the tables have turned :stewpid:
     
  7. JohnMaclane

    JohnMaclane Ancient Guru

    Messages:
    4,822
    Likes Received:
    0
    GPU:
    8800GTS 640mb
    I don't think you guys get the news.


    A 45% improvement in transistor performance over the previous 40nm node != a 45% increase in GPU performance. Given how sucky TSMC's 40nm node was, I wouldn't expect much.

    Also, the increase in performance comes mainly from the high-k gate dielectric, which Intel has been using for a while.

    At the end of the day, TSMC is doing nothing special here.
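    To put the distinction in rough numbers (a toy model for illustration, not anything from the article or TSMC): GPU throughput scales roughly with execution units x clock x work per clock, and a faster transistor only lifts the clock term, and only as far as the power budget allows. A quick Python sketch, using GTX 580-class baseline figures as the stand-in:

        # Toy model (assumption for illustration, not TSMC or NVIDIA data):
        # throughput ~ execution units x clock x work per clock (IPC).
        # A 45% transistor-speed gain at best raises the attainable clock by 45%;
        # it says nothing about unit count, IPC, memory bandwidth or power limits.
        def gpu_throughput(units, clock_ghz, ipc=1.0):
            return units * clock_ghz * ipc

        baseline   = gpu_throughput(units=512, clock_ghz=1.544)         # 40nm-class design
        clock_only = gpu_throughput(units=512, clock_ghz=1.544 * 1.45)  # same design, +45% clock
        print(f"best case from transistor speed alone: {clock_only / baseline:.2f}x")  # 1.45x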
     
  8. bokah

    bokah Guest

    Messages:
    2,316
    Likes Received:
    0
    GPU:
    EVGA 670GTX 2G
    I expected 20nm, and going for the 3rd dimension (i.e. building upwards) would double or triple the performance (not really sure if they use that 3rd dimension now or not).

    45% is very disappointing
     
  9. Texter

    Texter Guest

    Messages:
    3,275
    Likes Received:
    332
    GPU:
    Club3d GF6800GT 256MB AGP
    Or you could have just read the article like some of us did.
     
  10. AbjectBlitz

    AbjectBlitz Ancient Guru

    Messages:
    3,463
    Likes Received:
    2
    GPU:
    R390 1200/1720
    Because back then games had thick/wide, simple geometry that hid it better. Just a bunch of joined-up square/rectangular rooms with squarish objects strewn about; it was all about textures rather than complex geometry. As games get more advanced, geometry gets more complex, with thinner, sharper edges everywhere, so far more jaggies show up. The same goes for larger maps and longer viewing distances.

    Larger screens also show it up much more. Before, you had a 17" 4:3 screen running at 1600x1200 or 1920x1200 or even more, which eliminated jaggies completely. Now you have a 24" widescreen with a much larger viewable area at only 1920x1200 or, even worse, 1920x1080.

    Back then I never used AA, as 1600x1200 res pretty much eliminated the need; now with 1920x1200 24" screens I need 2x AA minimum, preferably 4x.
     
    Last edited: Nov 5, 2011

  11. TruMutton_200Hz

    TruMutton_200Hz Guest

    Messages:
    2,760
    Likes Received:
    1
    GPU:
    Iris Xe
    [IMG]
     
  12. TheHunter

    TheHunter Banned

    Messages:
    13,404
    Likes Received:
    1
    GPU:
    MSi N570GTX TFIII [OC|PE]
    Why, don't be all surprised when it does... I have no doubt it will be a monster GPU and much, much better than Kepler.


    OK, except when it comes to drivers lol, but this new next-gen core will be friendlier than VLIW, so no more very long instruction words :D
     
    Last edited: Nov 5, 2011
  13. nexu

    nexu Maha Guru

    Messages:
    1,182
    Likes Received:
    0
    GPU:
    HD4870 512MB (@795/1085)
    AMD Catapults?

    Nobody would probably notice if they replaced AMD Catalyst with AMD Catapults.
     
  14. k3vst3r

    k3vst3r Ancient Guru

    Messages:
    3,703
    Likes Received:
    177
    GPU:
    KP3090
    Some of you guys need to read the article again. It's saying a 45% improvement in speed, as in they can get 45% higher clock speeds on the 28nm process. They aren't saying next-generation cards will only be 45% faster, just that the process allows for 45% better clock speeds.

    Imagine a 680GTX that consumes 250W of power, has 768 CUDA cores on a 384-bit or 512-bit memory bus with 3GB or 4GB of GDDR5, and runs at a core speed of 1.1GHz and a shader speed of 2.2GHz.

    Compare that to a 580GTX, which consumes 250W, has 512 CUDA cores on a 384-bit memory bus with 1.5GB of GDDR5, and runs at a core speed of 772MHz and a shader speed of 1.54GHz.

    You're easily talking 100% better fps for the 680GTX.
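    Back-of-envelope, treating throughput as just cores x shader clock (a crude proxy that ignores IPC, memory bandwidth and drivers), and remembering the 680GTX figures above are pure speculation:

        # Rough shader-throughput comparison using the speculated figures from this post.
        def shader_throughput(cuda_cores, shader_clock_ghz):
            return cuda_cores * shader_clock_ghz

        gtx580 = shader_throughput(512, 1.54)   # shipping GTX 580
        gtx680 = shader_throughput(768, 2.2)    # hypothetical 28nm part from this post
        print(f"relative shader throughput: {gtx680 / gtx580:.2f}x")  # ~2.14x, i.e. roughly double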
     
  15. The_Fool

    The_Fool Maha Guru

    Messages:
    1,015
    Likes Received:
    0
    GPU:
    2xGIGABYTE Windforce 7950
    Glad I upgraded to a good CPU to support the new GPUs.
     

  16. JohnMaclane

    JohnMaclane Ancient Guru

    Messages:
    4,822
    Likes Received:
    0
    GPU:
    8800GTS 640mb

    TSMC is still running on planar tech.

    Also, again, we are talking about TRANSISTOR PERFORMANCE, which is not computational power.
     
  17. Sever

    Sever Ancient Guru

    Messages:
    4,825
    Likes Received:
    0
    GPU:
    Galaxy 3GB 660TI
    If the article is true, performance could be much higher than 45%: a potentially 45% higher clock speed plus many more shaders can lead to a dramatic performance improvement.
     
