780 TI 6GB Version

Discussion in 'Videocards - NVIDIA GeForce' started by blakedj06, Jan 11, 2014.

  1. Loophole35

    Loophole35 Guest

    Wow, they are claiming that a 770 alone takes an extra 225W. That seems a bit high. If you have a quality 750W unit you can run SLI 770s, that much is true. That site was just advising a healthy amount of headroom.
  2. PhazeDelta1

    PhazeDelta1 Guest

    Headroom on a PSU is like a condom. I'd rather have it and not need it than need it and not have it. :nerd:
  3. wheeljack12

    wheeljack12 Guest

    Yes, a healthy amount of headroom for the rest of the PC. I'm more about preventative maintenance than throwing caution to the wind. This is quoted right from the same Guru3D page that eclap just showed me:

    "Above, a chart of relative power consumption. Again the Wattage shown is the card with the GPU(s) stressed 100%, showing only the peak GPU power draw, not the power consumption of the entire PC and not the average gaming power consumption."

    In other words, I agree, PhazeDelta.
  4. Fox2232

    Fox2232 Guest

    The VRAM argument is all about the mainstream. If enough people get 12GB of VRAM in the next 2 months, developers will start to use such amounts of memory, and next year we'll have games which run like crap on poor 2/3/4GB VRAM cards.

    We do not have such cards available to the mainstream, therefore no sane developer will make a game with such requirements.

    Someone simply has to make this move towards higher amounts of VRAM, and it's the hardware manufacturers, supported by market demand.

    People with 3 screens and high resolutions were a minority for a very long time. 4K monitors may change that during 2014 if they drop to a reasonable price. And that's the breaking point where developers will start making games which use 5/6GB of VRAM.

    We already have a few dozen games with texture pools larger than 2GB; the question is whether more than 2GB of textures are needed to render a single frame.
    - no = required textures will be cached once, causing a hitch, and rendering will be fluid until another huge set of data is required
    - yes = for every frame, some resources will have to be fetched, which will decrease performance based on the time it takes to get that data from system memory.

    The "no" case is the usual one, and most of us have been through it: moving forward in an FPS game is fluid, turning around causes slowdowns as resources are transferred, and then moving in the new direction is OK again.

    For the "yes" case, someone could record their experience with tri-fire HD 5870 1GB, as that setup has a lot of performance and a very low amount of VRAM.
    Considering that each card has to fetch those new resources separately over limited PCIe bandwidth, there should be negative scaling.
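
    A back-of-the-envelope sketch of why that "yes" case hurts, and why it scales negatively with more cards. The bandwidth figure and texture sizes below are illustrative assumptions, not measurements:

```python
# Rough estimate of the stall caused by streaming textures from system
# memory over PCIe. Bandwidth and data sizes here are assumed values.
def stall_ms(texture_mb: float, pcie_gb_s: float, gpus: int = 1) -> float:
    """Milliseconds to fetch `texture_mb` of data when each of `gpus`
    cards must receive its own copy over a shared PCIe link."""
    total_mb = texture_mb * gpus           # each GPU fetches separately
    return total_mb / (pcie_gb_s * 1024) * 1000

# One card pulling 512 MB over an assumed ~12 GB/s effective link:
print(f"single card: {stall_ms(512, 12):.1f} ms")
# Three cards (e.g. tri-fire) each pulling the same 512 MB:
print(f"tri-fire:    {stall_ms(512, 12, gpus=3):.1f} ms")
```

    At 60 fps a frame budget is under 17 ms, so even the single-card fetch here swallows several frames, and the three-card case triples it.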

  5. eclap

    eclap Guest

    What are you on about? I did say a single GTX 770 consumes around 200W, didn't I? Why do you have to quote the single GTX 770 power draw from that article for me when I already told you what it is?

    Erm, learn to read maybe? There's this too: System Wattage with GPU in FULL Stress = 304 Watt.

    That's an overclocked 3960X rig with a single GTX 770. Add another GTX 770 at around 200W and you're looking at roughly 505W of power draw, at 100% load. So yeah, 750W is enough.
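
    The arithmetic works out like this (a sketch using the thread's own figures, which are approximate estimates, not measurements of any particular rig):

```python
# PSU headroom estimate using the wattages quoted in this thread.
single_card_system_w = 304   # measured system draw, one GTX 770 stressed
second_card_w = 200          # approximate peak draw of the added GTX 770
psu_rating_w = 750

total_w = single_card_system_w + second_card_w
headroom_w = psu_rating_w - total_w
load_pct = 100 * total_w / psu_rating_w

print(f"estimated load: {total_w} W ({load_pct:.0f}% of PSU)")
print(f"headroom: {headroom_w} W")
```

    Roughly two thirds load at full GPU stress, which is comfortable territory for a quality 750W unit.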
    Last edited by a moderator: Jan 12, 2014
  6. wheeljack12

    wheeljack12 Guest

    Ok, eclap, I didn't read. That aside, I wouldn't risk it regardless. The risk may be calculated in terms of it working; I just don't have the cash to throw around on something that won't pay off for me personally.
  7. wheeljack12

    wheeljack12 Guest

    Fox, you just showed me something I have talked about before. When Maxwell comes out, NV wants to unify memory totally, not just make it a stat in the Windows WEI or dxdiag, etc. Why do we see Sony thinking ahead of the PC and using GDDR5 as its unified memory? I wish the companies making proposals to JEDEC would propose GDDR5 instead of DDR4. Match the performance of the GPU's memory and cut out the BS so unified memory can actually be a benefit, to the NV user at least, until AMD comes up with something similar.
  8. Fox2232

    Fox2232 Guest

    DDR4 chips are low power consumption; GDDR5 is power hungry. Sony used GDDR5 as unified memory for GPU/CPU because they use an APU and want maximum performance from it. I like the idea of scalable addressability.
    I have been thinking for some time about an A10-7850K as a development platform, but to my knowledge it would have to clock to 5GHz to match an i5-2500K at 4.5GHz.
    So I may pull the trigger later on an hUMA notebook, as they promised Kaveri there too in mid 2014.

    nVidia plans on unifying memory across multiple GPUs. That's nice, but it will require a much faster bus than PCIe 3.0, and optimally it would have to run directly between the GPUs. It may need a new, huge SLI bridge to be developed, or it will work only for dual-chip boards.
    In both cases, sharing memory would be pretty expensive for the end user due to the additional lanes/transistors.

    I believe it may be easier to make the GPUs dumb block workhorses and add a third chip that connects them together as memory controller, cache and scheduler, so the OS sees them as one chip.
    Something along the lines of Intel's Core 2 Duo approach. It may look hard, but AMD is pretty close: they managed to pull off a 512-bit bus for the R9 290(X) while using around 20% fewer transistors for the memory controller compared to the 384-bit HD 7970/50.

    I like this possibility too, but getting there requires not only the idea but also the right approach, as some parts have to be shared, and sharing too much or too little will prove to have a negative effect on performance, cost, or both.

    And btw, on the memory side, I have high hopes for Hybrid Memory Cube technology, as it already has low power draw per Gbit and can transfer data pretty fast, plus the platform doesn't have to mess with timings and such. I guess it will be in the first servers in 2015 (even though it looks ready now) and consumers will get it in 2016.
  9. V@IO

    V@IO Guest

    Cool, although I think it's somewhat pointless with Maxwell just around the corner.
  10. Koniakki

    Koniakki Guest

    On the contrary. From what I'm hearing we are still a way off from Maxwell. Possibly Q4 2014/early 2015.

    I hope I'm wrong so we get it sooner, but in another way I hope I'm not, because I just received my 780 Ti like half a week ago.. :p

  11. Loophole35

    Loophole35 Guest

    They can take the time they need; I'm not hurting for performance on either of my systems. I'd rather they get it right out of the box and not have another phantom 680 incident.
  12. It will probably be something exorbitant, like over $100 on top of the base $700. But I don't care, I can't wait to get two of these just for the bragging rights. Who cares if it's expensive? It's worth it because it's Nvidia, right? Right?

    It'd be nice to max out my 30Hz 4K monitor as well.

  13. Loophole35

    Loophole35 Guest

  14. ---TK---

    ---TK--- Guest

  15. wheeljack12

    wheeljack12 Guest

    Butthurt? More like the butt plug you will need to protect yourself from the dark side, hehehe. Anyhow, videocardz.com showed a snippet from a rumoured source that Maxwell is ready to go as of February. They want to put it on the 28nm process. Again, what is most likely real is the 20nm availability from TSMC, not due until Q3 2014 at least. I don't like the word "rebadge" in my diet twice in two GPU cycles. As long as NV can still somehow squeeze more transistors and CUDA cores onto 28nm, no problem. New tech with the same face, not so good.
    Last edited by a moderator: Jan 12, 2014

  16. wheeljack12

    wheeljack12 Guest

    Something you guys didn't notice: the GTX 760 is one half of a GTX 780 (1152x2=2304). So, somehow the GTX 780 Ti got more cores than expected. One possibility is that the GTX 860 (the rumors showed 860 and 870 models coming first in February) is half a GTX 780 Ti at 1440 cores. Don't know what the 870 would be. You have to remember the smart marketing side of Nvidia, at least for the 760. It was originally priced under a GTX 780 to give users the choice of one expensive card at a time or two cheap ones. Well, that went out fast at Christmas.
    Last edited by a moderator: Jan 12, 2014
  17. Fox2232

    Fox2232 Guest

    20nm has been working for quite some time, but as presented by nVidia's critics, it's far from economical, and both nV and AMD decided not to spend money again on being first to get there.
  18. wheeljack12

    wheeljack12 Guest

    I can translate that: TSMC started treating AMD/NV like the consumer. Bleeding edge costs money. Fitting, if you ask me.
  19. wheeljack12

    wheeljack12 Guest

    Fox, the tech I would love to see come to fruition is the memristor: static system memory. If implemented, it could replace the storage drive entirely; as anyone who uses Dataram's RAMDisk knows, you get the fastest data transfers from memory.
  20. Loophole35

    Loophole35 Guest

    A 760 (1152 CUDA cores, or 6 SMX clusters) has half the number of cores of the 780 (2304 CUDA cores, or 12 SMX clusters); the 780 Ti has 2880, or 15 SMX clusters. Plus there are differences in ROPs. The GK110 has a total of 15 SMX clusters on die: the Titan had one disabled, the 780 had 3 disabled, and the 780 Ti is a full GK110. Also, the 760 is on the GK104 chip, which has 8 SMX clusters on die but two of them disabled.
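
    The cluster arithmetic checks out directly; on Kepler each SMX holds 192 CUDA cores, and the enabled-cluster counts below are the ones given in this post:

```python
# Kepler CUDA core counts derived from enabled SMX clusters.
CORES_PER_SMX = 192  # Kepler SMX layout

kepler_cards = {
    # name: (SMX clusters on die, SMX clusters enabled)
    "GTX 760 (GK104)":    (8, 6),
    "GTX 780 (GK110)":    (15, 12),
    "GTX Titan (GK110)":  (15, 14),
    "GTX 780 Ti (GK110)": (15, 15),
}

for name, (on_die, enabled) in kepler_cards.items():
    print(f"{name}: {enabled}/{on_die} SMX -> {enabled * CORES_PER_SMX} CUDA cores")
```

    That gives 1152, 2304, 2688 and 2880 cores respectively, matching the figures above.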

    Edit: On a side note, learn to edit your posts if you want to add something. One post after another is getting annoying and making me think you are post farming.
    Last edited by a moderator: Jan 12, 2014

Share This Page