2500K or 2600K for gaming?

Discussion in 'Processors and motherboards Intel' started by Infinity, Jul 29, 2011.


Which is better for gaming?

  1. Core i7 2600K

    34 vote(s)
    31.8%
  2. Core i5 2500K

    73 vote(s)
    68.2%
  1. ElementalDragon

    ElementalDragon Ancient Guru

    Messages:
    9,351
    Likes Received:
    30
    GPU:
    NVidia RTX 4090 FE
    Yeah... I wouldn't exactly call it "utilizing" all 8 threads when only two are hovering around the 80% usage mark... and the other six are hovering around 50% usage.
     
  2. BlackZero

    BlackZero Guest

    That's what everyone said when I bought a Q6600 in early 2008. By early 2011, when I finally needed to upgrade to the 2600K, the same people had gone through multiple CPUs.


    Please explain: do you mean because the cores are not fully loaded, or are you saying that even if they were fully loaded there would be no performance differential?

    Add another GPU for SLI/CF, or wait for more powerful GPUs to come out; I'm sure that'll change.
     
    Last edited by a moderator: Aug 2, 2011
  3. Xtreme1979

    Xtreme1979 Guest

    What would you call it then? Sitting idle?

    Utilize: put into service; make work or employ for a particular purpose or for its inherent or natural purpose
     
  4. ElementalDragon

    ElementalDragon Ancient Guru

    Messages:
    9,351
    Likes Received:
    30
    GPU:
    NVidia RTX 4090 FE
    For the first part.... I think what he's saying is that just because the cores are being "used", it doesn't necessarily mean they're making the idea of having them worthwhile. Which also ties into your second comment on what I posted. Hyperthreading is hardly worth it from a gaming standpoint. Even with games that DO make SOME use of all 8 threads... why try to justify its purchase by adding more GPUs or buying a more powerful one? You seem to have done the same thing I did in terms of upgrading: stuck with a CPU until it seemed worthwhile to upgrade. The CPU I had prior to my 2500K was a Q9450. Other than a fair performance boost and a bit less power usage and heat, I upgraded for SATA3, USB3, DDR3... and because I absolutely couldn't stand the Abit board I was using with the Q9450. I also tend to wait to upgrade my video card just the same. Went from an 8800 GTX, to a GTX 295, now to a GTX 570. The video card was almost more of a side-grade than an upgrade... but again... less power, less heat... the same or more performance, plus DX11.

    Xtreme: Yea... if you're going to quote definitions... at least make sure you find ALL possible definitions. You kinda missed another key one.

    Do you consider a touch over 50% of overall CPU usage to be "worthwhile" use of the CPU? Would you consider averaging 24mpg in a car that's said to average 40mpg to be worthwhile?

    The 2600K over the 2500K is roughly a $100 bump on average... and there's basically nothing so far that makes the jump from 4 physical cores to 4 physical cores plus 4 logical threads seem worth that money. Just like there wasn't really much of a point in going for a Core 2 Extreme when the Core 2 Quads could match or outperform them on occasion... and that was a MUCH bigger difference in price than just $100.
     

  5. deltatux

    deltatux Guest

    Messages:
    19,040
    Likes Received:
    13
    GPU:
    GIGABYTE Radeon R9 280
    Basically what I said, but I didn't know WoW is even multithreaded; the engine looks ancient, even older than Source at times lol.

    That's because HT was just tacked on to the x86. x86 was never meant to have SMT, Intel forced it on. It doesn't work as well as Intel wants it to. In theory, SMT works wonders, it totally does on IBM's POWER and Oracle's SPARC families, but it's horrible on x86.

    SMT on x86 is overhyped and doesn't really work that well.

    Intel developed it mainly for professional applications and server workloads. Those are the only times when SMT can really be advantageous on x86. Other than that you'll see little to no returns with SMT on x86 (or "HyperThreading" as Intel likes to call it).

    deltatux
     
  6. Xtreme1979

    Xtreme1979 Guest

    Didn't miss it; it also applies here. Your incorrect use of "utilize" is what you're trying to defend. Or is it your choice to purchase a 2500K?

    As opposed to running all four cores at 100% with nothing left to give?

    Huh? What? Question makes no sense since we're talking about the same family of processors...


    You're entitled to your opinion, but don't preach it like it's the word of god. It just sounds like you're trying to justify your own purchasing decisions.
     
  7. deltatux

    deltatux Guest

    Messages:
    19,040
    Likes Received:
    13
    GPU:
    GIGABYTE Radeon R9 280
    Even if you run all four cores at 100%, it's basically the same as a 4C/8T chip running each logical core at around 50% (assuming all the cores are taxed and not just some). Remember, you don't magically get more physical hardware out of nothing: SMT just splits a single physical core into 2 logical cores. If the 4 physical cores are maxed at 100%, the 4C/8T chip is effectively maxed too, since the four physical cores are already at their limit. Even SMT cannot offer any more performance once the physical cores are loaded to their limits.

    The only thing SMT offers is an extra "core" at the software level, so the operating system can schedule another thread down the same pipe while the first thread is stalled waiting for resources or user input. That way the core isn't sitting idle. In theory, it should make the CPU more efficient. In practice, though, resource contention usually occurs instead, because two threads are being crammed down the same pipe: both are usually active at the same time and may be requesting the same resources, so one has to wait for the other to finish its request.

    This is why SMT only works better in theory. In reality, the performance gain you get is little to nothing.
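    If anyone wants to sanity-check this on their own machine, a minimal sketch along these lines would do it (Python, purely illustrative; the workload and job counts are made up, and purely CPU-bound work is roughly the worst case for SMT):

```python
# Rough sketch: run the same CPU-bound work on 4 workers and on 8 workers and
# compare wall-clock time. On a 4C/8T chip the 8-worker run is usually only a
# little faster, not 2x faster, because the extra 4 "cores" are logical, not physical.
import time
from multiprocessing import Pool

def burn(n):
    """Purely CPU-bound busy work (no I/O, no stalls)."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed_run(workers, jobs=16, n=2_000_000):
    with Pool(processes=workers) as pool:
        start = time.perf_counter()
        pool.map(burn, [n] * jobs)
        return time.perf_counter() - start

if __name__ == "__main__":
    t4 = timed_run(4)
    t8 = timed_run(8)
    print(f"4 workers: {t4:.2f}s, 8 workers: {t8:.2f}s, "
          f"speedup from the extra 4 logical cores: {t4 / t8:.2f}x")
```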

    deltatux
     
    Last edited: Aug 2, 2011
  8. ElementalDragon

    ElementalDragon Ancient Guru

    Messages:
    9,351
    Likes Received:
    30
    GPU:
    NVidia RTX 4090 FE
    Not sure what thread you're reading, Xtreme, but it's not only my opinion. Just about everyone here also says that the 2600K isn't all that worthwhile.

    1) Then you didn't post it, because it doesn't really help your argument any.
    2) I'd rather pay $250 for a processor and have it used to its full potential than pay $350 to get the same result.
    3) The same general idea still applies. Why get something just because of what it's possibly capable of rather than what it's actually capable of? Why pay for 8 threads when it's barely using 4?
     
  9. BlackZero

    BlackZero Guest

    Strange comment. If a game makes use of all 8 threads then it clearly 'utilises' them; just because he's running a GTX 560 Ti, which doesn't fully exploit the CPU's potential, doesn't mean the game isn't 'capable' of making good use of all the threads. That's like buying a GTX 580 to use with a dual-core processor and then blaming the GTX 580 for not being able to run any faster.

    It's a well-documented fact that HT cores, when utilised, add as much as 30% to performance. I also do a lot of video encoding, so I don't need to justify my purchase from a gaming point of view, but I don't see how you can deny that HT on the 2600K, when in full use, does add up to 30% performance, whether it's an application or a game. It's also well known that the Frostbite engine is extremely efficient at utilising HT cores.


    The point I was trying to make with my original comment was that when I bought a quad core, a lot of people on these very forums said it wasn't worth it for games; many even went on to buy the E8400 before realising their error and moving to Q9xxx processors. I, on the other hand, was sitting pretty on my Q6600 for a good 3 years, having bought the better processor at the time.

    That same Q6600 got me through an 8800 GTS, 8800 GTX, 8800 GTX SLI, GTX 280 and GTX 295, and I even ran my HD 6950 on that processor for a couple of months before upgrading to the 2600K. All that because I bought the more powerful CPU to begin with. I do, however, enjoy changing my GPU quite often, so a decent CPU is a must really, as it would be harder to justify upgrading both as regularly.


    I think I answered that above in my GTX 580 analogy.


    The 2500K cannot match a 2600K no matter how much you overclock it, as the 2600K has a larger cache. Even if you disable HT and overclock both to the same level, the 2600K will still be faster. Here in the UK a 2500K costs £170 and a 2600K costs around £235; that's roughly a 30% difference, and if you use the right applications then it's well worth the money.

    I can't predict which way games might be going, but I do know that Battlefield 3 will be taking advantage of the HT cores, and I don't see why other games won't follow suit soon enough, especially considering the number of gamers with HT-capable CPUs nowadays.


    Really now, Deltatux. So I guess the 30% performance bump we see when video encoding with HT cores is due to the magicians over at Intel?
     
    Last edited by a moderator: Aug 2, 2011
  10. deltatux

    deltatux Guest

    Messages:
    19,040
    Likes Received:
    13
    GPU:
    GIGABYTE Radeon R9 280
    Let's talk solid numbers now: that "30% performance increase" is exactly how many FPS faster? If it isn't more than 10 fps, it really isn't enough to justify a $100 increase in price, to be honest. In addition, that's an "up to" statement; how many games can actually demonstrate that "up to" number? Many ISPs advertise "up to" speeds, but not many really deliver that speed, especially for people using DSL.
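    To put a percentage into absolute frames per second, here's a back-of-the-envelope sketch (the baselines are illustrative numbers, not benchmark results from any review):

```python
# Illustrative only: what a 30% uplift means in absolute fps at different baselines.
def absolute_gain(base_fps, pct=30):
    return base_fps * pct / 100.0

for base in (30, 60, 120):
    gain = absolute_gain(base)
    print(f"{base:>3} fps baseline -> +{gain:.0f} fps ({base + gain:.0f} fps total)")
```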

    The extra 2 MB L3 cache hasn't really shown much of a performance difference even if you disable HT on the 2600K and compare it with the 2500K.
    Like I said, SMT performance really depends on the application being used. Some applications have threads that stall often, which frees the core up for the second thread. Applications whose threads use the cores more efficiently, rarely wait on resources, and don't need constant user input won't see much of an increase, because the two threads working concurrently on the same core end up fighting each other for core time and resources.

    Remember, even with SMT, each core can only execute one thread at a time. The only reason SMT works at all is that the scheduler can save a stalled thread's state and quickly switch to a thread that's ready to run, and it does this switching constantly based on when both the thread and its resources are ready. If more than one thread is ready, the scheduler has to decide which to run first; if it picks one it thinks has higher priority than, say, your game or video-encoding process, you'll see a performance dip or no increase at all.

    When I said that 4 physical cores and a 4C/8T implementation really don't differ in terms of processor load, I meant that many people seem to think they suddenly get 8 cores out of 4, which is impossible. If all 4 physical cores are completely loaded, all 8 of your SMT "thread cores" are effectively taxed as well, and the scheduler may not be able to ease the load no matter how much it switches between threads. In most cases it can't. In some cases, sure, I wouldn't be surprised to see that odd 30% increase, but it doesn't happen often, and even when it does, that 30% might only be 5 fps extra or something.

    At the end of the day, you're still using the same hardware, the difference is just how the scheduler handles the threads.

    EDIT: Is it just me, or are 2600K owners the only ones backing the 2600K, based solely on the fact that it has SMT? lol.

    deltatux
     
    Last edited: Aug 2, 2011

  11. TwL

    TwL Ancient Guru

    Messages:
    1,828
    Likes Received:
    0
    GPU:
    2x5850 2x6950 + 9800GTX
    Well, just my honest opinion.

    HT = crap; an idiotic paper-launch marketing design
    Real cores = the only thing that matters (counting 1 Bulldozer module as 1 core)
    Someone mentioned the future? = 8-XX REAL cores, not some faked crap that just heats a single core/module up.

    + We should already have technology that doesn't need this much cooling. All of today's so-called technology development is 10 years old already. :)
     
  12. sykozis

    sykozis Ancient Guru

    Messages:
    22,492
    Likes Received:
    1,537
    GPU:
    Asus RX6700XT
    The performance gains from using HyperThreading are well known and proven to be application-dependent. Also, not all games use the Frostbite engine, so using that as an example is a rather extreme stretch to justify buying an HT-enabled processor. In fact, I can only find 8 games that do use it, and 1 of those only uses it for multiplayer. Games such as World of Warcraft actually show a performance loss from HT, as do many other games from my understanding.

    lol, my ISP is the opposite. They claim "up to 15mbps"...and I've yet to see my connection get that slow. I regularly hit 20-22mbps...
     
  13. deltatux

    deltatux Guest

    Messages:
    19,040
    Likes Received:
    13
    GPU:
    GIGABYTE Radeon R9 280
    Lucky bastard, mine caps at my "up to" speed... my ISP isn't bad since I always get the speed I pay for, I just hate my bandwidth cap.

    Anyway, let's get back on topic :p.

    deltatux
     
  14. BlackZero

    BlackZero Guest

    ISPs not delivering the performance promised? So we're talking apples and pears now? The fact that you live a certain distance from your ISP can't be changed, but you can change the applications you use on your PC a little more easily. And what isn't more than 10 fps? That 30% is not a theoretical number; it's a fact that if an application makes full use of the HT cores then you get 30% on top of the initial number. Not sure what 10 fps has got to do with anything. The GTX 580 delivers around 20-30% more performance than a GTX 570, but only when you pair it with other components that can match it; otherwise it's a waste as well.


    It's there primarily to aid the HT cores, but if you use the right application it will obviously perform better than a smaller cache. Just because some applications don't make use of it doesn't mean it's of no use to others.

    I didn't say all games use the Frostbite engine... it's an example of 'a' game that uses HT cores... I don't know where you got the idea that I was justifying buying a 2600K based on a single game... I certainly didn't write that.

    I also don't have a crystal ball that tells me exactly what 'future' games will be doing, but given that the most popular game currently available and the most eagerly awaited game are both going to be using HT cores, I said I would not be surprised if other games followed suit.

    I, however, have more uses for my processor than just gaming, as I already wrote. But even from a gaming perspective, if I'm going to buy a processor I want it to last a few years, so I don't see why I should not buy something closer to the top of the range that already provides better performance in a host of applications and some games, and has the potential for further improvement in games, rather than something which costs less, provides less performance in many applications, and has no potential future benefit.
     
    Last edited by a moderator: Aug 2, 2011
  15. deltatux

    deltatux Guest

    Messages:
    19,040
    Likes Received:
    13
    GPU:
    GIGABYTE Radeon R9 280
    You do realize that there's nothing in the Win32 API that lets an application differentiate between a physical core and an SMT core beyond merely detecting how many you have; it can't really dedicate work to only "real" cores or only SMT cores. So application programmers can't simply program applications to run on SMT more efficiently. They can make them use each thread efficiently, and that's about it.
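    (The detection part is trivial from user code, for what it's worth. A tiny sketch, using Python with the third-party psutil package rather than the Win32 API directly, purely as an illustration:)

```python
# Minimal sketch: detecting physical vs. logical core counts from user code.
# Requires the third-party psutil package (pip install psutil).
import os
import psutil

logical = os.cpu_count()                    # logical processors (HT threads included)
physical = psutil.cpu_count(logical=False)  # physical cores only

print(f"{physical} physical cores, {logical} logical processors")
if logical and physical and logical > physical:
    print("SMT/HyperThreading appears to be enabled.")
```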

    Also, that up to 30% isn't guaranteed and most applications can't reach that "up to" number. Only a select few. At this point, you're basically gambling your $100 hoping that one of your application will give you that "up to" performance.

    It doesn't seem like you understand how cache works, do you? You can't make an application use the cache any better directly. Access to the caches is handled by the CPU; the best a software developer can do is make the application more memory-efficient so that the CPU gets more cache hits than misses. The hit/miss ratio is determined by how well the CPU's architecture handles the cache and how it compensates for misses. More cache doesn't necessarily make performance better: if you have more cache but the CPU misses more often than it hits, the extra cache size is basically wasted from a performance standpoint. However, if the CPU hits more often than it misses, then the larger cache may help.

    The cache on a CPU is basically there to hold instructions and data pre-emptively, with the prefetch logic and scheduler predicting what data will be needed from main memory before it's actually needed. If the prediction is correct, it's a cache hit; if it's incorrect, it's a cache miss. All modern CPU architectures still have cache misses; it's nearly impossible for the CPU to guess correctly 100% of the time.

    Preloading data and instructions into the cache makes processing faster, but on a cache miss the CPU has to fetch the correct data from main memory at the moment it's needed, which slows the core down. This is why having more L3 cache doesn't necessarily mean better performance.
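    Roughly speaking, it's the hit rate that dominates, not the raw size. A textbook-style back-of-the-envelope sketch (the latency numbers below are made up, not Sandy Bridge measurements):

```python
# Back-of-the-envelope: average memory access time = hit_time + miss_rate * miss_penalty.
# The latency figures are hypothetical, chosen only to show that the miss rate,
# not the raw cache size, is what moves the average.
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

print(amat(10, 0.10, 60))  # 10% miss rate -> 16.0 ns average access
print(amat(10, 0.05, 60))  # 5% miss rate  -> 13.0 ns average access
```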

    A larger L3 cache means that, depending on how the CPU caches data, you can hold a bit more "just in case" in theory, but in practice it doesn't seem to improve performance much. Unfortunately, cache uses SRAM, which is very expensive; SMT doesn't really cost much to add, SRAM does. CPU designers often lower the set-associativity when they fit more cache into the CPU: the lower the set-associativity, the faster the lookups, but at the same time more misses happen, hence the need for a larger amount of L3. If the set-associativity stays the same, a larger cache should in theory be better, but that gain has yet to really materialize.

    deltatux
     

  16. Xtreme1979

    Xtreme1979 Guest

    Interesting that most of those people own AMD or 2500K chips like yourself. How many 2600K owners have chimed in stating they're unhappy with their processor and wish they had invested their $100 differently?


    The facts speak for themselves so there is really nothing to argue.

    True, then you can upgrade to a faster CPU when today's full potential is no longer enough. Power to you! It's your choice; just don't go around saying 8 threads aren't utilized ("used", for the layman) when it's clearly evident they are.

    Sounds like the dual-core preachers when quad cores were released. Benches and my own screenshot, which started this big waste of time, prove that 8 threads can be used when needed. Most of the software being released is multithreaded, UTILIZING three or more cores with scaling, and that fact is only going to be amplified moving forward.

    I leave this post with my last piece of factual evidence of which CPU is better at gaming by posting a multithreaded gaming benchmark from our very own Hilbert:
    http://www.guru3d.com/article/core-i5-2500k-and-core-i7-2600k-review/21

    When the CPU is stressed at a lower resolution, which quad core comes out on top? Meh, maybe it's the extra 2 MB of L3 cache, or the extra 100 MHz. :rolleyes:

    Goodnight and peace out..
     
  17. BlackZero

    BlackZero Guest

    Why don't you come down from your know-it-all high horse and maybe consider that the engineers at Intel might know a little more than you? All you just wrote is a bunch of might-be or might-not-be, nothing of any significance, yet you talk as if you know it all... So I could just as well say that 'if' the CPU makes full use of the cache 'then' the CPU performs better... how's that any different from what you just wrote? Am I getting you right?

    I think there isn't much more to be said here.. really:)
     
    Last edited by a moderator: Aug 2, 2011
  18. deltatux

    deltatux Guest

    Messages:
    19,040
    Likes Received:
    13
    GPU:
    GIGABYTE Radeon R9 280
    All you've basically shown is that the performance is conditional: "if" you run this app, "then" you get the 30% increase. You haven't really shown me that you'll see that 30% across the board.

    That's the whole issue with SMT, it's all an "if". You never get a guaranteed performance increase and the rate of increase is all application based. It's all theoretical.

    Maybe you should realize that SMT doesn't live up to Intel's hype. I know for sure the Intel engineers know what they're doing. However, it's a different story when the marketing department wants to sell you the product and overblows it.

    If you'd taken a basic hardware course in university, you'd understand the basics of caches and how CPUs work, and you wouldn't have claimed that the size of the L3 cache helps SMT.

    You still haven't explained how that $100 can be justified by very circumstantial performance gains. On average, SMT and the extra 2 MB of L3 cache give little to no performance increase to justify that $100. I've even used technical explanations based on basic hardware knowledge, Hardware 101 stuff, but you still can't give me a reason beyond saying "Intel's smarter than you" or that applications just need to know how to use SMT efficiently. Of course Intel's engineers are smarter than me, but that doesn't justify getting the 2600K over the 2500K. In addition, that 30% increase you keep professing may be something as small as a 10 fps increase, which is still minuscule if I have to pay $100 extra for it. I may as well spend the difference on a better GPU that gives me more than that 30% increase.
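    Put differently (a throwaway calculation with made-up numbers, just to frame the argument; the 5% figure below is a hypothetical "typical game" uplift, not a benchmark):

```python
# Rough cost-per-benefit figure (all numbers hypothetical): how much each extra
# frame per second costs if the $100 premium buys a given percentage uplift.
def dollars_per_extra_fps(premium_usd, base_fps, uplift_pct):
    extra_fps = base_fps * uplift_pct / 100.0
    return premium_usd / extra_fps if extra_fps else float("inf")

print(f"${dollars_per_extra_fps(100, 60, 30):.2f} per extra fps at a best-case 30% uplift")
print(f"${dollars_per_extra_fps(100, 60, 5):.2f} per extra fps at a hypothetical 5% uplift")
```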

    In addition, I also explained that you can't just make applications work better with SMT. There's no API facility that helps software developers use SMT more efficiently; all they can do is code for better thread management and thread efficiency, which means the application will scale just as well on physical cores, so why bother with SMT? In reality, though, not many applications scale well enough or perform well enough to justify the cost difference between having SMT and not having it.

    Can you expect someone to justify spending $100 for a theoretical increase that's application-dependent and that most applications can't utilize at all? If the price difference between the 2500K and 2600K were, say, $50 or less, then I'd say the 2600K is the better bet, but the price difference now is $100.

    Having real cores is always infinitely better than spending on SMT.

    deltatux
     
    Last edited: Aug 2, 2011
  19. Agent-A01

    Agent-A01 Ancient Guru

    Messages:
    11,628
    Likes Received:
    1,119
    GPU:
    4090 FE H20
    Deltatux, I think you are going to be majorly disappointed when you see AMD's 8-core Bulldozers NOT outperforming an i7 2600K with its 4 physical and 4 virtual cores.

    As long as an application uses more than 4 threads effectively, there will always be a performance increase. Obviously, if it doesn't, it's going to be a waste for an HT CPU or even an 8-physical-core CPU. Be it a 30% gain or not, an improvement is an improvement. You can't say it's a waste, because there are games and many apps that use the so-called "fake cores".
     
  20. ElementalDragon

    ElementalDragon Ancient Guru

    Messages:
    9,351
    Likes Received:
    30
    GPU:
    NVidia RTX 4090 FE
    .... i think Xtreme just kinda made the entire point against the 2600K.

    Yep. Let's spend $300+ on a CPU and $500+ on a GPU, and play at 1680x1050 or lower (which is about the only situation in games where you see more than a barely noticeable bump in framerate).

    Dude... you're right... I'm sure NOBODY will chime in saying they wish they'd not spent more for the 2600K. You know why? BECAUSE THEY HAVE IT! They decided to go for the top-of-the-line. As you'd say... "power to them". Do they know which one definitely performs better than the other? Maybe... maybe not. But like logical people, a lot of us look at those things called benchmarks... tests performed in a specific environment with the only change being the actual piece of hardware under test. What do we find when they're looked at? The same thing that's being discussed here. Yes... the 2600K with its additional pseudo-cores has the POTENTIAL to be "up to 30% faster"... but whether that shows up with encoding software, gaming... it all depends on what software is being used. In some cases the 2600K edges out a lead by a small margin, in most cases it seems about even. There's even the rare case where the 2500K performs slightly BETTER than the 2600K. And you can't exactly blame that on the video card, since no hardware but the CPU was changed.

    BlackZero: I didn't know "10% more fps" was such a hard concept to understand. Let me try to put this whole thing into an example you might understand.

    On one hand... you have a video card that costs $300... and according to benchmarks.... you'll be able to play your favorite game at your native resolution at... say... 60fps, and encode a BluRay rip in about 30 minutes.

    On the other hand... you have a video card that costs $400.... with specs that put it quite well ahead of the $300 card.... yet according to the same benchmarks, you'll play your favorite game at your native resolution.... at 66fps..... and encode the same movie in about 27 minutes.

    That's basically the point we're trying to get at. You keep spouting this "up to 30%" crap right from Intel's mouth... yet not only is it application-based... it's also situation-based. Would a 10% faster rip of a movie be worth an extra $100 if that only saves you 3 minutes? Hell, even 30% is only 9 minutes. And I don't know about you, but if I'm ripping a movie... I generally set it and forget it, coming back later long after it's finished.
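    For the curious, the arithmetic behind that (a throwaway sketch; the card prices and the 30-minute encode are hypothetical, as in the example above, and strictly speaking "X% faster" saves even less wall-clock time than a flat X% cut):

```python
# How many minutes a given throughput speedup actually saves on a 30-minute encode.
# "30% faster" means the job takes base / 1.3 of the time, not 30% less time.
def time_saved(base_minutes, speedup_pct):
    return base_minutes - base_minutes / (1 + speedup_pct / 100.0)

for pct in (10, 30):
    print(f"{pct}% faster on a 30-minute encode saves "
          f"{time_saved(30, pct):.1f} minutes")
```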

    Long story short... the only time you're going to see that theoretical 30% performance increase be anywhere near worthwhile is when the tasks being performed are already incredibly long. And I can't even comment on framerates, since if the framerate is high enough that a 30% increase looks like a nice number, it's already running quite quick.

    And no.... that 30% IS a theoretical number. If it were FACT, even you wouldn't have to use the phrase "if an application makes full use of the HT cores"..... which is funny in itself since they aren't actually cores.
     
    Last edited: Aug 2, 2011
