Guru3D.com Forums



View Poll Results: Which is better for gaming?
Core i7 2600K 34 31.78%
Core i5 2500K 73 68.22%
Voters: 107.

(#51) TwL - Maha Guru
Videocard: 2x5850 2x6950 + 9800GTX
Processor: I7-980 + I5-2500K
Mainboard: UD7+ROG & P8Z68-V Deluxe
Memory: Elpida Hyper
Soundcard: RealTek
PSU: OCZ GXS 850w+1200w
08-02-2011, 04:59 | posts: 1,828 | Location: Finland

Well, just my honest opinion.

HT = crap, idiotic paper-launch marketing design.
Real cores = the only thing that matters (including 1 Bulldozer module = 1 core).
Someone mentioned the future? = 8-XX REAL cores, not some faked crap that heats up a single core/module.

+ We should already have technology that doesn't need this kind of cooling. All of today's so-called technology development is already 10 years old.
   
(#52) sykozis - Ancient Guru
Videocard: eVGA GTX660SC SLI
Processor: Core i7 2600K
Mainboard: ASRock Z77 Extreme4
Memory: 8gb G.Skill DDR3-1866
Soundcard: Creative Recon3D PCIe
PSU: SeaSonic M12II 620 Bronze
08-02-2011, 05:11 | posts: 16,063 | Location: US East Coast

Quote:
Originally Posted by BlackZero View Post
It's a well documented fact that HT cores, when utilised, add as much as 30% to performance. I also do a lot of video encoding, so I don't need to justify my purchase from a gaming point of view, but I don't see how you can deny the fact that HT on the 2600K, when in full use, does add up to 30% performance, regardless of whether it's an application or a game. It's also well known that the Frostbite engine is extremely efficient at utilising HT cores.
The performance gains from using HyperThreading are very well known and proven to be application dependent. Also, not all games use the "frostbite engine", so using that as an example is a rather extreme stretch to justify buying an HT enabled processor. In fact, I can only find 8 games that do use it, and 1 of those games only uses it for multiplayer. Games such as World of Warcraft actually show a performance loss from HT, as do many other games from my understanding.

Quote:
Originally Posted by deltatux View Post
Many ISPs advertise 'up to' speeds, but not many really go up to that speed, especially for people using DSL.
lol, my ISP is the opposite. They claim "up to 15mbps"...and I've yet to see my connection get that slow. I regularly hit 20-22mbps...
   
(#53) deltatux - Ancient Guru
Videocard: XFX Radeon HD 6870
Processor: Intel Core i5 3570K @4.5
Mainboard: GIGABYTE GA-Z77X-UD5H
Memory: Patriot 4 x 4GB DDR3-1600
Soundcard: Auzentech X-Raider 7.1
PSU: OCZ ModXStream Pro 500W
08-02-2011, 05:13 | posts: 19,054 | Location: Toronto, Canada

Quote:
Originally Posted by sykozis View Post
lol, my ISP is the opposite. They claim "up to 15mbps"...and I've yet to see my connection get that slow. I regularly hit 20-22mbps...
Lucky bastard, mine caps at my 'up to' speed... my ISP isn't bad since I always get the speed I pay for, I just hate my bandwidth cap.

Anyway, let's get back on topic.

deltatux
   
(#54) BlackZero - Ancient Guru
Videocard: MSI 7970 OC
Processor: 2600K H2O
Mainboard: Asus P67 Pro
Memory: G.Skill 2133
Soundcard: X-Fi + 2400ES
PSU: Corsair AX850
08-02-2011, 05:16 | posts: 8,109 | Location: United Kingdom

Quote:
Originally Posted by deltatux View Post
Let's talk solid numbers now: that "30% performance increase" is exactly how many FPS faster? If it isn't more than 10 fps, it really isn't much to justify a $100 increase in price, to be honest. In addition, that's an 'up to' statement; how many games can demonstrate that 'up to' number? Many ISPs advertise 'up to' speeds, but not many really go up to that speed, especially for people using DSL.
ISPs not delivering the performance promised, so we're talking apples and pears now? The fact that you live a certain distance from an ISP can not be changed, but you can change the applications you use on your PC a little more easily. And what isn't more than 10 fps? That 30% is not a theoretical number; it's a fact that if an application makes full use of the HT cores then you will get 30% on top of the initial number, so I'm not sure what 10 fps has got to do with anything. The GTX 580 delivers around 20-30% more performance than a GTX 570, but only when you pair it with other components that can match it, otherwise it's a waste as well.


Quote:
Originally Posted by deltatux View Post
The extra 2 MB L3 cache hasn't really shown much of a performance difference even if you disable HT on the 2600K and compare it with the 2500K.

deltatux
It's primarily there to aid the HT cores, but if you use the right application it will obviously perform better than a smaller cache. Just because some applications don't make use of it doesn't mean it's of no use to others.

Quote:
Originally Posted by sykozis View Post
not all games use the "frostbite engine", so using that as an example is a rather extreme stretch to justify buying an HT enabled processor.
I didn't say all games use the Frostbite engine... it's an example of 'a' game that uses HT cores... I don't know where you got the idea that I was justifying buying a 2600K based on a single game... I certainly didn't write it.

I also don't have a crystal ball that tells me exactly what 'future' games will be doing, but given that the most popular game currently available and the most eagerly awaited game are both going to be using HT cores, I said I would not be surprised if other games followed suit.

I do, however, have more uses for my processor than just gaming, as I already wrote. But even from a gaming perspective, if I'm going to buy a processor then I want it to last a few years, so I don't see why I should not buy something that's closer to the top of the range, already provides better performance in a host of applications and some games, and has the potential for further improvement in games, rather than something which costs less, will provide less performance in many applications, and has no potential future benefit.

Last edited by BlackZero; 08-02-2011 at 05:35.
   
(#55) deltatux - Ancient Guru
Videocard: XFX Radeon HD 6870
Processor: Intel Core i5 3570K @4.5
Mainboard: GIGABYTE GA-Z77X-UD5H
Memory: Patriot 4 x 4GB DDR3-1600
Soundcard: Auzentech X-Raider 7.1
PSU: OCZ ModXStream Pro 500W
08-02-2011, 05:34 | posts: 19,054 | Location: Toronto, Canada

Quote:
Originally Posted by BlackZero View Post
ISPs not delivering the performance promised, so we're talking apples and pears now? The fact that you live a certain distance from an ISP can not be changed, but you can change the applications you use on your PC a little more easily. And what isn't more than 10 fps? That 30% is not a theoretical number; it's a fact that if an application makes full use of the HT cores then you will get 30% on top of the initial number, so I'm not sure what 10 fps has got to do with anything. The GTX 580 delivers around 20-30% more performance than a GTX 570, but only when you pair it with other components that can match it, otherwise it's a waste as well.
You do realize that there's nothing in the Win32 API that allows an application to differentiate between a physical core and an SMT core, beyond a mere detection of how many you have; it can't really dedicate work to only the "real" cores or only the SMT cores. So application programmers can't simply program applications to run on SMT more efficiently. They can make them use each thread efficiently, and that's about it.
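
For reference, the detection side of this looks roughly like the minimal C++ sketch below: GetLogicalProcessorInformation reports how many logical processors share each physical core, but nothing in it steers work onto one kind or the other (error handling omitted for brevity):

[code]
// Minimal sketch: count physical cores vs. logical processors on Windows.
// This only *detects* the topology; it does not schedule work onto "real" vs. SMT processors.
#include <windows.h>
#include <vector>
#include <cstdio>

int main() {
    DWORD len = 0;
    GetLogicalProcessorInformation(nullptr, &len);   // first call just reports the needed buffer size
    std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> info(
        len / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
    GetLogicalProcessorInformation(info.data(), &len);

    int physical = 0, logical = 0;
    for (const auto& e : info) {
        if (e.Relationship == RelationProcessorCore) {
            ++physical;
            // each set bit in ProcessorMask is one logical processor on this core
            for (ULONG_PTR m = e.ProcessorMask; m != 0; m >>= 1)
                logical += static_cast<int>(m & 1);
        }
    }
    std::printf("physical cores: %d, logical processors: %d\n", physical, logical);
    return 0;
}
[/code]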

Also, that 'up to 30%' isn't guaranteed, and most applications can't reach that "up to" number, only a select few. At this point, you're basically gambling your $100 hoping that one of your applications will give you that "up to" performance.

Quote:
Originally Posted by BlackZero View Post
It's there to aid the ht cores primarily, but if you use the correct application it will obviously perform better than a lesser cache. Just because some applications don't make use of it doesn't mean it's of no use to another.
It doesn't seem you understand how cache works, do you? You can't make an application use the cache any better. Access to the caches is handled by the CPU; the best a software developer can do is make their applications more memory efficient so that the CPU may have more cache hits than misses. The cache hit/miss ratio is determined by how well the CPU's architecture can handle the cache and how it compensates for cache misses. More cache doesn't necessarily make performance better. If you have more cache but the CPU has more cache misses than hits, then the extra cache size is basically wasted from a performance standpoint. However, if the CPU has more cache hits than misses, then the larger cache may help.

The cache on a CPU is basically there to load instructions and data pre-emptively, with the logic window and scheduler predicting what data will be needed from main memory before it's actually needed. If the prediction is correct, it's considered a cache hit; if it's incorrect, it's called a cache miss. All modern CPU architectures still have cache misses. It's nearly impossible for the CPU to get the guesses correct 100% of the time.

Preloading data and instructions into the cache makes processing faster, but on a cache miss the CPU has to discard that data and load the correct data from main memory when it's needed, which slows down the core. This is why having more L3 cache doesn't necessarily mean better performance.

A larger L3 cache means that, depending on how the CPU caches data, you can in theory hold a bit more just in case, but in practice it doesn't seem to improve performance much. Unfortunately, cache uses SRAM, which is very expensive; SMT doesn't really cost much, SRAM does. Often CPU designers lower the set-associativity when they fit more cache into the CPU. The lower the set-associativity, the faster the cache lookups go, but at the same time more misses will happen, hence the need for a larger amount of L3 cache. However, if the set-associativity is the same, a larger cache should in theory be better, but that has yet to really show up in practice.
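
To make the "more memory efficient" point concrete, here is a rough, illustrative C++ sketch of the same work done with two different access patterns; the row-major walk reuses cache lines the hardware has already fetched, while the column-major walk keeps jumping across them and tends to miss far more often:

[code]
// Rough sketch: identical arithmetic, very different cache behaviour.
#include <cstdio>

constexpr int N = 2048;
static float a[N][N];   // ~16 MB, far larger than the L3 cache

// Row-major walk: consecutive elements share cache lines, so most accesses hit.
float sum_rows() {
    float s = 0.0f;
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            s += a[i][j];
    return s;
}

// Column-major walk: each access jumps N floats ahead, so far more accesses miss.
float sum_cols() {
    float s = 0.0f;
    for (int j = 0; j < N; ++j)
        for (int i = 0; i < N; ++i)
            s += a[i][j];
    return s;
}

int main() {
    std::printf("%f %f\n", sum_rows(), sum_cols());   // same result, different speed
    return 0;
}
[/code]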

deltatux
   
(#56) Xtreme1979 - Maha Guru
Videocard: EVGA GTX 680 2GB O/C
Processor: 2600K 4.4-4.7gHZ 1.30v
Mainboard: MSI P67A-C43 B3
Memory: DDR3 Ripjaws Z 2133 4x4gb
Soundcard: X-Fi/Klipsch ProMedia 2.1
PSU: SeaSonic X650 Gold
08-02-2011, 05:36 | posts: 1,256 | Location: Bay City, MI

Quote:
Originally Posted by ElementalDragon View Post
Not sure what thread you're reading, Xtreme, but it's not only my opinion. Just about everyone here also says that the 2600K isn't all that worthwhile.
Interesting that most of those people own AMD or 2500K chips like yourself. How many 2600K owners have chimed in stating they're unhappy with their processor and wish they had invested their $100 differently?


Quote:
then you didn't post it cause it doesn't really help your argument any.
The facts speak for themselves so there is really nothing to argue.

Quote:
2) I'd rather pay $250 for a processor and have it used to its full potential than pay $350 to get the same result.
True, then you can upgrade to the faster CPU when today's full potential is no longer enough. Power to you! It's your choice; just don't go around saying 8 threads aren't utilized (used, for the layman) when it's clearly evident they are.

Quote:
3) the same general idea still applies. Why get something just because of what it's possibly capable of rather than what it's actually capable of? Why pay for 8 threads when it's barely using 4?
Sounds like the dual-core preachers when quad cores were released. Benches and my own screenshot, which started this big waste of time, prove that 8 threads can be used when needed. Most of the software being released now is multithreaded, utilizing three or more cores with scaling, and that is only going to be amplified moving forward.

I leave this post with my last piece of factual evidence of which CPU is better at gaming by posting a multithreaded gaming benchmark from our very own Hilbert:
http://www.guru3d.com/article/core-i...600k-review/21

When the CPU is stressed at a lower resolution, which quad core comes out on top? Meh, maybe it's the extra 2MB of L3 cache, or the extra 100MHz.

Goodnight and peace out..
   
(#57) BlackZero - Ancient Guru
Videocard: MSI 7970 OC
Processor: 2600K H2O
Mainboard: Asus P67 Pro
Memory: G.Skill 2133
Soundcard: X-Fi + 2400ES
PSU: Corsair AX850
08-02-2011, 05:46 | posts: 8,109 | Location: United Kingdom

Quote:
Originally Posted by deltatux View Post
It doesn't seem you understand how cache works, do you? You can't make an application use the cache any better. Access to the caches is handled by the CPU; the best a software developer can do is make their applications more memory efficient so that the CPU may have more cache hits than misses. The cache hit/miss ratio is determined by how well the CPU's architecture can handle the cache and how it compensates for cache misses. More cache doesn't necessarily make performance better. If you have more cache but the CPU has more cache misses than hits, then the extra cache size is basically wasted from a performance standpoint. However, if the CPU has more cache hits than misses,
Why don't you come down from your know-it-all high horse and maybe consider that the engineers at Intel might know a little more than you? All you just wrote is a bunch of might-bes and might-not-bes, nothing of any significance, yet you say you know it all.. so I could just as well say that 'if' the CPU makes full use of the cache 'then' the CPU performs better... how's that any different from what you just wrote? Am I getting you right?

I think there isn't much more to be said here.. really

Last edited by BlackZero; 08-02-2011 at 05:52.
   
(#58) deltatux - Ancient Guru
Videocard: XFX Radeon HD 6870
Processor: Intel Core i5 3570K @4.5
Mainboard: GIGABYTE GA-Z77X-UD5H
Memory: Patriot 4 x 4GB DDR3-1600
Soundcard: Auzentech X-Raider 7.1
PSU: OCZ ModXStream Pro 500W
08-02-2011, 06:22 | posts: 19,054 | Location: Toronto, Canada

Quote:
Originally Posted by BlackZero View Post
Why don't you come down from your know-it-all high horse and maybe consider that the engineers at Intel might know a little more than you? All you just wrote is a bunch of might-bes and might-not-bes, nothing of any significance, yet you say you know it all.. so I could just as well say that 'if' the CPU makes full use of the cache 'then' the CPU performs better... how's that any different from what you just wrote? Am I getting you right?

I think there isn't much more to be said here.. really
All you've basically shown is that you can only get said performance, "if" you run this app, "then" you get the 30% increase. You haven't really shown me that across the board you will see said 30%.

That's the whole issue with SMT, it's all an "if". You never get a guaranteed performance increase and the rate of increase is all application based. It's all theoretical.

Maybe then you'll realize that SMT isn't all it's hyped up to be, however much Intel wants you to think otherwise. I know for sure the Intel engineers know what they're doing. However, it's a different story when the marketing department wants to sell you the product and overblows it.

If you'd taken a basic hardware course in university, you'd understand the basics of caches and how CPUs work, and you wouldn't have made your statement about the size of the L3 cache helping SMT.

You still haven't explained how that $100 can be justified by the very circumstantial performance gains. On average, SMT and the extra 2 MB of L3 cache give little to no performance increase to justify that $100. I've even used technical explanations based on basic hardware knowledge from something like Hardware 101, but you still can't give me a reason except saying "Intel's smarter than you" or that applications just need to know how to use SMT efficiently. Well, of course Intel engineers are smarter than me, but that doesn't justify getting the 2600K over the 2500K. In addition, that 30% increase that you keep professing may be something as small as a 10 fps increase, which is still minuscule if I have to pay $100 extra for it. I may as well spend the difference on a better GPU that gives me more than that 30% increase.

In addition, I also explained that you can't just make applications work better with SMT. There's no API facility that helps software developers use SMT more efficiently; all they can do is code for better thread management and better thread efficiency, which means the application will scale just as well with physical cores, so why bother with SMT? In reality, though, not many applications can scale that well, or perform well enough, to justify the cost difference between having SMT and not having it.
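
In concrete terms, "better thread management" usually just means sizing the worker pool to however many hardware threads the OS reports, with no idea whether those are physical cores or SMT siblings. A minimal, hypothetical C++ sketch (the worker function is a placeholder):

[code]
// Minimal sketch: a worker-per-hardware-thread pool. The code only sees a count
// (8 on a 2600K with HT on, 4 on a 2500K); it cannot target "real" cores specifically.
#include <thread>
#include <vector>
#include <cstdio>

void worker(unsigned id) {
    // ... some CPU-bound slice of the workload would go here ...
    std::printf("worker %u done\n", id);
}

int main() {
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;               // the call is allowed to return 0; fall back to a guess
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back(worker, i);
    for (auto& t : pool)
        t.join();
    return 0;
}
[/code]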

Can you expect someone to justify spending $100 to get a theoretical increase that's application dependent and that most applications cannot utilize at all? If the price difference between the 2500K and 2600K were, say, $50 or less, then I would say the 2600K is the better bet, but the price difference right now is $100.

Having real cores is always infinitely better than spending on SMT.

deltatux

Last edited by deltatux; 08-02-2011 at 06:26.
   
(#59) Agent-A01 - Ancient Guru
Videocard: GTX Titan H20 1398/7600
Processor: i7 3770K@5Ghz HT H20
Mainboard: Asus P8Z77-WS
Memory: G.Skill 8GBx2 2400
Soundcard: Xonar Phoebus-PC360/HD598
PSU: SeaSonic Platinum-1000
08-02-2011, 07:22 | posts: 6,194 | Location: USA

deltatux, I think you are going to be majorly disappointed when you see AMD 8-core Bulldozers NOT outperforming an i7 2600K with its 4 physical and 4 virtual cores.

As long as an application uses more than 4 threads effectively, there will always be a performance increase. Obviously, if it doesn't, it's going to be a waste on an HT CPU or even an 8-physical-core CPU. Be it or be it not a 30% gain, an improvement is an improvement. You can't say it's a waste, because there are games and many apps that use the so-called "fake cores".
   
(#60) ElementalDragon - Ancient Guru
Videocard: eVGA GeForce GTX 760 ACX
Processor: Core i5 4670K
Mainboard: Asus Z87I-Deluxe
Memory: 16GB G.Skill RipJawsX
Soundcard: on-board
PSU: Seasonic 650W
08-02-2011, 07:24 | posts: 8,588 | Location: Pennsylvania, USA

... I think Xtreme just kinda made the entire point against the 2600K.

Quote:
When the CPU is stressed at a lower resolution, which quad core comes out on top? Meh, maybe it's the extra 2MB of L3 cache, or the extra 100MHz.
Yep. Let's spend $300+ on a CPU and $500+ on a GPU, and play at 1680x1050 or lower (which is about the only point in games where you see more than a barely negligible bump in framerate).

Dude.... you're right.... I'm sure NOBODY will chime in saying they wish they'd not spent more for the 2600K. You know why? CAUSE THEY HAVE IT! They decided to go for the top-o-the-line. As you'd say.... "power to them". Do they know which one definitely performs better than the other? Maybe... maybe not. But like logical people, a lot of us look at those things called benchmarks.... tests performed in a specific environment with the only change being the actual piece of hardware being tested. What do we find when they are looked at? The same thing that's being discussed here. Yes... the 2600K with its additional pseudo-cores has the POTENTIAL to be "up to 30% faster".... but whether it's with encoding software... gaming.... it all depends on what software is being used. In some cases the 2600K edges into the lead by a margin, in most cases it seems about even. There's even the rare case where the 2500K performs slightly BETTER than the 2600K. And you can't exactly blame that on the video card, since no hardware but the CPU was changed.

BlackZero: I didn't know "10% more fps" was such a hard concept to understand. Let me try to put this whole thing into an example you might understand.

On one hand... you have a video card that costs $300... and according to benchmarks.... you'll be able to play your favorite game at your native resolution at... say... 60fps, and encode a BluRay rip in about 30 minutes.

On the other hand... you have a video card that costs $400.... with specs that put it quite well ahead of the $300 card.... yet according to the same benchmarks, you'll play your favorite game at your native resolution.... at 66fps..... and encode the same movie in about 27 minutes.

That's basically the point we're trying to get at. You keep spouting this "up to 30%" crap right from Intel's mouth..... yet not only is it application based.... it's also situation based. Would a 10% faster rip of a movie be worth an extra $100 if that only saves you 3 minutes? Hell, even 30% is only 9 minutes. And I don't know about you, but if I'm ripping a movie... I generally set it and forget it, coming back later after it's been well beyond finished.

Long story short... the only time you're going to see that theoretical 30% performance increase be anywhere near worthwhile is if the tasks being performed are already incredibly long. And I can't even comment on framerates, since if the framerate is so high that a 30% increase seems quite nice, it's already running quite quick.

And no.... that 30% IS a theoretical number. If it were FACT, even you wouldn't have to use the phrase "if an application makes full use of the HT cores"..... which is funny in itself since they aren't actually cores.
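
One small footnote on the arithmetic above, which if anything strengthens the point: "30% faster" is a throughput figure, so a 30-minute encode drops to about 30/1.3 ≈ 23 minutes, a saving of roughly 7 minutes rather than 9. A quick, hypothetical C++ sketch of the sums using the numbers from the example above:

[code]
// Quick sanity check: what an "X% faster" encoder actually saves on a 30-minute job.
#include <cstdio>

int main() {
    const double base_minutes = 30.0;
    for (double gain : {0.10, 0.30}) {                  // 10% and 30% higher throughput
        double new_minutes = base_minutes / (1.0 + gain);
        std::printf("%2.0f%% faster: %4.1f min (saves %.1f min)\n",
                    gain * 100.0, new_minutes, base_minutes - new_minutes);
    }
    return 0;
}
[/code]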

Last edited by ElementalDragon; 08-02-2011 at 07:26.
   
(#61) BlackZero - Ancient Guru
Videocard: MSI 7970 OC
Processor: 2600K H2O
Mainboard: Asus P67 Pro
Memory: G.Skill 2133
Soundcard: X-Fi + 2400ES
PSU: Corsair AX850
08-02-2011, 07:39 | posts: 8,109 | Location: United Kingdom

Quote:
Originally Posted by deltatux View Post
All you've basically shown is that you can only get said performance, "if" you run this app, "then" you get the 30% increase. You haven't really shown me that across the board you will see said 30%. [...]
All I see is you trying to justify why someone who can afford it shouldn't buy a 'better' CPU... as for caches.. threads compete for cache space, and the more cache you have, the more efficient the operation... is that basic enough?

Quote:
Originally Posted by ElementalDragon View Post
... I think Xtreme just kinda made the entire point against the 2600K.

[...]

And no.... that 30% IS a theoretical number. If it were FACT, even you wouldn't have to use the phrase "if an application makes full use of the HT cores"..... which is funny in itself since they aren't actually cores.
I don't see the need to go into much detail, but what's so difficult about percentages that people must use hypothetical numbers like 10 fps?

lol, most games don't even use 4 cores, so why buy a 2500K, why not a dual core?

I think this discussion has run its course.

PS: go check the Folding@home section; the 2600K scores more than 30% over the 2500K on average, is that theory as well?

Last edited by BlackZero; 08-02-2011 at 16:07.
   
(#62) Agent-A01 - Ancient Guru
Videocard: GTX Titan H20 1398/7600
Processor: i7 3770K@5Ghz HT H20
Mainboard: Asus P8Z77-WS
Memory: G.Skill 8GBx2 2400
Soundcard: Xonar Phoebus-PC360/HD598
PSU: SeaSonic Platinum-1000
08-02-2011, 07:44 | posts: 6,194 | Location: USA

I think i5 users and AMD users are against the 2600K, as another poster said.

On another note, I just ordered the 2600K an hour ago.
   
(#63) Exodite - Maha Guru
Videocard: Gigabyte 6950 2GB @ Stock
Processor: Intel i7 2600K @ 1.15V
Mainboard: ASUS P8P67 Pro B3
Memory: 8GB Kingston PC10600
Soundcard: Realtek 892
PSU: Seasonic SS-460FL
08-02-2011, 08:03 | posts: 1,652 | Location: Luleå, Sweden

Quote:
Originally Posted by BlackZero View Post
Please explain, do you mean because the cores are not fully loaded or are you saying even if they were fully loaded there would be no performance differencial?
The latter.

HT only provides an actual benefit to a subset of tasks, and I've yet to see any game qualify. And even then, that extra performance comes at the cost of reduced per-core performance, due to the resources being shared.

If one wanted to make sure whether or not HT had an impact on performance, one would have to run the same set of game benchmarks with HT on and off and compare the FPS, not merely assume there's a benefit because all virtual cores are being loaded.

In addition, I don't know how load on a virtual core is calculated, since a physical core is always under load as soon as either of its two virtual cores is.
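
A rough, hypothetical C++ sketch of that comparison, assuming you already have average-FPS numbers from identical benchmark runs with HT off and HT on (the numbers below are placeholders):

[code]
// Sketch: compare average FPS from the same benchmark run with HT off vs. HT on.
#include <cstdio>
#include <vector>
#include <numeric>

double mean(const std::vector<double>& v) {
    return std::accumulate(v.begin(), v.end(), 0.0) / v.size();
}

int main() {
    // Placeholder results; substitute real measurements from repeated runs.
    std::vector<double> ht_off = {118.0, 121.0, 119.5};
    std::vector<double> ht_on  = {117.0, 120.5, 118.0};

    double off = mean(ht_off), on = mean(ht_on);
    std::printf("HT off: %.1f fps, HT on: %.1f fps, delta: %+.2f%%\n",
                off, on, (on - off) / off * 100.0);
    return 0;
}
[/code]
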
Quote:
Originally Posted by TwL View Post
HT = crap, idiotic paper-launch marketing design.
That's only true if you don't use any applications that make use of it.

For me personally the 2600K was definitely worth it, though that likely isn't true for the general consumer. HT does have its uses, gaming just isn't one of them.

Last edited by Exodite; 08-02-2011 at 08:08.
   
(#64) deltatux - Ancient Guru
Videocard: XFX Radeon HD 6870
Processor: Intel Core i5 3570K @4.5
Mainboard: GIGABYTE GA-Z77X-UD5H
Memory: Patriot 4 x 4GB DDR3-1600
Soundcard: Auzentech X-Raider 7.1
PSU: OCZ ModXStream Pro 500W
08-03-2011, 01:48 | posts: 19,054 | Location: Toronto, Canada

Quote:
Originally Posted by Agent-A01 View Post
deltatux, I think you are going to be majorly disappointed when you see AMD 8-core Bulldozers NOT outperforming an i7 2600K with its 4 physical and 4 virtual cores.

As long as an application uses more than 4 threads effectively, there will always be a performance increase. Obviously, if it doesn't, it's going to be a waste on an HT CPU or even an 8-physical-core CPU. Be it or be it not a 30% gain, an improvement is an improvement. You can't say it's a waste, because there are games and many apps that use the so-called "fake cores".
All I've been saying is that the price difference for the 2600K isn't worth it for such application-dependent performance. Personally, I can't really comment on Bulldozer's performance, because on paper it looks great but I think there might not be real-world gains from the architecture. I know it'll work better than SMT, but will it work better than Sandy Bridge? I really don't know. I'm hoping it does, but it wouldn't surprise me if all it does is match it, since Bulldozer is essentially a quad core with 2 dedicated schedulers and integer units.

Quote:
Originally Posted by BlackZero View Post
All I see is you trying to justify why someone who can afford it shouldn't buy a 'better' CPU... [...]

PS: go check the Folding@home section; the 2600K scores more than 30% over the 2500K on average, is that theory as well?
We're talking about gaming; that has been the whole issue here. "2500K or 2600K for gaming?" is the topic, and that's why I've been adamant about framerates, while you've been either saying "Intel's engineers are smarter than you" or throwing 30% around everywhere when you won't see that across the board. Throwing in Folding@Home doesn't really help your argument either, as it's not a game... and if you're the OP and may be strapped for cash, which would you rather take, $100 off or 30% faster folding? Folding can't justify the cost, tbh, because it's not productive for the owner of the 2600K, only for the F@H project. Your GPU will fold way better than that 30% anyway. You can always spend that $100 on a better GPU rather than getting the 2600K.

Lastly, like I've been saying this whole time, the 30% increase is application dependent; you won't always see that 30%. Most of the time you see no performance increase, or sometimes a performance decrease. It can't justify the cost unless the OP is using applications that are known to take full advantage of HT.

deltatux
   
(#65) Sever - Ancient Guru
Videocard: Galaxy 3GB 660TI
Processor: i7 2600k - XSPC Raystorm
Mainboard: Asrock Z77 Extreme9
Memory: 16gb Corsair Vengeance
Soundcard: Asus Xonar D2X
PSU: Silverstone Gold 1200w
08-03-2011, 02:53 | posts: 4,826 | Location: Land of the Great Downunder

Quote:
Originally Posted by deltatux View Post
EDIT: Is it just me, or are 2600K owners the only ones backing the 2600K, solely based on the fact that it has SMT? lol.

deltatux
lol, I'm a 2600K owner that isn't really backing it because of HT. tbh, I don't really back it at all. For most users, a 2500K will perform more or less the same.

Quote:
Originally Posted by sykozis View Post
The performance gains from using HyperThreading are very well known and proven to be application dependent. Also, not all games use the "frostbite engine", so using that as an example is a rather extreme stretch to justify buying an HT enabled processor. In fact, I can only find 8 games that do use it, and 1 of those games only uses it for multiplayer. Games such as World of Warcraft actually show a performance loss from HT, as do many other games from my understanding.
The only apps I've really gained performance in with HT are 3DMark Vantage and 3DMark 11. In Vantage I gained about 50% on the CPU score. If you calculate backwards, it ends up meaning that when HT is running each logical core is roughly 30% slower, but overall the CPU is faster. But these are benchmarks, so it's really a bit of a useless gain for a gamer like me. In Handbrake there is very little to gain from HT even when all cores are stressed (maybe an improvement of 10-20 fps in encoding rate when you're already encoding at 300-400 fps), so I'd say there's no real benefit.
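
Spelling that backward calculation out with normalised numbers (a rough sketch; the exact figure comes out nearer 25% than 30% slower per logical core, but the shape of the argument is the same):

[code]
// Rough sketch of the backward calculation from a ~50% higher CPU score with HT on.
#include <cstdio>

int main() {
    const double score_4_threads = 1.0;          // normalised score, HT off (4 threads, 4 cores)
    const double score_8_threads = 1.5;          // ~50% higher with HT on (8 threads)
    const double per_physical = score_4_threads / 4.0;
    const double per_logical  = score_8_threads / 8.0;
    std::printf("each logical core delivers %.0f%% of a dedicated physical core\n",
                per_logical / per_physical * 100.0);    // prints 75, i.e. ~25% slower each
    return 0;
}
[/code]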

But with regards to games showing a performance loss from HT, that only really occurred with the first-gen i7s, because with those, if you switched on HT it would remain in permanent HT mode. Whereas with my second-gen i7, if my PC use doesn't demand more than 4 cores, it will just behave as if HT is switched off and run 4 full cores instead of 4 hyperthreaded cores. I can't really explain it too well, but this is just from my observations. The long story short is that anything that runs on fewer than 4 cores will no longer experience a notable performance loss from leaving HT on (there was a thread about this with actual solid framerates recorded).

And as for the other argument about cache, the 2MB doesn't really give the i7 an edge in much of anything. From my experimenting, if you turn HT on and run the i7 at the same clock speed as the i5, you get more or less the same performance. It's nothing gamebreaking. delta understands more about cache and SMT than I do, so I'll leave all the complex stuff to him.

But more or less, at the moment, there is little to gain from HT in gaming. Not enough games use the Frostbite engine for that to be relevant to a lot of gamers.
   
(#66) sykozis - Ancient Guru
Videocard: eVGA GTX660SC SLI
Processor: Core i7 2600K
Mainboard: ASRock Z77 Extreme4
Memory: 8gb G.Skill DDR3-1866
Soundcard: Creative Recon3D PCIe
PSU: SeaSonic M12II 620 Bronze
08-03-2011, 03:14 | posts: 16,063 | Location: US East Coast

Quote:
Originally Posted by Sever View Post
lol, I'm a 2600K owner that isn't really backing it because of HT. tbh, I don't really back it at all. For most users, a 2500K will perform more or less the same.

[...]

But with regards to games showing a performance loss from HT, that only really occurred with the first-gen i7s, because with those, if you switched on HT it would remain in permanent HT mode. Whereas with my second-gen i7, if my PC use doesn't demand more than 4 cores, it will just behave as if HT is switched off and run 4 full cores instead of 4 hyperthreaded cores. I can't really explain it too well, but this is just from my observations. The long story short is that anything that runs on fewer than 4 cores will no longer experience a notable performance loss from leaving HT on (there was a thread about this with actual solid framerates recorded).

[...]
The reason I use World of Warcraft as a reference is that it can actually run 8 threads. This is an observation I've made. As per Blizzard's devs and through my own personal experience, if WoW is permitted to run 8 threads, performance is actually worse than running only 4 threads on the 4 physical cores (100fps with 8 threads vs 120fps with 4 threads on the 4 physical cores, roughly a 20% difference). I have an attribute set in the config file for WoW to force it to run on the 4 physical cores, which reduces the performance loss from having HT enabled. Blizz decided with a previous patch that WoW should be able to make use of all available cores (including HT)... so the more cores available, the more threads it can/will run. If HT is enabled on an i7 2600K, it will incur the same performance penalty from HT as my i7 870 does, for the same reason.
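
For anyone wanting the same effect without touching game config files, the OS can enforce it: on Windows 7, "start /affinity 55 game.exe" launches a process restricted to logical processors 0, 2, 4 and 6, and the equivalent call from code is sketched below. The 0x55 mask is an assumption that HT siblings are enumerated adjacently (0/1, 2/3, ...), which is the usual layout on these chips but worth verifying with GetLogicalProcessorInformation first.

[code]
// Hypothetical sketch: restrict the current process to one logical processor per physical core.
// ASSUMPTION: logical processors 0,2,4,6 sit on different physical cores (typical HT layout).
#include <windows.h>
#include <cstdio>

int main() {
    const DWORD_PTR mask = 0x55;   // binary 01010101
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask))
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
    // ... the rest of the workload now runs only on the chosen logical processors ...
    return 0;
}
[/code]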
   
(#67) BlackZero - Ancient Guru
Videocard: MSI 7970 OC
Processor: 2600K H2O
Mainboard: Asus P67 Pro
Memory: G.Skill 2133
Soundcard: X-Fi + 2400ES
PSU: Corsair AX850
08-03-2011, 07:32 | posts: 8,109 | Location: United Kingdom

Interesting quote from hardocp:

"We have kept the first Lost Planet benchmark around simply because it is one of the best scaling gaming benchmarks in terms of threading. It does a tremendous job up to the 8-core mark and even beyond.

The one big thing that sticks out to me here is how the stock 2600K comes in almost neck-in-neck with the overclocked 2500K. "

http://www.hardocp.com/article/2011/...ssors_review/4



And I think a poster mentioned Handbrake; according to Hilbert, the 2600K performs exactly 32.6% faster than a 2500K.

http://www.guru3d.com/article/core-i...600k-review/15


@deltatux

Seeing how incoherent some of the arguments have gotten, especially considering that I've already answered almost everything regarding what I think about gaming on a 2600K and completely separated it from my views on applications, I don't see why I need to go around in circles when people can simply read what was said. Also, FYI, it would take 2-3 GTX 580s to score as highly as a 2600K running big WUs, not to mention the enormous increase in power usage, and I don't see where I related F@H to gaming; it was a reply to another poster's comments, but was clearly taken out of context.

Last edited by BlackZero; 08-03-2011 at 07:37.
   
(#68) Sever - Ancient Guru
Videocard: Galaxy 3GB 660TI
Processor: i7 2600k - XSPC Raystorm
Mainboard: Asrock Z77 Extreme9
Memory: 16gb Corsair Vengeance
Soundcard: Asus Xonar D2X
PSU: Silverstone Gold 1200w
08-03-2011, 18:07 | posts: 4,826 | Location: Land of the Great Downunder

Quote:
Originally Posted by BlackZero View Post
Interesting quote from hardocp:

"We have kept the first Lost Planet benchmark around simply because it is one of the best scaling gaming benchmarks in terms of threading. It does a tremendous job up to the 8-core mark and even beyond.

The one big thing that sticks out to me here is how the stock 2600K comes in almost neck-in-neck with the overclocked 2500K."

http://www.hardocp.com/article/2011/...ssors_review/4

And I think a poster mentioned Handbrake; according to Hilbert, the 2600K performs exactly 32.6% faster than a 2500K.

http://www.guru3d.com/article/core-i...600k-review/15

[...]
If you look carefully at the Lost Planet benchmark in reference to the comment you quoted... it's benched at 640x480. Sure, it's a valid argument that the i7 2600K is faster at 640x480... but given that the OP has a GTX 580, I highly doubt anyone is silly enough to buy a GTX 580 to game at 640x480, so I don't think the OP cares which is faster at 640x480.

http://www.anandtech.com/show/4083/t...2100-tested/20

This would be a more useful comparison, since it's at a resolution closer to what the OP is probably gaming at. Performance is similar.

I guess Handbrake depends on what kind of file you're converting, but for me, I haven't noticed any benefit in encoding from leaving HT on, so I don't see much of a benefit in choosing a 2600K over a 2500K.
   
(#69) BlackZero - Ancient Guru
Videocard: MSI 7970 OC
Processor: 2600K H2O
Mainboard: Asus P67 Pro
Memory: G.Skill 2133
Soundcard: X-Fi + 2400ES
PSU: Corsair AX850
08-03-2011, 18:26 | posts: 8,109 | Location: United Kingdom

Quote:
Originally Posted by Sever View Post
If you look carefully at the Lost Planet benchmark in reference to the comment you quoted... it's benched at 640x480. Sure, it's a valid argument that the i7 2600K is faster at 640x480... but given that the OP has a GTX 580, I highly doubt anyone is silly enough to buy a GTX 580 to game at 640x480, so I don't think the OP cares which is faster at 640x480.

[...]
Before pulling out the old '640x480 is too low a resolution' argument, perhaps it would be wise to consider that buying a processor is a 2-3 year investment for most people. Comparing one processor to another at a lower resolution not only demonstrates the actual difference between the two without the graphics card interfering, but also, and more importantly, indicates the differences that can be expected with newer, more powerful graphics cards which are yet to be released; after all, people upgrade graphics cards a lot more often than their CPU.
   
(#70) Xtreme1979 - Maha Guru
Videocard: EVGA GTX 680 2GB O/C
Processor: 2600K 4.4-4.7gHZ 1.30v
Mainboard: MSI P67A-C43 B3
Memory: DDR3 Ripjaws Z 2133 4x4gb
Soundcard: X-Fi/Klipsch ProMedia 2.1
PSU: SeaSonic X650 Gold
08-03-2011, 18:40 | posts: 1,256 | Location: Bay City, MI

Quote:
Originally Posted by Sever View Post
If you look carefully at the Lost Planet benchmark in reference to the comment you quoted... it's benched at 640x480. Sure, it's a valid argument that the i7 2600K is faster at 640x480... but given that the OP has a GTX 580, I highly doubt anyone is silly enough to buy a GTX 580 to game at 640x480, so I don't think the OP cares which is faster at 640x480.

[...]
Am I the only one who reasons that if CPU A performs better than CPU B at a CPU-bound low resolution (that's why you bench CPUs at low res, to take the GPU out of the equation), it will continue to perform better than CPU B as games mature and become more demanding of the CPU, regardless of resolution? It's not rocket science! I am tired of people throwing out CPU benchmarks because, uhh duhh, no one games at that resolution. It's not about current titles, it's about the future of gaming moving forward, and which processor will be faster when needed. /Rant OFF

P.S. Well said, BlackZero, your post was quicker than mine.

Last edited by Xtreme1979; 08-03-2011 at 18:45.
   
(#71) ---TK--- - Ancient Guru
Videocard: 780Ti SLI/Qnix 2710 100Hz
Processor: 2600k 4.5Ghz HT On
Mainboard: Asus P8P67 Deluxe
Memory: RipJaws X 2x8GB 2133Mhz
Soundcard: Phoebus + DT880 Pro 250
PSU: Corsair AX 1200
08-03-2011, 18:53 | posts: 17,825 | Location: New Jersey, USA

This argument is pointless. If you want a 2500K, buy one. If you want a 2600K, buy one. I happen to own both. Tbh the 2600K is only better for my benching. What I am seeing here is 2600K bashing by 2500K owners. Wannabe system builder advisors. Etc. Enjoy your chip, whichever one you have, and remember buy INTEL

Last edited by ---TK---; 08-03-2011 at 19:35.
   
(#72) Xtreme1979 - Maha Guru
Videocard: EVGA GTX 680 2GB O/C
Processor: 2600K 4.4-4.7gHZ 1.30v
Mainboard: MSI P67A-C43 B3
Memory: DDR3 Ripjaws Z 2133 4x4gb
Soundcard: X-Fi/Klipsch ProMedia 2.1
PSU: SeaSonic X650 Gold
08-03-2011, 19:39 | posts: 1,256 | Location: Bay City, MI

Quote:
Originally Posted by tommyk2005 View Post
and remember buy INTEL
Oh boy! You've gone and done it now. Hears distant AMD battle cries.
   
(#73) BlackZero - Ancient Guru
Videocard: MSI 7970 OC
Processor: 2600K H2O
Mainboard: Asus P67 Pro
Memory: G.Skill 2133
Soundcard: X-Fi + 2400ES
PSU: Corsair AX850
08-03-2011, 19:51 | posts: 8,109 | Location: United Kingdom

Quote:
Originally Posted by tommyk2005 View Post
and remember buy INTEL

Lol, yeah, that sums it up nicely.
   
(#74) ---TK--- - Ancient Guru
Videocard: 780Ti SLI/Qnix 2710 100Hz
Processor: 2600k 4.5Ghz HT On
Mainboard: Asus P8P67 Deluxe
Memory: RipJaws X 2x8GB 2133Mhz
Soundcard: Phoebus + DT880 Pro 250
PSU: Corsair AX 1200
08-03-2011, 20:21 | posts: 17,825 | Location: New Jersey, USA

No need for Intel owner infighting, BZ, remember who the enemy is...
   
(#75) BlackZero - Ancient Guru
Videocard: MSI 7970 OC
Processor: 2600K H2O
Mainboard: Asus P67 Pro
Memory: G.Skill 2133
Soundcard: X-Fi + 2400ES
PSU: Corsair AX850
08-03-2011, 20:29 | posts: 8,109 | Location: United Kingdom

I hear ya loud and clear, TK.
   