Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Dec 28, 2019.
Intel is a dead horse.
Tell your friend to get his sh** together and check again because there's probably something wrong with his tests
That actually shocks me - but I won't argue that it's wrong (I've seen some of this occasionally in benchmarks).
Maybe once they've got that much invested you'd think they'd go 4K, so they could actually put the GPU to use? For it to be that CPU bound (likely a DX11 title limited on draw calls), it sounds like the game developer's modeler(s) didn't connect all the UVs they could have and did a slop-job. Some games are losing 10~12 fps between the two processors, but it's not every game. In a perfect world, the CPU would only have to do AI, physics, and mild servings of draw calls (fed to the GPU by the CPU to render UVs, or textured faces on 3D models, one texture at a time, one model at a time).
i9 9900K-anything + 2070 vs. R7 3700X + 2080 = roughly the same cash investment*, close to the same fps on average. I would say trashed is a strong word for that - not necessarily the wrong word, though. I'd describe it more as "it all works out in the end if you don't fully use the GPU in some titles but do in others". As in my case, the build and the use of it differ by person; it just depends how good they are at judging needs and what the budget is.
I actually almost purchased a 2080, but the 2070 Super was so close in performance versus the price jump to the 2080. It really depends how the game engine is optimized and programmed: how many state changes between frames, how much AI and physics are going on, whether the game is script-heavy, and whether it's DX11 (or lower) or DX12/Vulkan (less likely to be CPU bound thanks to multi-threaded draw-call feeds, which REALLY help open-world scene creation).
*Note: does not take into account the following:
Differences in TDP or cooling requirements,
case upgrades required for additional airflow or radiator fitment,
RAM speed increases accruing additional cost - though in optimal scenarios both machines would have 3600 MHz CL14~CL16 RAM, etc.
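The DX11-vs-DX12/Vulkan draw-call point above can be sketched with some back-of-envelope numbers. Everything here is hypothetical, purely to show the scaling; real per-call driver overhead varies a lot by engine and driver:

```python
# Back-of-envelope sketch of why single-threaded (DX11-style) draw-call
# submission can cap the frame rate. All numbers are hypothetical,
# chosen only to illustrate the scaling.

def cpu_frame_ms(draw_calls, us_per_call, submit_threads=1):
    """CPU milliseconds per frame spent just issuing draw calls."""
    return draw_calls * us_per_call / 1000 / submit_threads

# Hypothetical open-world scene: 3000 draw calls, ~10 µs of CPU/driver
# overhead each.
dx11 = cpu_frame_ms(3000, 10)                    # one submission thread
dx12 = cpu_frame_ms(3000, 10, submit_threads=4)  # command lists on 4 threads

print(f"DX11-style: {dx11:.1f} ms/frame -> ~{1000 / dx11:.0f} fps cap")
print(f"DX12-style: {dx12:.1f} ms/frame -> ~{1000 / dx12:.0f} fps cap")
```

With those made-up figures the single-threaded path caps out around 33 fps while four submission threads push the cap past 130 fps - which is the whole appeal of multi-threaded command recording for open-world scenes.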
I'd reckon nobody lost out on either of those machines; one just poured more into the GPU and one poured more into the CPU. If the CPU holds back your GPU, ask more from it by increasing the resolution. Conversely if the GPU is holding things back, turn down / turn off a setting or two.
That being said, there's always a market for 'the fastest game machine' as there is a market for 'the rendering powerhouse' or 'well priced content creation configuration' as I use here.
One thing's for sure: my days of overclocking to get an additional 10% out of the machine, spending $100+ on cooling and an extra $50~100 on a fancier motherboard, are over - but this doesn't hold true for everyone.
Whoever mentioned the D-stepping 920 OCs above is entirely right. That, the Wolfdale Pentium dual-cores (late Socket 775 budget processors), and the Westmere (?) Xeon 5xxx series (much of it) were some SCREAMING overclockers back in the day, 10 years ago. Miss those days, but not enough to re-live them.
I'll wait till Intel are on 10 nm and hopefully have a decent 8 or 10 core CPU.
I will buy when I see 6 GHz clock speed and 20 cores / 40 threads. It has to turbo on all cores, not just one. If I wanted a dual core, I'd buy the Intel G3258!
Hope you guys know I'm just messing around. Those are some impressive clock speeds; I'd like to see AMD match them on that field, possibly.
It's a good summary of the situation.
Anyway, one day it's Intel, the other day it's AMD, etc. etc.
It's been like that since the AMD 5x86 (1995/6, lol).
That was related to the 4GB version, due to the test exceeding it. The 8GB version had practically double the performance, even on PCIe 3.0 x8.
Since the user would not see a difference between those maximum and high textures at the resolution used (1080p), he would reduce them and be OK even on the 4GB version.
Not that I blame AMD for making a low-end card with only PCIe x8, or Intel for not bringing PCIe 4.0. Each has their own reasons.
But it is nice that we finally have a good example of 4GB of VRAM not being enough, and of the effect of PCIe bandwidth in such a situation.
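For context, the theoretical link bandwidths behind that x8-vs-x16 difference work out roughly like this (one-direction maxima with 128b/130b encoding; real throughput is lower):

```python
# Rough theoretical PCIe link bandwidth, to put the x8 card in context.
# PCIe 3.0 and 4.0 both use 128b/130b encoding; these are one-direction
# theoretical maxima, not measured throughput.

def pcie_gb_s(gt_per_s, lanes):
    """Theoretical bandwidth in GB/s for a link (128b/130b encoding)."""
    return gt_per_s * lanes * (128 / 130) / 8

print(f"PCIe 3.0 x8 : {pcie_gb_s(8, 8):.2f} GB/s")   # the x8 card on a 3.0 board
print(f"PCIe 3.0 x16: {pcie_gb_s(8, 16):.2f} GB/s")
print(f"PCIe 4.0 x8 : {pcie_gb_s(16, 8):.2f} GB/s")  # same card on a 4.0 board
```

Which is why an x8 card loses little on a PCIe 4.0 platform - gen 4 x8 matches gen 3 x16 - but gets pinched once spillover traffic over the bus matters, as in the 4GB VRAM case above.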
I've said it many times before: AMD's production capacity would not make a sufficient dent in Intel's sales even if AMD immediately sold every CPU they make.
And AMD's chips are available worldwide in good quantities.
Intel will have reason to be afraid when AMD makes the same amount of CPUs as today or more, but their chips are out of stock. Because that will signal an unknown level of demand for AMD - and potentially a loss of sales for Intel of dozens of percent, due to people waiting for AMD's chips instead.
5.3 GHz on a single core, under 50°C and under max TDP... for how many milliseconds could this happen, on a high-end cooling system?
I think Intel's marketing team has already been playing this game, judging from the cluster fk that is the 10th Gen naming scheme. It's a total mess. You have i3s with and without hyper-threading, i5s with and without hyper-threading, i7s with and without hyper-threading, all with varying core counts; the naming schemes are all over the place. It's like it was created by 3-year-olds.
nah, more like someone pulled the rug out from under them and they're still picking up the mess
not above 1080p.
and above 120 fps will only make a difference for (online) PvP (not interested), nor do i see any difference in performance for my 2080.
when compared in 3DMark with a 9900K + 2080, I'm getting into the top 30 (out of the 220+ listed).
so my 2080 works as a 2080, even with an R5 3600.
Problem here isn't the marketing team, they work with what they are given. Instead the issue is at the core - their development team and the decision makers.
I am very curious to see how Intel will manage this. With the market share they have, AMD hurts them, but barely. The only thing which can change that is if, for another 1-2 years, AMD continues to hurt them while offering a far better product in not only the consumer market, but especially OEM and enterprise. That is where Intel gets most of its income from. Epyc right now is a fantastic product, and whatever comes after it will most likely continue the trend.
Admittedly, it is impressive what they have managed to squeeze out of their 14 nm process, but it is at its limit. Hence why they try their best to pad out the product stack with basically the same product. Hilarious, but here they are.
sure, because you're doing a synthetic GPU-dedicated test; in real-world games your RTX 2080 performs like an RTX 2070
AMD: improving and gaining steadily.
Intel: let's OC further and sell it as new again.
Where do you draw this information from, and in which domain are we talking?
In the home or small office, the general information I found was that the performance dial often leaned in AMD's favour, due to their processors often having more cores.
Intel EPT is just a copy of AMD NPT/RVI for SLAT. AMD supports interrupt virtualization as well (even on Windows, though KB4490481 came a little too late). You are just spreading FUD.
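For anyone who wants to check which SLAT flavor their own CPU exposes: on Linux it shows up in the /proc/cpuinfo flags line as "ept" (Intel) or "npt" (AMD). A minimal sketch - the flag strings below are abbreviated examples I made up, not full dumps:

```python
# Intel advertises SLAT as "ept", AMD as "npt", in the Linux
# /proc/cpuinfo flags line. The sample strings are abbreviated
# illustrations, not real full flag dumps.

def slat_flags(cpuinfo_flags):
    """Return the SLAT-related flags present in a cpuinfo flags line."""
    flags = set(cpuinfo_flags.split())
    return sorted(f for f in ("ept", "npt") if f in flags)

intel_example = "fpu vme sse2 vmx ept vpid"      # abbreviated Intel-style
amd_example = "fpu vme sse2 svm npt nrip_save"   # abbreviated AMD-style

print(slat_flags(intel_example))  # ['ept']
print(slat_flags(amd_example))    # ['npt']
```

Either flag means hardware second-level address translation is there; the hypervisor (VMware, VirtualBox, KVM...) just names and uses it slightly differently per vendor.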
try doing any test even on a simple virtualization platform like VMware or VirtualBox, and I want to see you say that again
because it takes away the impact of different hardware (RAM etc.)/settings,
nor is it easy to recreate the same scene in an actual game,
so you don't see fps fluctuations just because of an additional explosion that wasn't there in a different run.
but hey, Siege i guess isn't a real game (@1440p with vsync i get 75 Hz (screen max) with maxed settings incl. TAA x4, and with Fast Sync i get a steady 120+ fps).
nor have i seen any review on Guru that shows more than a 10 fps difference (if that) when going past 1080p on AMD vs Intel,
and most of the time the fps is already past what's needed.
at least the ppl i know that own anything above an NV xx70 (and a supporting ecosystem)
play shooters at 1440p and up while doing 120/144 on Ryzen without problems.
and that includes a few that either make a 6-digit income or have no problem spending whatever they want on hw,
and they all could have gotten the 9900, yet no one did.
because they realised they won't pay Intel 30-50% more for the big gains of 5-10 fps (at lower res).
When you compensate with more cores, AMD has the advantage, of course
which is a given when looking at the same price point; not AMD's fault.