Discussion in 'Frontpage news' started by Hilbert Hagedoorn, May 1, 2019.
Sweet......my RX 590 can go to a secondary system, and my 390X to a tertiary system.
I'm glad I'm not immediately looking to upgrade to 4K because this wait is becoming a bit tedious. I'm glad to know the price point ought to be better. Although I'm 75% sure I'm going to go with an AMD GPU next, I just hope they can make 4K@60Hz gaming more affordable.
Not sure where you're getting your intel from (Intel?) but I don't really understand the mentality people have of GCN's age having anything to do with its performance. x86 is ancient in comparison and yet it has constantly evolved to meet modern demands. It has come in a wide variety of flavors, despite retaining fundamental compatibility, which is why there has often been such a drastic difference in IPC among models throughout the years.
GCN is no different. It has constantly been tweaked to handle things differently, but I'm confident that all of the GCN GPUs are "binary compatible", at least to some degree. There's plenty they're doing right with the GCN architecture, it's just a lot of that doesn't translate very well into gaming.
For the record, Nvidia doesn't really do things a whole lot differently.
X86 is an instruction set, while GCN is an actual core. GCN has run its course imo, it works but looking at Vega and Polaris it's very power hungry.
I'm not sure what you're getting at. GPUs are nothing more than glorified highly-parallelized CPUs with their own instruction set. The AIBs are basically complete computers in and of themselves, except they don't have any direct user input controls.
Also, Vega is actually more efficient than some Nvidia counterparts for certain compute workloads, like OpenCL tasks. The architecture is fine, it's just poorly optimized for gaming.
There is a difference between an instruction set and the underlying hardware used to compute it. x86 is an instruction set: on modern processors it's decoded into an optimized internal format before it ever hits the hardware doing the computing. GCN is the hardware doing the computing (obviously not with x86), and that hardware hasn't really changed much in several generations. Just because GPUs are similar to CPUs doesn't invalidate the hierarchy of components underneath.
When people say "GCN has run its course," they mean that the way AMD has done its computation for the past few generations, even counting the minor changes, hasn't really changed much; we need to see more significant changes. Whether AMD makes those changes and keeps calling it GCN is a different story, but thus far we're operating on the basis that this is just more minor changes from previous iterations.
This power hungriness depends on your point of view, since it's not the only way to look at a GPU.
Let's take the RX 580 as an example. Yes, it has quite a few more transistors than the GTX 980, and an even bigger gap to the GTX 1060, while all three cards perform about the same.
One could say that Polaris needs those transistors to deliver a certain level of gaming performance. But that would be a false assumption.
The RX 580 has 1:1 FP16-to-FP32 performance, while the GTX 980 doesn't support FP16 at all and the GTX 1060 enables it only at a 1:64 rate.
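Quick back-of-the-napkin math on what those rates mean for theoretical throughput. This is just a sketch using public spec-sheet figures (shader counts and approximate boost clocks) and the standard 2 FLOPs per shader per clock for fused multiply-add; real-world performance obviously differs:

```python
def peak_tflops(shaders, clock_ghz, fp16_rate=1.0):
    """Theoretical peak: shaders * clock * 2 (FMA = 2 FLOPs), FP16 scaled by its rate."""
    fp32 = shaders * clock_ghz * 2 / 1000  # TFLOPS
    return fp32, fp32 * fp16_rate

# RX 580: 2304 shaders @ ~1.34 GHz boost, FP16 at the full 1:1 rate
rx580_fp32, rx580_fp16 = peak_tflops(2304, 1.34, fp16_rate=1.0)

# GTX 1060: 1280 shaders @ ~1.71 GHz boost, FP16 throttled to 1:64
gtx1060_fp32, gtx1060_fp16 = peak_tflops(1280, 1.71, fp16_rate=1 / 64)

print(f"RX 580:   {rx580_fp32:.1f} TFLOPS FP32, {rx580_fp16:.1f} TFLOPS FP16")
print(f"GTX 1060: {gtx1060_fp32:.1f} TFLOPS FP32, {gtx1060_fp16:.2f} TFLOPS FP16")
```

The two cards land in the same ~4-6 TFLOPS ballpark for FP32, but the 1:64 rate leaves the GTX 1060 with a tiny fraction of the RX 580's FP16 throughput, which is the point about where Polaris spends its transistors.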
Then there's the Radeon VII, with computational power matching the RTX 2080 Ti, which not only costs more but has 40.6% more transistors. And the power draw difference is only 17% in the RTX 2080 Ti's favor. That's not much, especially if one tweaks the Radeon VII. (Though I have to admit I haven't seen a power draw comparison of those two cards under different compute loads, and power draw there differs from gaming.)
This shows that AMD's beefy, hungry compute was just a step ahead of Nvidia, which is now in the same spot with beefy GPUs.
I understand all of that. That wasn't my point. My point is that just because an architecture is old, that doesn't mean it can't be optimized to meet modern demands. Now, whether AMD is actually doing that is a different thing altogether. So far, they haven't really done much to optimize GCN. From GCN 1.0 to Radeon VII, there aren't substantial IPC differences. There are many other differences, but not so much in something like DirectX rendering. None of that says it can't be done, though.
But, going back to my original statement to Evildead666, we don't yet have any substantial info on whether AMD has actually optimized the architecture in Navi (ideally, for gaming purposes). So, to say that GCN has run its course or is showing its age is a moot point, for the same reason you can't say that about x86 or really any other architecture. Of course, there are some exceptions to this, such as MIPS, which is meant to be kept simple. And of course, some architectures would require too much restructuring to be worth the investment, which is probably why AMD ditched TeraScale, why Intel ditched Itanium, and why Nvidia ditched Tesla (the architecture, not the GPU models).
Navi and Vega have been in Dev for quite some time, yes.
It could be that Navi is based on Polaris, yes.
It could be a big tweak of it.
I hear Vega 64 +15%; I also believe that will be almost entirely clocks. That's just me.
This is the last of this generation for AMD. I look at it as Ryzen to Bulldozer, as Navi to Vega/Polaris.
Don't read too much into what I said, as I didn't say much.
If they follow their last trends on power consumption, the highest performing cards will be just over the sweet spot, very close to the absolute limits allowed by the silicon.
Going chiplets means they have a much different goal for the whole GPU chiplet. Fewer I/O worries, I suppose.
They might keep a lot of the actual Shaders as they are, just beefed up. Who am I to know.
All I'm saying is that Navi isn't the second coming; that's the next one (supposedly).
With Fish and Chiplets.
This was being said about Navi, when Vega came out.
I think AMD has evolved GCN a huge amount, especially on efficiency; that was one of the main goals of Polaris, maybe even from Fury onward.
I think their main problem was that Nvidia's cards were just so good, AMD had to clock theirs too high just to stay in the game. I think they were planning to go with lower clocks and lower voltages, but that didn't work out in the end.
If there really is a major 4096-shader limit in GCN, I think patching it would be worse than doing a full new design. I believe Navi is the last of this gen.
I won't be upgrading until I've seen Ryzen 3000 benches anyway, and not the GPU until a real 2x Vega 64 (rendering-power-wise) or more comes from AMD.
Just saying, Navi probably isn't going to change the world in a generational way.
There are three or four gens going on at the same time, in different stages of dev.
Information can get misinterpreted, or just be plain wrong, or seeded bad news.
It changes as time goes on.
Navi will have changes, some maybe quite fundamental. It's not going to be a 2080 Ti killer though, is it?
I'm not saying it won't have changes, and I hope it does well. I'm simply stating that before Vega came out, I remember hearing about "Navi" and how it was going to change everything. Then Vega came out, and I remember hearing again: "Wait for Navi." Now Navi is about to be released, and I'm hearing "Navi won't be a game changer, it's what comes after Navi."
If this trend continues, it'll always be whatever "comes next" to wait for.
In computers, it's always what comes next. Well, as soon as you find out enough about what's coming now-ish.
True. I guess my point is that AMD has been doing low-end to mainstream GPUs for a while now, and for many years I've constantly heard people saying "Just wait till Navi, then they'll be competing at the high end again!" and now I'm hearing "Wait till after Navi, then they'll be competing at the high end again!"
It's saddening, because I want AMD to compete at the high end, now.....lol
If Navi is priced lower, it will be in the general tradition of launching a high-end product at top dollar first (in their case the Radeon VII) and then, of course, cheaper lower-performance products. AMD has not revealed anything, but by this logic Navi will be low end compared to the Radeon VII. Everybody was under the impression that all is rosy for AMD, but after they published first-quarter earnings there was a big surprise: the company missed profit estimates. So I think they will play it safe with Navi, in their well-known traditional waiting game: "Wait for the Fury X! Wait for Polaris! Wait for Vega! Wait for..." Waiting isn't much fun, AMD! I paid the leather-jacket man a lot of money because of your waiting game. I honestly think the graphics card market will still be dominated by Nvidia at least until AMD's next architecture (wait for it...). This "7nm" architecture of theirs is flawed anyway: the Radeon VII is hotter and more power hungry than the 12nm RTX 2080...
Navi is meant to make all these GTX Turing cards obsolete (they didn't last long, did they). AMD can hold everything but the super high end; Nvidia can hold that for those who are loaded and green at heart.
Rumored performance is up to 2070/2080 levels for much less, and some of you are still complaining.
Yes but that's 7nm Navi vs 12nm Turing.
What happens when Nvidia releases 7nm next-gen cards as well? I mean, one of the reasons the Turing cards are so expensive is that they're so large on their 12nm process.
If it's literally a 7nm Turing, no architecture change, performance goes up, wattage goes down, where does Navi stand?
Obviously we don't know how Navi will perform or at what price points, but IF AMD makes the GTX Turing cards "obsolete", as you say, I don't see that lasting long. You can bet Nvidia is already preparing their 7nm GPUs for a release date of who knows when; they may even be delaying it, since they don't have to release it currently, and as they stated earlier, they're trying to get rid of excess inventory left over from the crypto market.
My bet is: Navi matches or beats Nvidia's current offerings at $300-400 and below, and within 6 months of Navi's release they will no longer hold that crown.
As much as I want AMD to succeed and bring real competition to Nvidia, if they're only going to match Nvidia's 12nm performance with 7nm, they're making it too easy for Nvidia to react.
We don't know anything about 7nm Nvidia cards; they could be a year away for all we know. Maybe they're waiting for Navi and the upcoming Intel GPU to actually show us something.
Btw, Nvidia's 12nm GTX Turing is competing with AMD's 14nm Polaris right now, and besides lower power consumption it doesn't look that impressive.
I give Nvidia till September before they get 7nm Turings out.