Discussion in 'Frontpage news' started by Hilbert Hagedoorn, May 27, 2019.
Except it isn't.
Nvidia: Mild performance improvements, first generation RTX, high prices
AMD: Mild performance improvements and better efficiency
Personally, I'll be waiting for the next generation from Nvidia or AMD. Nothing this generation is all that enticing to me, playing at 2K.
First off, I was talking about gaming. Also, the GPUs more powerful than the 2080 Ti are for non-gaming workloads and are not limited by PCIe 3.0 with 16 lanes. Sure, there could be edge cases; I'll accept a good source showing real-world cases where a GPU is limited by PCIe 3.0 x16.
Since they are revisiting the 5000 series of numbers again, are we also going to get a more modern Batmobile-like shroud too?
AMD confirmed a 225W TDP for the higher SKU. That nearly catches it up to the 2070's 175W (195W measured), but that's 12nm vs 7nm. Pretty crazy how efficient Nvidia's architecture is, but it's nice to see AMD closing the gap.
Edit: I lied, AMD didn't confirm it; it's a rumor, and it's total board power, not TDP.
"According to our sources, the AMD Radeon RX 5000 series is said to feature two variants, a 180W TDP model with 225W TBP (Total Board Power) and a 150W TDP model with a 180W TBP. AMD showed a demo of their Radeon RX 5700 graphics card against the GeForce RTX 2070 which itself is a 180W TDP graphics card."
Could you provide your source?
I cannot because I was incorrect in my reporting. I updated my post to reflect that. Sorry =(
Let's try to provide facts when available.
Do hope the TDP isn't too much above my 480, while performance is ~2070.
$499 wouldn't make much sense to me, though...
From what I gather, the Navi 5700 (or whatever it ends up being called) seems to be Vega 64 in the performance sense, but at a 150W TDP.
Actually, it could then make sense for there to be a 5800 at a 180W TDP. I hope this is the case lol
You may have missed AMD saying that the 1st generation of their new architecture will be designed for maximum backwards compatibility.
This means for existing games, and therefore they can reuse most of the driver code. That's a nice thing, as they get to do code optimizations later.
I went through the same driver-code changes Phoronix did and I came to a different conclusion than they did, on the same day they declared Navi 100% GCN.
Nothing over the last 4 years pointed towards Navi being GCN. Only people with short memories who saw "Next-Gen" after Navi on the roadmap concluded that it means Navi is still GCN, but AMD always used that label for anything that did not have an officially announced name.
It is funny to imagine MS and Sony sitting in a room with AMD, and AMD making them a proposal along the lines of: "We can give you Navi, last of GCN, and then we will shift our architecture away, giving developers trouble with optimizing for one more architecture for all multi-platform games."
It's not like the previous iterations of the consoles used the very latest tech, iirc.
It will be custom silicon, but on proven tech generally.
The added value will be in the custom bits, I expect.
There is the CPU and there is the GPU. Do you understand the fundamental differences in how those two execute code?
With a CPU, you have a quite slowly evolving instruction set, and the main difference between CPU architectures is how many instructions you execute each cycle.
Consoles used Cat cores, which had lower IPC but still a sufficient instruction set, making up total performance via core count.
GPUs, on the other hand, have what you can call a pipeline: stages that image data has to pass through. These are handled differently in each architecture, and each has different throughput. Optimize a game for a PC Navi with 5 times the geometry throughput of a 1st-gen GCN console and you are in for an ugly surprise, needing to redesign everything for the console release.
The easiest way to understand this is to compare VLIW5 and VLIW4: VLIW4 lost a lot of per-CU performance in older games whose more complex shader code had been optimized to maximize use of VLIW5.
Then consoles moved to GCN GPUs. Imagine they had instead used VLIW5-based GPUs (your "proven tech") for all those years AMD pushed GCN on the desktop.
That would have made quite a difference in achievable image quality and performance; consoles would have suffered.
Getting GCN at its last iteration would be the same situation, since the 1st Navi will be a technological jump similar to going from the last Excavator to Zen.
See those 3 things for simplification:
geometry => shading => rasterization
Now imagine that the last GCN has a normalized performance of 1 in each category for a GPU with 8 billion transistors:
1 => 1 => 1
Then you have Navi, which has a different performance multiplier relative to this normalized 1 on an 8-billion-transistor GPU:
1.75 => 0.85 => 1.35
When you make a game primarily for the console and port it to PC, you'll have to slightly reduce shader demand and will waste a lot of geometry throughput and some rasterization.
Conversely, a game optimized for PC would need drastically reduced geometry demands for the console, and reduced rasterization requirements too.
(Who would like to develop games around such differences?)
You can see the best- and worst-case scenarios for AMD and nVidia between the Radeon VII and RTX 2080. The total difference there is about 35% in fps, depending on which architecture a game was optimized for. And one can say AMD played generational catch-up there, depending on what nVidia improved and pushed into games. But consoles would already start in a bad place, while on the desktop this race would continue.
So, would it not be better to build your gaming console on an architecture that starts at:
1 => 1 => 1
relative to the desktop, to level the playing field at least at launch?
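The bottleneck reasoning above can be sketched in a few lines of Python. The numbers are the post's normalized multipliers; everything here is illustrative, not measured GPU data:

```python
# A frame is limited by its slowest pipeline stage:
# rate = min over stages of (stage throughput / per-frame demand).
def effective_rate(stage_throughput, workload_demand):
    return min(t / d for t, d in zip(stage_throughput, workload_demand))

last_gcn = [1.0, 1.0, 1.0]     # geometry, shading, rasterization (normalized)
navi     = [1.75, 0.85, 1.35]  # multipliers quoted in the post

# A game tuned to exactly saturate Navi's stages...
navi_tuned = [1.75, 0.85, 1.35]

print(effective_rate(navi, navi_tuned))      # 1.0: perfectly balanced on Navi
print(effective_rate(last_gcn, navi_tuned))  # ~0.571: geometry-bound on GCN
```

The second result is the "ugly surprise": the same workload runs at barely half speed on the other architecture because one stage (geometry, here) becomes the bottleneck.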
It will be interesting to see Navi's performance. In anticipation of the upcoming new GPUs, some review sites redid their review tests in May 2019.
Based on the retesting done by two sites, ~10% faster than a 2070 would put Navi on par with or better than the Radeon VII in relative performance across the games tested. Hopefully other review sites will retest existing benchmarks to avoid outliers when comparing results across reviews.
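For what it's worth, "relative performance" figures like "~10% faster than a 2070" are typically derived by normalizing each game's fps to a baseline card and taking the geometric mean, so no single outlier title dominates. A minimal sketch (game names and fps numbers below are made up):

```python
from math import prod  # Python 3.8+

def relative_perf(card_fps, baseline_fps):
    # Per-game ratio vs the baseline card, then the geometric mean.
    ratios = [c / b for c, b in zip(card_fps, baseline_fps)]
    return prod(ratios) ** (1.0 / len(ratios))

rtx2070 = [60.0, 100.0, 45.0]  # fps in three games (fabricated)
navi    = [66.0, 112.0, 48.0]

print(relative_perf(navi, rtx2070))  # ~1.095, i.e. roughly 10% faster
```

This is also why retesting matters: the geometric mean only compares fairly when every card was measured on the same game list with the same drivers.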
That's quite a reply.
I was replying to this part of your post : "We can give you Navi, last of GCN and then we will shift our architecture away, giving developers trouble with optimizing for one more architecture for all multi-platform games."
And they'll take Navi, get custom bits added to it, and do what they want with the bits they can.
Easy and ready to program for, and backwards compatible with existing consoles.
If Navi is still largely a GCN rehash, that's still proven tech.
Puma was proven tech, and tweaked by the consoles.
And in a few years or so, they'll pick the next proven tech to go with.
Backwards compatibility might be a different idea then.
If I see this right, it's a V64 and V56 launch at a cheaper price and lower power.
From all we have seen so far, it's a card that runs about as fast as a V64.
So do tell: if that's the case, where do Vega users upgrade to?
Just saying, not trying to be a dick.
These Navi cards are mid-range mainstream cards. Big Navi is next year.
What big Navi? There will be one more card beside the 5700, and that's the 5800. The rest of the lineup will be rebranded or something to fit below, while the RVII stays at the top.
A lot of people here seem to be talking out of their ***. I watched a lot of videos on YouTube about AMD at Computex, and pretty much everyone says it's a new architecture. How new it is, and how much it borrows from GCN, remains to be seen, but it certainly doesn't look like a rebrand the way the Radeon VII was. Beyond that, not much is known outside of the charts AMD provided, and those are never really reliable.
The Radeon VII wasn't a rebrand. It's the Radeon Instinct deep-learning GPU reworked for gaming.
While it's been confirmed that Navi is GCN-based, Navi itself must be something substantial to already have board partner support. I see it replacing the original Vega, since it will be cheaper to manufacture thanks to not using HBM, with lower power draw as well.
This is pretty much a rebrand. Not quite, but you know what I mean. It's not really a new card.
Confirmed by whom? People reading some Linux driver code on git last spring? Because that's the only thing I can find, and honestly I would not call that confirmed. I might be mistaken, but realistically the only thing it proves is that it uses the same base instruction set, and that doesn't mean much; CPUs have been x86 and x86-64 for ages.
Again, I might be mistaken, but it doesn't really prove the chip will be a refinement of Vega retaining all of its problems, which is what a lot of people imply when they say it's GCN. It's supposed to be a new architecture based on the same instruction set. That's what I read, anyway.
I've not been following AMD for ages; I've been on Intel/nVidia for the last 15 years. But since when is an instruction set such a big deal? Architecture matters a lot more; you can improve a LOT while using the same instruction set. I always thought GCN was an architecture as well as an instruction set. If it's just an instruction set, then it's probably not a big deal at all.
From everything I've read, it's a brand-new architecture. I don't care much about the instruction set personally.