Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jun 3, 2015.
I think AMD are quietly confident.
My thoughts exactly, with some high hopes that it will rock in 4K gaming. But we shall see; after all, E3 is just around the corner.
:eyes: I meant they NEEDED water cooling to get acceptable temperatures. No company is going to make a card with water cooling for the "rice factor."
hahahahhaa made my day :wanker:
I'm very excited about this card, finally something new. Eager to see its price.
Or AMD might have a 22nm option not open to NV, with only small chips on 14/16nm some time in 2016.
Maybe an AMD 22nm big chip could be possible in 2016 while a big 14/16nm chip can't be made.
Silicon On Insulator (SOI)
During the question and answer session I asked Dr. Caulfield about GlobalFoundries' SOI plans. He replied that they are developing a 22nm process in Malta for manufacturing in Dresden. The goal is 14nm FinFET performance at 28nm costs. This would certainly be an interesting process if they can meet that goal. I do worry that GlobalFoundries appears to be pursuing a lot of different directions for leading-edge processes. IBM has a 14nm FinFET-on-SOI process with trench DRAM that they will presumably have to support for server products, GlobalFoundries and Samsung have a 14nm FinFET-on-bulk process for general foundry use, and GlobalFoundries is now developing a 22nm SOI process. That strikes me as a lot of leading-edge processes for one company to support.
You forgot the GPU Boost clock. The 980's GPU clock is much higher than the Titan X's.
GTX 980: 2048 cores (100% at equal clocks)
Titan X: 3072 cores (150% at equal clocks)
GTX 980: GPU Boost clock = 1216 MHz (100% effective throughput)
Titan X: GPU Boost clock = 1076 MHz (150% × 1076/1216 ≈ 133% effective throughput)
Result: GTX 980 = 100% and Titan X ≈ 133%, which is pretty accurate.
Just simple math.
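A minimal sketch of that estimate in Python, assuming performance scales linearly with shader count and boost clock (it ignores bandwidth, ROPs and real-world scaling, so treat it as a ballpark only):

```python
# Naive throughput estimate: cores x boost clock, normalised to the GTX 980.
def relative_throughput(cores, boost_mhz, base_cores=2048, base_mhz=1216):
    return (cores / base_cores) * (boost_mhz / base_mhz)

gtx_980 = relative_throughput(2048, 1216)   # 1.00
titan_x = relative_throughput(3072, 1076)   # ~1.33

print(f"GTX 980: {gtx_980:.0%}, Titan X: {titan_x:.0%}")
```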
It's just a scale based on each resolution. You can't compare those two graphs.
Exactly. HBM performance is a wild card. We don't know how well it will perform, positively or negatively, but I doubt the latter.
You do not have to defend the wtft site. They do estimations, good, bad, whatever. One does not go there for solid information; that is why most of us here take threads pointing in that direction as full-on rumors.
As far as HBM goes, what it does is simple. It removes any and all performance degradation caused by bandwidth.
Take your card and bench it at default clock. Then increase the VRAM clock until there is no further performance benefit. That is the bonus HBM gives, right there.
I did it with mine some weeks ago just for comparison; I gained a maximum of 3~14% boost in performance (depending on the game).
While the 290X has 37.5% more shaders and TMUs, it has 100% more ROPs (doubling pixel fillrate). Memory bandwidth only went from 264GB/s to 320GB/s, which is just 21%.
There is a lot to be extracted from such a larger GPU with proper bandwidth.
HBM has a dual command queue; I guess it reduces the impact of cache misses as a bonus.
While I do not think anyone can OC the 290X's VRAM to the point where it no longer gives a performance bonus, with several sub-step measurements we can approximate where it would end up if it had HBM.
Then you can take that as a base for Fiji and multiply it by the architectural changes, for a start, to get some rough approximation.
You won't know how big or small Fiji has to be to compete with the Titan X/980Ti, because sites do approximations based on the 290X, which is itself somewhat limited by memory bandwidth.
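A rough sketch of that sub-step measurement idea, assuming you have already logged average fps at a few VRAM clocks (the numbers below are made up purely for illustration):

```python
# Hypothetical fps readings at increasing VRAM clocks (MHz) on a 290X.
# Idea: watch where the fps gained per extra MHz flattens out; that plateau
# is roughly what the GPU would do if memory bandwidth were no longer a limit.
samples = [(1250, 60.0), (1375, 62.5), (1500, 64.0), (1625, 64.8)]

gains = [(f2 - f1) / (c2 - c1)
         for (c1, f1), (c2, f2) in zip(samples, samples[1:])]

for (clk, fps), gain in zip(samples[1:], gains):
    print(f"{clk} MHz: {fps:.1f} fps, +{gain * 100:.2f} fps per 100 MHz")
```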
Guessing you haven't seen Gigabyte's WaterForce 3-way SLI 980s then?
The GTX 980 Ti runs at 85°C, why isn't that hot? Also, in theory it only needs 75W less. Why do people act so stupid? (Or are they not acting?)
^ Good points people fail to see. My reference GTX 670 would also reach close to 85 degrees before I modded the crap out of it with an AIO and custom RAM sinks.
I'm not using the in-game thing, I'm using MSI Afterburner. There are some weird differences going on between AMD and Nvidia. I'm using similar settings to you, but with shadows on ultra and Nvidia soft shadows, at 1440p. Although it's kind of on topic as we are talking about VRAM, it's not fully on topic.
If I was going to buy one (which is highly unlikely as I already have a 980) I'd have to check whether my monitor can do 1440p over DisplayPort. It can only do 1080p via HDMI, so obviously I can't use that. I've never used DisplayPort, so I would need to buy a cable.
First, I'm not defending wccf. I'm just pointing out that anyone can do the math, which is the easiest way to approximate a GPU's performance within the same architecture.
E.g. Fiji has 45% more cores than the 290X and a 5% higher clock.
Result: 290X = 100%, Fiji ≈ 152% (1.45 × 1.05).
Something like this.
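The same naive scaling as a one-liner, using the 45% / 5% figures quoted above (again ignoring bandwidth, ROPs and real-world scaling, so it's an upper bound at best):

```python
# Naive core-count x clock scaling with the figures quoted above.
fiji = 1.45 * 1.05
print(f"290X = 100%, Fiji ~ {fiji:.0%}")   # ~152%
```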
...the 290X is limited by memory bandwidth?
If that's true, Fiji's performance will hit the roof...
I think that it's because the new AMD chip is running that high out of the gate without OC.
The 980Ti runs that temp with OC - which points to no headroom on the new AMD card.
Another matter is whether the new AMD chip needs an OC to run fast; only time will tell.
But in my world OC is free performance, and we all want that, right???
I'm actually very calm. I just proved you wrong and will do so once again.
I'm not a fanboii. Far from it. I will buy whichever card gives me the best performance in my budget. Have always done so, will continue doing it.
On the other hand, your specs say '390x when launched'.
Firstly, I have some news for you. Fiji will be called R9 Fury X. The 390X will most likely end up being a rebrand. It's sad that you don't know the difference. It's even worse if you're upgrading from a 290X to a 390X, which is practically the same card.
Secondly, I have a question. How can you be so certain you'll buy Fiji when we don't know anything about the card yet? Are you certain it will surpass the 980Ti? Are you aware that so far the reported launch price is higher than the 980Ti's? No. You're not. Because you are the brand loyalist, my friend. Not me.
You think coming here and talking nonsense about how HBM being faster will somehow negate the huge drawbacks of having only 4GB of VRAM impresses anyone?
Do you even know what happens when your GPU runs out of VRAM? It will stream data from the system's RAM. In case you weren't aware, in this situation HBM won't help you at all. How do you think the GPU will behave when you're streaming textures into it at only 20-30 GB/s as opposed to HBM's 640GB/s? I'll tell you what happens. Your framerates will go down so hard you can't even imagine.
In short, the moment you pass 4GB of VRAM the game will be unplayable.
Now don't you come to me saying that the low 4GB of VRAM doesn't matter because of HBM. The only thing you see is the 'AMD' logo and you're already pulling your money out. Sad, really.
You've basically proved that you have no clue what you're talking about. This conversation is over.
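A back-of-envelope illustration of the VRAM-overflow point above, using the bandwidth figures from that post (20-30 GB/s for system RAM over PCIe vs the quoted 640GB/s HBM figure); the 512 MB of spillover per frame is a made-up example, not a measurement:

```python
# How long it takes to pull 512 MB of overflowed texture data each frame:
# local HBM vs streaming it from system RAM over PCIe (figures from the post).
overflow_gb = 0.5            # hypothetical spillover per frame
hbm_gbps = 640               # quoted HBM bandwidth
pcie_gbps = 25               # midpoint of the 20-30 GB/s estimate

hbm_ms = overflow_gb / hbm_gbps * 1000
pcie_ms = overflow_gb / pcie_gbps * 1000

print(f"HBM:  {hbm_ms:.2f} ms per frame")    # ~0.78 ms
print(f"PCIe: {pcie_ms:.2f} ms per frame")   # ~20 ms, enough to tank framerates
```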
Very well said!!! Stick to the topic, guys, and don't forget: you should not support overpriced products, no matter which company it is!!
People keep saying graphics cards are overpriced nowadays. Why?
10 years ago the 7800 GTX launched at $600. Cumulative dollar inflation over the last 10 years is about 21%, so the top card then should cost $726 now. And yet the 980 launched at $550 and the 980 Ti is $650. So if anything, prices have gone down. And that's despite the fact that Nvidia's R&D cost for a new architecture has gone from $600M in 2006 (G80) to $1.2B+ for Maxwell.
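For reference, the inflation adjustment works out like this (the 21% cumulative figure is the poster's, not an official CPI number):

```python
# Adjusting the 2005 launch price of the 7800 GTX by ~21% cumulative inflation.
launch_price_2005 = 600
cumulative_inflation = 0.21
print(f"${launch_price_2005 * (1 + cumulative_inflation):.0f} in 2015 dollars")   # $726
```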
R9 290X 4GB vs 8GB running Shadow of Mordor with Ultra settings at 4K shows just that.
So what we want to see when the 390X launches is how it handles the same thing and how the framerates are affected.
I'm inclined to think that there might be some pretty hefty spikes if HBM really is as fast as some state, unless AMD has some special trick up their sleeve that can make 6GB fit into 4GB without adding any additional overhead to the rest of the system.
Just saying, there is no point in a trillion max fps if minimum fps drops add tons of stutter.
You should look up what the 8800 Ultra cost in 2007.
I bought 2x Voodoo2 12MB back in the day; those cost $500+ ($250+ each).
And that's not counting the separate graphics card needed for 2D and D3D on top.
A guy made a script to show how 4GB of HBM will outperform 4GB of GDDR5.