Discussion in 'Videocards - AMD Radeon' started by OnnA, Oct 29, 2020.
This is what I'm waiting for:
AMD Navi 31, the first desktop chiplet-based GPU?
Please note that this post is tagged as a rumor.
Navi 31 is shaping up to be a true compute monster.
We have heard rumors about the upcoming Navi 31 GPU for a while now. In fact, there have been rumors about Navi 41 already. Navi 31 might be AMD's first MCM (multi-chip module) design. NVIDIA is also expected to take the same route with its Hopper series; however, it remains unclear whether that architecture is meant for gaming or compute workloads. AMD, on the other hand, made it clear that RDNA3 has Radeon DNA and is definitely aimed at the gaming market.
The successor to the Instinct MI100 (no longer called Radeon), based on the Arcturus GPU and the CDNA architecture, will compete with NVIDIA's Gx100 compute chips. CDNA is more than likely to take the same path with a multi-chiplet design at some point in the future – it is simply easier to synchronize simple compute workloads across multiple dies than complex graphics. Even Intel's Xe-HP architecture will be based on 'tiles', which might be the industry's first attempt at a GPGPU chiplet design.
Full article:
Has anyone tried the SAM/BAR functionality on Intel Z300-series boards yet?
I know Gigabyte just released some BIOS updates with it, e.g. for the Z390 Aorus Pro, so I am curious how it works now. I recently got a Reverb G2 VR set and I am looking for a performance boost...
At the moment in the OpenVR benchmark I get 26 fps (while 3080/3090 cards get around 29+), and at the top of the scores sits a 5800X with a 6800 XT at a freaking 50 fps. Either that's a bug, or SAM just did that.
Btw, I just listed my son's 5700 XT on eBay (he got my 2080 Super), and with 3 days still to go, the price has gone from a €250 asking price to €475 now. Pure madness.
Poked around a bit more with Wattman and the new 20.1.1 drivers, working out what can be set and what has an effect.
Best I can tell it's fairly simple but also kinda limited: increase the power draw and the GPU can scale; otherwise it quickly hits a power limit, so while small, that 15% does help.
Since it works with an offset voltage and only allows a decrease here, the next step was to clock down the GPU and see how it scales, then clock up and see where it locks to a static (max) value; that gives a range to work with before it caps out at the maximum.
Going to need some further testing and a bigger range of titles and workloads to solidify these results, but for now this 6800 GPU can handle up to 2400 MHz, fluctuating between 0.980 and 1.000 V, and of course the full 1.025 V maximum.
So with a -25 mV offset to 1.000 V I can keep around 0.955 to 1.000 V at roughly 2350 to 2380 MHz, based on a quick session in Assassin's Creed Valhalla and Watch Dogs Legion.
Pushing into and over 2400 MHz locks the voltage to 1.025 V, and while a voltage offset still applies, the resulting clock speed is lowered, so it serves little purpose.
Min-maxing the actual voltage figure via MorePowerTool is where any further gains would have to be made, combining a lower voltage, power draw, and resulting temperature with an effective GPU clock that still scales.
Next up would be to find the stability threshold, since 2400 MHz is roughly where it caps to the maximum allowed voltage. I've only tested up to 2450 MHz and that held, but chances are that above 2480 to 2500 MHz it's going to hit display driver issues from the lack of additional voltage, if it even scales at all, since it might also cap out shortly beyond this without additional wattage.
A fun little experiment: it mostly retains the same clock values, with slight dips below 2400 MHz, but I can shave off a little voltage without taking a larger hit to clock speeds, at least in these tests, and the resulting temperature reduction is nice too.
Settling around 70 °C for the junction temp with a reduced fan speed as a bonus, holding below the 75 °C threshold where the card might otherwise start throttling.
(Summer months might need a higher speed though. Nice and cool now but that's going to change. )
EDIT: It's kinda simple in how it works: find where the voltage goes to a static value, and from there, if you want to use an undervolt, you get a little range to work with and the resulting changes to temps and such. As for overclocking, well, there's some headroom at least, and the gains to be had are better than with Navi 10 and its scalability.
(Speed-wise though, pushing to 2.5 GHz, even if the default clocks are close to begin with, is impressive.)
Could probably clock it down and keep a default or lower power draw if desired, but the minimum has a cap too for whatever reason, so savings would be limited as a result.
MPT could help here too, though, if decreasing the value further isn't hitting any restrictions.
And for min-maxing these values and gains it seems you kinda want to use this utility, because Wattman is going to be limited otherwise due to how it works with Navi 21.
(You have to change the defaults and scale from there instead.)
Wonder if the 6800 XT might be the ideal one: a few fewer CUs than the 6900 XT, but it has the same other hardware along with the higher power draw defaults, voltage limit, and clock speed cap.
The 6800 might have some scalability too, but the lowered clock speed cap is also a thing, as are the CU count and other hardware differences.
(GPU binning might also be better for the 6800 XT, though how much that means I can't really say.)
Still, they're all close to one another, not just the 6800 XT and 6900 XT with their shared power draw target.
And if the drivers could also change up the scaling, then the CUs and other hardware differences could result in a bigger spread here, not just 5-10% between each GPU.
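If you wanted to script that "walk the clock up until the voltage pins" search instead of eyeballing it, a rough sketch could look like this. Purely illustrative: modelled_voltage() is a stand-in for real sensor readings (e.g. from GPU-Z/HWiNFO logging), not any actual Wattman or MPT API, and the numbers just mimic what this particular 6800 showed.

```python
# Toy sketch of the sweep described above. modelled_voltage() is a
# placeholder (assumption, not a real API) that mimics the observed
# behaviour: voltage scales with clock until it pins at the 1.025 V cap
# somewhere around 2400 MHz.

VOLTAGE_CAP = 1.025  # V, the maximum this 6800 reports

def modelled_voltage(clock_mhz):
    # stand-in for a real measurement under load
    return min(VOLTAGE_CAP, 0.880 + (clock_mhz - 2200) * 0.0007)

def find_cap_clock(lo=2200, hi=2500, step=25):
    """Walk the clock limit up and report where voltage stops scaling."""
    for clock in range(lo, hi + 1, step):
        v = modelled_voltage(clock)
        print(f"{clock} MHz -> {v:.3f} V")
        if v >= VOLTAGE_CAP:
            return clock  # past here, extra clock only lowers effective speed
    return None

print("voltage pins at ~", find_cap_clock(), "MHz")
```

The clock where it returns is where the undervolt headroom ends; below that point you have the offset range to play with.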
I must say, I like my Merc 319.
This is a beast. Hovering at 2500 MHz almost non-stop, with the junction around 80 °C and the GPU temp around 62-64 °C while playing Q2RTX, is awesome (dunno why, but Q2 works with this card now; it didn't on the previous 12.2 driver).
Mem is at 2140 MHz, custom fan curve, no zero RPM because, well, I do not like it. I kinda have the feeling that one second too late with the RPM ramp-up and my memory could go bye-bye, and I wouldn't like that.
Now... I need to know: will there be support for my Z370 motherboard, or will I have to buy some Z390 MSI/Gigabyte board to get it? ASRock hasn't answered on this subject. Yet.
Boys, I dunno what these drivers did, but I can now peak at almost 2800 MHz STABLE on my 6900 XT. Seems like a 100 MHz offset is what these cards like now. Got a 4 FPS boost in Cyberpunk 2077 as well. Yikes guys, these cards just keep getting better and better. 2791 MHz peak boost clock and holding. Hell yeah.
Which drivers are you talking about? The new ones for Hitman?
I'm still not getting proper usage out of my 6900 XT when I overclock. If I set it to 2700 MHz it will run lower than that, like 2500 MHz roughly... I'm getting lower scores on Time Spy than other 6900 XTs, like in the low 17k's... meanwhile other guys are averaging 18k.
Do you mind briefly telling me what your overclocking method is? I'm currently just using AMD Radeon Software... I came over to AMD after years of Nvidia cards and MSI Afterburner.
Best score I could get. Most of my runs are within +/- 50 points depending on temps, and I have to have the fans really ramp up.
Haven't tried MPT; all done with Radeon Software. Was hoping to be able to break 20k.
@ogiebogie Can't you get more from the GPU? Like 2600-2700 MHz stable with 1.167 V set on the GPU?
PowerColor 6800 XT Red Devil. I think I got a good unit; I think it would go up a bit more.
Seems like you did. Mine caps out at 2600 MHz and after that it's just artifacts.
I had no artifacts passing the test; I didn't try to push it any further either!
I've run mine on Time Spy also: https://www.3dmark.com/3dm/57323675?
I can't. The T-junction gets too hot and it throttles down.
Up to a 30-40 °C difference between GPU temp and hot spot.
Some of my results using MorePowerTool with a Sapphire Nitro+ RX 6800 XT:
I don't have SAM enabled, because I have a Z370 motherboard.
CPU: i7-8700K @ 4.7 GHz 1.245 V ---> possible CPU limit at some points in the benchmarks
Memory: 2x8GB G.Skill Flare X 3200MHz CL14 1.35V @3866MHz CL17 1.425V
The GPU cooler is stock; I just ran the fans at 100%.
If you want a 24/7 setting for an RX 6800 XT, you should aim for a max voltage of 1050-1075 mV using MPT! Wattman alone is useless...
For me a 2550-2650 MHz range is stable in most games at 1050 mV, but 3DMark crashes. Above these voltages the power consumption climbs way faster, like +30% power for +5% performance, but there isn't a big extra cost going from 890 mV to 1050-1075 mV; there the extra power consumption is roughly in step with the extra performance.
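That trade-off is roughly what simple dynamic-power scaling predicts. A back-of-the-envelope check, assuming P ~ f * V² and ignoring leakage (which makes high voltage even worse in practice); the 1150 mV endpoint is just an example figure, not a measured one:

```python
# Relative dynamic power under the P ~ f * V^2 approximation.
def rel_power(f_ratio, v_from_mv, v_to_mv):
    return f_ratio * (v_to_mv / v_from_mv) ** 2

# pushing from 1075 mV to a hypothetical 1150 mV for maybe +5% clock:
print(f"~{rel_power(1.05, 1075, 1150):.2f}x power for 1.05x performance")
# prints ~1.20x; add leakage on top and you land near the +30% observed
```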
Hey, what settings should I enable/disable to get rid of that "Benchmark tessellation load modified by AMD Catalyst driver, result invalid." message?
It seems you got a great chip.
For me, without an MPT voltage limit, I can achieve frequencies in the 2580-2600 MHz range with a 2650 MHz @ 980 mV setting in Wattman (15% power limit), which gives a Time Spy score of ~19300.
When applying a lower voltage with MPT, any voltage below 1110 mV is unstable if I set 2650 MHz in Wattman.
I set the lower limit in MPT and then try to find a voltage in Wattman at which the 2650 MHz setting is possible. I do find such settings, but if you do more than one run of Time Spy it will eventually crash.
This indicates to me that my GPU needs a higher voltage to hold a 2600 MHz clock stable all the time.
As mentioned above, without limiting the voltage I get a very stable 2650 MHz @ 980 mV setting. I get a 99.7% score in the Port Royal stress test.
If you want a stable 2600 MHz average, you need to set the max voltage to 1075 mV using MPT.
In Wattman you need to set 975 mV and a clock range of 2550-2650 MHz.
+15% power limit, memory at 2150 MHz with fast timings.
Forget what you set in Wattman; always check in GPU-Z how much voltage your card is actually getting.
If you set the voltage in Wattman higher, the card is going to boost to the max of the range you set (possible crash).
If you set the voltage in Wattman lower, the card is going to boost to the minimum of the range you set, or in some cases to even lower levels than your minimum (performance loss).
This is why you should find the "correct" voltage setting in Wattman, so that the card stays in the middle of your range and never goes too low or too high.
The above settings should work with the January drivers. With the December driver you can set the voltage in Wattman even lower (down to 915 mV).
Just run 3DMark and check your clock speeds + voltage in GPU-Z so you know everything is working how you wanted. But as I wrote, with the settings above you should be fine; it worked perfectly for three people.
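That rule of thumb, written out as a mental model (this is not a real Wattman API, just the described behaviour in code form; "correct_mv" is the card-specific sweet spot you have to find by testing):

```python
# Mental model: the Wattman voltage decides which end of your clock
# range the card gravitates to.
def predicted_boost(wattman_mv, correct_mv, clock_min, clock_max):
    if wattman_mv > correct_mv:
        return clock_max              # chases the top of the range -> possible crash
    if wattman_mv < correct_mv:
        return clock_min              # or even below the minimum -> performance loss
    return (clock_min + clock_max) // 2  # stays mid-range

# with the settings above: 975 mV against a 2550-2650 MHz range
print(predicted_boost(975, 975, 2550, 2650))  # -> 2600
```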
Ran another Fire Strike Ultra test and got my highest scores yet with the 6900 XT, with a stable max boost clock set to 2791 MHz, and a 13.7% uplift over the average RTX 3090 tested on 3DMark.
What you mentioned about the new driver and 975 mV is close to what I am using, i.e. 2650 MHz @ 980 mV is my setting with a 15% power limit. No memory overclock.
I already wrote about that and about the new driver last week.
Here is the link:
I already tried something similar to what you mentioned about MPT last weekend.
I spent a lot of time testing the MPT max GFX voltage setting from 1120 mV down to 1070 mV in 10 mV steps.
Believe me, the settings you use on your card are not stable on mine.
No stable setting below 1120 mV in MPT is possible for me.
Of course, after setting the max GFX voltage with MPT, I tried Wattman voltage settings from 1010 mV down to 920 mV to hold 2650 MHz, running Time Spy for every 10 mV step.
At 920 mV it was somewhat stable in Time Spy with the MPT max GFX voltage at 1090 mV, but it failed the Port Royal stress test with a complete PC hang requiring a restart.
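For reference, the grid that sweep covers, enumerated rather than typed out by hand (nothing card-specific here, just the combinations described above):

```python
# MPT max GFX voltage 1120 -> 1070 mV and, for each of those, Wattman
# 1010 -> 920 mV, all in 10 mV steps, one Time Spy run per combination.
mpt_steps = range(1120, 1069, -10)      # 1120, 1110, ..., 1070 mV
wattman_steps = range(1010, 919, -10)   # 1010, 1000, ..., 920 mV

combos = [(m, w) for m in mpt_steps for w in wattman_steps]
for mpt_mv, wm_mv in combos:
    print(f"MPT max {mpt_mv} mV, Wattman {wm_mv} mV @ 2650 MHz -> run Time Spy")
print(len(combos), "runs in total")  # 60 runs, hence "a lot of time"
```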
The other two guys you mentioned using the same settings as you must also be lucky to have similarly performing cards. It is not very common for the exact same undervolt/overclock settings to work on multiple cards.
I don't look only at the metrics in Wattman for frequencies and voltage; I use HWiNFO64 to write all the data into a log file that I can analyse nicely in Excel. I can, for example, look at peaks, averages, etc.
GPU-Z can also do logging, but HWiNFO64 provides more sensors from across the computer in one place, so you can monitor them all.
For example, my Corsair power supply (HX1200) has built-in monitoring, so I can also record the system's power consumption with HWiNFO64 during benchmarks and approximate total GPU power consumption from that data.
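If you'd rather skip Excel, the same peaks/averages can be pulled from the HWiNFO64 CSV log with a few lines of Python. A sketch only: the file name and column headers below are placeholders, since they vary per system and sensor layout.

```python
import pandas as pd

# HWiNFO CSV logs are often not UTF-8, so an explicit encoding helps
log = pd.read_csv("hwinfo_log.csv", encoding="latin-1")

# example column names -- adjust to whatever your log actually contains
for col in ["GPU Clock [MHz]", "GPU Voltage [V]"]:
    series = pd.to_numeric(log[col], errors="coerce").dropna()
    print(f"{col}: avg {series.mean():.3f}, peak {series.max():.3f}")
```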
I don't use the iCUE software from Corsair. That software is horrible and causes many problems with ASUS software; for example, it breaks ASUS Aura Sync and AI Suite.