Navi RDNA Owners Thread, Tests, Mods, BIOS & Tweaks!

Discussion in 'Videocards - AMD Radeon' started by OnnA, Jun 11, 2019.

  1. MaCk0y

    MaCk0y Master Guru

    Messages:
    629
    Likes Received:
    189
    GPU:
    GB 5700XT Gaming OC
    Can you please use GPU-Z to log sensor data while idling and take note of the time when the black screen occurs and upload it here? I want to see what the core voltage and core clocks are when it happens. If you can, only log the clocks, voltages and loads. You can leave out temperatures.
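    If you want to trim the log down first, something like this works. (A minimal sketch, assuming GPU-Z's default comma-separated sensor log; the file name and timestamp are just examples.)
    [CODE]
    import csv
    from datetime import datetime, timedelta

    LOG_FILE = "GPU-Z Sensor Log.txt"        # hypothetical path to your log
    EVENT = datetime(2020, 1, 4, 7, 30, 16)  # example black-screen time
    WINDOW = timedelta(seconds=30)           # keep +/- 30 s around it
    KEEP = ("Date", "GPU Clock", "Memory Clock", "GPU Load",
            "Memory Controller Load", "GPU Voltage", "Memory Voltage")

    with open(LOG_FILE, newline="") as f:
        reader = csv.reader(f)
        header = [col.strip() for col in next(reader)]
        # match padded names like "GPU Clock [MHz]" against the KEEP prefixes
        wanted = [i for i, col in enumerate(header) if col.startswith(KEEP)]
        print(", ".join(header[i] for i in wanted))
        for row in reader:
            if len(row) < len(header):
                continue  # skip truncated lines
            try:
                stamp = datetime.strptime(row[0].strip(), "%Y-%m-%d %H:%M:%S")
            except ValueError:
                continue  # repeated header line after a GPU-Z restart
            if abs(stamp - EVENT) <= WINDOW:
                print(", ".join(row[i].strip() for i in wanted))
    [/CODE]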
     
  2. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    16,041
    Likes Received:
    1,943
    GPU:
    AMD S. 5700XT Pulse
    I'll give it a try once I'm back. I've been thinking the log might survive if the issue is of the recoverable variety, and it could be informative, though I have tested with stock, high and low Wattman voltages before with no change in how this happens. Clock speeds could be a factor, since these usually fluctuate between 0, 6 and 30 MHz while the GPU is mostly idle. (Roughly: doing nothing, moving the mouse cursor, and browsing, though with HW acceleration disabled since that's still not 100% fixed either.)

    EDIT: New Year's is eight or so hours away here, so I'll be with my parents and sister for that, but yeah, I'll keep GPU-Z running and logging; that should eventually catch this little problem and maybe provide some insight. :)

    Would be interesting if it's something like the Fury or 290 issues, though I think part of the 290 and Vega 64 problems were memory related and then fixed via firmware applied in newer drivers. For Fury I'm not entirely sure the black screen / display loss issue was ever fully explained, but again, a later driver mostly resolved the situation.

    EDIT: And while the core clock minimums don't apply at idle, the voltage does, though it follows a curve, so inputting 0.9 might see it hit 0.8 to 0.85, for example. One of my early ideas was instability from the voltage dropping too low.
    (But that should have been resolved in an earlier driver, and now it either caps a bit higher at idle or was resolved in some other way.)

    The entire Power Play segment for this hardware is new and different, so I don't know much about it all and haven't found too much about it either, other than some minor details via Linux code commits for this, Overdrive and the Power Play changes.
    (Feels like it's similar to, but not quite the same as, how AMD's Ryzen, particularly the Zen 2 architecture and related processor models, constantly adjusts to different conditions or workloads instead of the former fixed performance or power states.)
     
    Last edited: Dec 31, 2019
    MaCk0y likes this.
  3. MSIMAX

    MSIMAX Active Member

    Messages:
    54
    Likes Received:
    10
    GPU:
    3x 290x cfx
    [IMG]
    [IMG]
     
    OnnA likes this.
  4. MSIMAX

    MSIMAX Active Member

    Messages:
    54
    Likes Received:
    10
    GPU:
    3x 290x cfx

  5. MSIMAX

    MSIMAX Active Member

    Messages:
    54
    Likes Received:
    10
    GPU:
    3x 290x cfx
    I have no clue if I'm keeping the card, due to the 2020 driver issue.
    PowerColor released a BIOS and it had the same issues. Maybe AMD really killed off 5700 XT PPT mods with the new drivers, but I can't prove it yet.
     
  6. SpajdrEX

    SpajdrEX AMD Vanguard

    Messages:
    2,181
    Likes Received:
    498
    GPU:
    Sapphire RX5700XT
    MSIMAX: what happened to your bandwidth? It should be about twice that. :)
     
  7. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    16,041
    Likes Received:
    1,943
    GPU:
    AMD S. 5700XT Pulse
    Maybe from PCIe going into power-saving mode: 3.0 x8 in the prior image, 1.1 x8 in this one.
    Unsure if that reflects in the other stats, though come to think of it, it should be higher.

    EDIT: Yeah, it kinda fluctuates. I was thinking 380 GB/s, but it's actually upwards of 450 GB/s, up and down a bit for whatever reason; the card's kinda doing its thing scaling up and down, I suppose.

    EDIT: Nope, that power-saving state doesn't affect the reported value.

    [IMG]


    EDIT: Huh, different revisions, that's interesting.
    (There is an earlier 5700 XT Pulse BIOS on TechPowerUp with slightly different values, but maybe that's not it; curious, but it's just what's reported and could be various things.)

    Some of the rates do seem affected by at least the clocks, I suppose, so it's not entirely static from database or BIOS-reported values at least. I'm at 1.9 GHz at 1.0 V boost and 1.4 GHz for regular workloads with 0.9 V for that. (0.9 GHz for low workloads, whenever the GPU decides to stop ignoring that one aside from its voltage value. :p It does kinda use the voltage though, so 0.8 V for that and a bit lower as the GPU sits at its idle speeds of 0, 6 or 30 MHz usually.)


    EDIT: As for memory, that's an Adrenalin 2020 thing reporting the doubled value instead of the previous 875 MHz, which I'm trying to test around 1.8 GHz, or well, 890 MHz I suppose, depending on how that's read. :D
    (This one does write into the saved Wattman profile too, and it needs updating depending on whether the user is on 19.12.1 or earlier, so 875-ish MHz, or 19.12.2 or newer, so 1750-ish MHz.)
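    If a script has to read those saved profiles, the normalization is simple enough; a minimal sketch (the version cutoff is from the above, the helper name is just made up):
    [CODE]
    def actual_mem_mhz(stored_mhz, driver_version):
        """Return the real memory clock in MHz from a saved Wattman profile value.

        19.12.1 and earlier store the real clock (e.g. 875 for stock),
        19.12.2 and newer store the doubled data rate (e.g. 1750).
        """
        return stored_mhz / 2 if driver_version >= (19, 12, 2) else stored_mhz

    print(actual_mem_mhz(875, (19, 12, 1)))   # 875
    print(actual_mem_mhz(1750, (19, 12, 2)))  # 875.0 -- same hardware clock
    [/CODE]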


    And it was a default install, so I ended up on the DCH driver. I was a bit skeptical of it, but after comparing, it only really seems to be missing the audio service AMD's running, DSEManager.exe or whatever it was called; the rest is there. It differs a bit from how the INF sets it all up, so it initially looked like it was missing a lot of settings (i.e. Chill, D3D10+ registry data and more), some files and running services. (Media playback related files and that audio manager process, plus OpenCL files.)

    U***.inf has to be moved away on Win10 Redstone 3 or later (build 16000, I think, and newer?) or it'll default to that.

    Should work just about the same function-wise, and eventually I assume Microsoft will make it more of a requirement, however that ends up.
     
    Last edited: Jan 3, 2020
  8. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    16,041
    Likes Received:
    1,943
    GPU:
    AMD S. 5700XT Pulse
    Actually, maybe DDR being reported as double the data rate in effect means GPU-Z has a bit of a quirk in how it reports bandwidth, just thinking about it a bit further.

    It's the same in effect but the value is doubled after all.

    EDIT: Err, GDDR6, just to call it by its actual name.
    The VRAM clock speed. :p


    So yeah, AMD changed how it's read out, and thus the bandwidth doubles because it's "2x faster", yet not really. (Because it's the same, and I'm making this overly complicated again.)

    EDIT: Or the program does take it into account after being updated for Adrenalin 2020 compatibility, but then on older drivers it reports a lower value, and that goes a bit off as a result. Eh, I'm just complicating it.
     
    MSIMAX likes this.
  9. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    16,041
    Likes Received:
    1,943
    GPU:
    AMD S. 5700XT Pulse
    Interestingly, while it hasn't black screened so far, a few days of testing have confirmed that the memory clocks frequently spike up to higher speeds even during idle.

    Date , GPU Clock [MHz] , Memory Clock [MHz] , UVD Clock [MHz] , UVD Clock [MHz] , GPU Temperature [°C] , GPU Temperature (Hot Spot) [°C] , Memory Temperature [°C] , GPU VRM Temperature [°C] , Mem1 VRM Temperature [°C] , Mem2 VRM Temperature [°C] , Fan Speed (%) [%] , Fan Speed (RPM) [RPM] , GPU Load [%] , Memory Controller Load [%] , Memory Used (Dedicated) [MB] , Memory Used (Dynamic) [MB] , GPU only Power Draw [W] , GPU Voltage [V] , Memory Voltage [V] , CPU Temperature [°C] , System Memory Used [MB] ,
    2020-01-04 07:30:13 , 7.0 , 200.0 , 0.0 , 0.0 , 32.0 , 32.0 , 36.0 , 26.0 , 0.0 , 0.0 , 16 , 516 , 0 , 0 , 237 , 32 , 7.0 , 0.775 , 0.675 , 42.1 , 5829 ,
    2020-01-04 07:30:14 , 7.0 , 200.0 , 0.0 , 0.0 , 32.0 , 32.0 , 36.0 , 26.0 , 0.0 , 0.0 , 16 , 515 , 0 , 0 , 237 , 32 , 7.0 , 0.775 , 0.675 , 40.5 , 5829 ,
    2020-01-04 07:30:15 , 7.0 , 200.0 , 0.0 , 0.0 , 32.0 , 32.0 , 36.0 , 26.0 , 0.0 , 0.0 , 16 , 515 , 0 , 0 , 237 , 32 , 7.0 , 0.775 , 0.675 , 39.0 , 5828 ,
    2020-01-04 07:30:16 , 7.0 , 200.0 , 0.0 , 0.0 , 32.0 , 32.0 , 36.0 , 26.0 , 0.0 , 0.0 , 16 , 515 , 0 , 0 , 237 , 32 , 7.0 , 0.775 , 0.675 , 37.5 , 5830 ,
    2020-01-04 07:30:17 , 7.0 , 200.0 , 0.0 , 0.0 , 32.0 , 32.0 , 36.0 , 26.0 , 0.0 , 0.0 , 16 , 514 , 0 , 0 , 237 , 32 , 7.0 , 0.775 , 0.675 , 35.9 , 5830 ,
    2020-01-04 07:30:18 , 32.0 , 888.0 , 0.0 , 0.0 , 33.0 , 33.0 , 36.0 , 26.0 , 0.0 , 0.0 , 16 , 515 , 2 , 0 , 237 , 32 , 17.0 , 0.775 , 0.675 , 35.5 , 5914 ,


    That sort of thing, repeating. An interesting little behavior from the GPU; not much to it for now, but it's something that shows up over and over.

    EDIT: Also, while the GPU core clocks often vary between 6 and 30 MHz, that's not immediately tied to the memory clocks.

    2020-01-04 07:30:19 , 78.0 , 200.0 , 0.0 , 0.0 , 33.0 , 33.0 , 36.0 , 26.0 , 0.0 , 0.0 , 16 , 514 , 5 , 4 , 237 , 32 , 10.0 , 0.775 , 0.675 , 45.0 , 5967 ,
    2020-01-04 07:30:20 , 57.0 , 200.0 , 0.0 , 0.0 , 32.0 , 32.0 , 36.0 , 26.0 , 0.0 , 0.0 , 16 , 514 , 6 , 3 , 237 , 32 , 9.0 , 0.775 , 0.675 , 43.8 , 5967 ,
    2020-01-04 07:30:21 , 25.0 , 202.0 , 0.0 , 0.0 , 33.0 , 33.0 , 36.0 , 26.0 , 0.0 , 0.0 , 16 , 514 , 0 , 0 , 237 , 32 , 8.0 , 0.775 , 0.675 , 42.6 , 5967 ,
    2020-01-04 07:30:22 , 81.0 , 204.0 , 0.0 , 0.0 , 33.0 , 33.0 , 36.0 , 26.0 , 0.0 , 0.0 , 16 , 515 , 9 , 4 , 237 , 32 , 10.0 , 0.775 , 0.675 , 41.5 , 5980 ,


    So here it fluctuates up to near 100 MHz at idle due to browsing some files and folders on the desktop, yet the memory clock remains near its 200 MHz idle speed.


    EDIT: Hmm, I can also see the memory voltage remaining constant even when it spikes; I wonder if that's related. Interesting.
    (I.e. too low a voltage to sustain sudden spikes to higher clock speeds. Just a guess, though.)
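    If anyone wants to pull those spikes out of a longer log automatically, a minimal sketch assuming the same GPU-Z CSV layout as the paste above (the thresholds are my own guesses):
    [CODE]
    import csv

    IDLE_MEM_MHZ = 250   # anything above this at idle counts as a spike
    IDLE_LOAD_PCT = 5    # "idle" = GPU load at or below this

    with open("GPU-Z Sensor Log.txt", newline="") as f:   # hypothetical path
        reader = csv.reader(f)
        header = [c.strip() for c in next(reader)]
        mem = header.index("Memory Clock [MHz]")
        load = header.index("GPU Load [%]")
        for row in reader:
            if len(row) <= max(mem, load):
                continue  # skip truncated lines
            try:
                mem_mhz, load_pct = float(row[mem]), float(row[load])
            except ValueError:
                continue  # repeated header line after a GPU-Z restart
            if mem_mhz > IDLE_MEM_MHZ and load_pct <= IDLE_LOAD_PCT:
                print(row[0].strip(), mem_mhz, load_pct)
    [/CODE]
    Run against the rows above, it would flag the 07:30:18 entry (888 MHz at 2% load) and skip the rest.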
     
    Last edited: Jan 4, 2020
    MSIMAX and MaCk0y like this.
  10. Passus

    Passus Maha Guru

    Messages:
    1,194
    Likes Received:
    227
    GPU:
    RX 5700 Mech OC
    Anyone know what MSI's stance on thermal paste changes is for the UK?

    I want to repaste, as the MSI 5700s have sub-par paste on the GPU.

    I also want to change the thermal pads on the VRAM, but I'm worried about the warranty.

    Thanks
     

  11. kanenas

    kanenas Member

    Messages:
    39
    Likes Received:
    14
    GPU:
    rtx 2070 aurus
  12. MSIMAX

    MSIMAX Active Member

    Messages:
    54
    Likes Received:
    10
    GPU:
    3x 290x cfx
    DDU'd the drivers and reinstalled 19.12.1, then ran the AIDA test. I need something to compare against to see if the memory is off or if it's a reporting issue with GPU-Z.

    [IMG]
     
  13. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    16,041
    Likes Received:
    1,943
    GPU:
    AMD S. 5700XT Pulse
    That's correct, isn't it? It's just that the Adrenalin 2020 drivers report the double rate, so 2x the actual value, for whatever reason. Would have been simpler if they'd kept it, maybe reporting the total values separately.
    So 875 MHz default, or 1750 MHz if doubled as per DDR (double data rate), for an effective rate of around 14 Gbps, up to around 15.2 Gbps if pushed near the max of 950 MHz / 1900. AMD capped it low, and ECC / error correction often kicks in, so it was probably a good thing to avoid issues with the memory controller, though there might be some headroom for extreme overclocking pushing past the default limits of AMD's drivers / Wattman. :)
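    For reference, the arithmetic works out like this (GDDR6 moves 16 bits per pin per clock against the traditionally reported 875 MHz, and Navi 10 has a 256-bit bus):
    [CODE]
    def bandwidth_gbs(mem_mhz, bits_per_clock=16, bus_bits=256):
        # per-pin effective data rate in Gbps, then total bus bandwidth in GB/s
        gbps_per_pin = mem_mhz * bits_per_clock / 1000
        return gbps_per_pin * bus_bits / 8

    print(bandwidth_gbs(875))      # 448.0 GB/s at the stock 14 Gbps
    print(bandwidth_gbs(950))      # 486.4 GB/s at the 950 MHz / 15.2 Gbps cap
    print(bandwidth_gbs(1750, 8))  # 448.0 again -- doubled clock, halved multiplier
    [/CODE]
    Which matches the ~450 GB/s GPU-Z was showing earlier; the doubled reporting changes nothing as long as the multiplier is halved to match.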

    EDIT: It would have made it easier for Wattman profiles to carry over too, instead of having 875 MHz as the default written value in 19.12.1 and earlier and 1750 MHz as the default for 19.12.2 and on. :p

    As for how this affects tools such as GPU-Z: either it reports correctly on new drivers and incorrectly on earlier ones, or it simply reads the doubled values, in which case the derived values for the other stats are doubled too. I guess that's how it's doing it currently.

    Navi's not bad, but the current cards do have some memory bottlenecks and limits, much as they have various strengths and advantages, just about hitting Radeon VII performance levels. So it'll be really interesting to see how the newer cards do on something like a 384-bit GDDR bus, and maybe more memory too, and of course HBM up to 16 GB with additional improvements since Fury and Vega, giving even more bandwidth plus improvements such as latency and access times, I believe it was. (HBCC 2 perhaps, or just HBCC itself making a return.)


    EDIT: Plus seeing how this Navi architecture and the RDNA changes actually scale if pushed into 50 - 60 core clusters, so 4096 cores or even higher this time. :D
    (ROPs and TMUs are also a bit more decoupled, allowing those to improve too, whatever that might do, although AMD would probably have to strike a bit of a balance cost-wise for what they can do with these cards.)

    Should at least edge up into 2080 or even 2080 Ti performance levels, though close to when NVIDIA will be revealing the 3000 series, or whatever they'll be called, for the desktop / general consumer models outside the professional or workstation environment, whichever is first out.

    Well, if AMD's going all out with Navi 20, hitting 2080 Ti performance levels would surely be expected by now, plus whatever gains above that might be possible; there's still a bit of a gap after all, so we'll see, plus drivers, and yeah, how well does Navi actually scale above this current 5700 card? It will be interesting to see!
    (However it goes, some competition on the GPU side for 2020 wouldn't be bad, but there's more than the high-end, or well, enthusiast GPU market segment the 2080 Ti sits in, so a more complete product launch for the low and mid range wouldn't be bad for AMD either.)
     
    Last edited: Jan 5, 2020
  14. MaCk0y

    MaCk0y Master Guru

    Messages:
    629
    Likes Received:
    189
    GPU:
    GB 5700XT Gaming OC
    Could be because of the different version of AIDA.

    Default

    [IMG]

    Overclocked

    [IMG]
     
    JonasBeckman and MSIMAX like this.
  15. Passus

    Passus Maha Guru

    Messages:
    1,194
    Likes Received:
    227
    GPU:
    RX 5700 Mech OC
    I can overclock my VRAM to 950 MHz (15.2 Gbps effective) and get like 1 FPS, so no difference at all overclocking it.

    The biggest jump is in GPU overclocking: my 5700 reaches 1975 MHz (1925 effective; stock is 1750, with 1700 effective) at 1.150 V, and that gives an 8-10 percent increase.
     

  16. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    16,041
    Likes Received:
    1,943
    GPU:
    AMD S. 5700XT Pulse
    Yeah, you have to increase it by 5 MHz at a time and then decrease once performance lowers as ECC kicks in, though it can also just be unstable, or it might not be a factor at all depending on what's holding up GPU performance, since the way it scales can be related to a lot of different factors. You need something that can push the GPU and also push memory; Battlefield 5 DX12 seems like a really good test since it's incredibly sensitive to instability, though benchmark-wise I suppose anything D3D12 or Vulkan could also give good results. Kombustor or similar stress tests might not, or might push the GPU to throttle from thermals or from limitations in the driver profiles, unlike more regular workloads, though for stress testing and overall stability they could still suffice.

    Some users hit problems at 1760 effective; others can push into 1800, seemingly with only ECC lowering performance but not affecting stability, and in some cases Power Play modding can push above 950 MHz, which might be unstable, though even higher might be stable. It seemingly comes down to differences between Samsung and Micron memory modules, timing differences, and whatever controller AMD has for the GDDR6 VRAM, although it should work for stock configurations. It can't be lowered further, and overclocking, even after 19.8.x improved stability, can still be very hit & miss, though in turn the GPU core clocks can be more effective. :)

    5700 limits, and some models also having less effective cooling compared to the 5700 XT, can also factor in, but I don't know the VRAM differences short of the cooler differences a few GPUs have, which is bad for how sensitive the modules are. Plus it's also reporting and reacting to junction temperature values upwards of 80, sometimes 90 degrees Celsius, and while that might be stable, the modules could be running loosened timings or throttling to lower speeds depending on how the GPU handles this.


    EDIT: Well, my own experiences at least: early drivers could barely increase it at all without instability, and later ones, well, change things, but not by much, likely because ECC is correcting errors while taking a performance-penalty trade-off to remain stable, and this can vary with just how much the test application or game actually pushes the memory components on the GPU.
    (Driver bugs and issues can also factor in, to where some titles are just prone to crashing while the settings might actually be stable themselves; it's a bit finicky finding what works, and how, and for which API and which specific games or software and such.)
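    That stepping approach as a quick sketch; run_benchmark() is a hypothetical stand-in for whatever test you trust (BF5 DX12, OCCT, any D3D12/Vulkan benchmark), simulated here just so the sketch runs:
    [CODE]
    def run_benchmark(vram_mhz):
        # Hypothetical stand-in: set the clock, run your test, return a score.
        # Simulated curve here; replace with a real measurement.
        return float(vram_mhz) if vram_mhz <= 930 else 930.0 - (vram_mhz - 930)

    def find_vram_clock(start_mhz=875, limit_mhz=950, step_mhz=5):
        best_mhz, best_score = start_mhz, run_benchmark(start_mhz)
        for mhz in range(start_mhz + step_mhz, limit_mhz + 1, step_mhz):
            score = run_benchmark(mhz)
            if score < best_score:  # ECC (or instability) eating the gains: back off
                break
            best_mhz, best_score = mhz, score
        return best_mhz

    print(find_vram_clock())  # 930 with the simulated curve above
    [/CODE]
    The point being that the score dropping, not a crash, is the stop signal, since error correction masks instability before it breaks anything.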
     
  17. MSIMAX

    MSIMAX Active Member

    Messages:
    54
    Likes Received:
    10
    GPU:
    3x 290x cfx
    Not sure how fruitful this will be.

    [IMG]
     
  18. MaCk0y

    MaCk0y Master Guru

    Messages:
    629
    Likes Received:
    189
    GPU:
    GB 5700XT Gaming OC
    I use the OCCT memtest for the GPU memory. Anything over 925 MHz and I get errors. For the core, I use the 3D test with error detection.
     
    MSIMAX likes this.
  19. MerolaC

    MerolaC Ancient Guru

    Messages:
    3,226
    Likes Received:
    192
    GPU:
    RX VEGA 56
     
    Passus and OnnA like this.
  20. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    16,041
    Likes Received:
    1,943
    GPU:
    AMD S. 5700XT Pulse
    Experimented with OCCT for error reporting, and with Wattman plus Overdrive, finding a better setup than the initial results I'd mostly been using.

    Memory doesn't report any errors whether at default clocks or the full 950, so for testing I lowered it to 925 and will be watching actual results over a few days instead. Early benchmark results in games don't show anything either, but if it can be proven stable it's an increase without drawbacks, although there is a slight increase in memory temps, so that needs further monitoring.

    Clock speeds start reporting errors around 0.950 V and 1900 MHz set in Wattman, minus roughly 50 MHz, so 1850 effective.
    Settled for 1950 at 1.0 V, which will need some monitoring but is doing OK so far.

    While working out the boost clock speed: Wattman will always keep around 30 - 50 MHz below the target clocks, so knowing that, it was mostly just setting an extra 50 MHz and testing around the effective clocks with some additional voltage supplied (rounded up nicely to 1.0 V) to ensure stability in case it boosts a bit higher in some situations.

    The power slider is a funny one: at -1% it drops to 1680 - 1700 MHz, yet at 0% it can reach just shy of 1880 MHz. Something to keep testing to see why.
    Hitting above 1950 (2000 effective or higher, because again Wattman does its thing and targets just shy of 50 MHz below for reasons unknown) starts requiring the power slider pushed upwards of 20 - 30% or so for 2050 MHz / 2 GHz effective.

    Above 1900 MHz, at least with OCCT, there seemed to be very diminishing returns; below it, performance mostly just drops once speeds hit 1700 or lower, so instead of a 1 - 2 FPS difference it'll be 10 - 15 FPS. (Again, that -1% on the Wattman power limit does much the same.) It does, however, make a big difference to how the clocks and the power slider work together for the watts required and the temperature.
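    As a rough sketch, assuming the slider is a plain percentage multiplier on the board power target (my assumption, not confirmed):
    [CODE]
    def power_target(base_watts, slider_pct):
        # Assumed relation: slider scales the board power target linearly.
        return base_watts * (1 + slider_pct / 100)

    # 180 W = Pulse silent BIOS, 194 W = default/OC BIOS (values from this post)
    for slider in (-1, 0, 20, 50):
        print(slider, round(power_target(180, slider), 1),
              round(power_target(194, slider), 1))
    # -1% is only 178.2 W on a 180 W base, yet the card drops to ~1680-1700 MHz,
    # so the throttle at negative values clearly isn't the wattage alone.
    [/CODE]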

    So from near-70s core and 95 junction temps it can drop to almost 50 core and just shy of 80 junction, though lowering speeds too much affects both benchmark and actual game performance much more drastically.
    Thus 1950 set and 1900 effective, with some testing for stability, and just short of the 95 junction temperature at 92 degrees. (72 - 75 core/edge temps.)


    Above 2 GHz I would probably need a triple-fan cooler, and either it has a higher power limit (from 180 W to 220 W, for example) that the power slider can then increase further, or it has to be soft-modded to keep it effective; it might even need the full 50% for this.
    In-game testing and benchmarking both show only around a 3 - 5 FPS difference at most, so sure, it's a bit of a loss, but on balance it's running nicely, though another 2 - 4 FPS could lower power draw and temps by another big chunk, so that might become a second profile for testing and less demanding titles. :)

    Water cooling, a modded power limit, and seeing if 2.1 or higher could work might be a thing too, but that's probably better for the Nitro models and others, or well, the reference GPU with its over-engineered caps and whatnot. :D
    (Although without the blower fan, because otherwise it'll likely hit that 110 Celsius junction temp limit pretty easily.)


    Short of testing memory, which was the primary intent, it was interesting to see just how the power slider works and where it starts to matter depending on the GPU's stock power draw, from 180 W (silent BIOS on the Pulse here) to 194 W (default/OC BIOS here) or beyond (220 W for the Nitro variant, for example). Plus the stress test works well for temperature and stability testing, which a few days of overall usage will confirm.

    It looks like some of the voltage values lower when the power slider is reduced, which might be why the GPU throttles so hard going from 0% to just -1%, but it could be the BIOS or the drivers or other factors as well. Either way, 0% and a lower boost clock might work better than a negative power value, even if the GPU isn't requiring anywhere near 170 W or lower instead of the 180 W default power draw target.

    The memory has ECC too, so game testing, overall general usage and other performance tests will also be useful, as if this does kick in it should reduce performance somewhat, though my own tests so far show no difference and barely any registered gains. So if it's stable it stays; if not, well, I don't seem to be losing much at stock speeds, at least in the currently tested titles.
    (That, or it's like the Fury GPU, where it's hampered by a somewhat lower core clock speed, though above 1900 the current performance returns were kinda low anyway, but again, different cards and models, and well, different results.)

    It might be possible to "break out" of the current performance limitations too via soft modding and just aiming for even higher values if stable and error-free. A triple-fan cooler is probably required, with good or slightly lower ambient and case temperatures, short of aiming for water cooling, but a power limit of 100% to 150% instead of "just" 50%, targeting 2150 core clocks and 960 or higher memory, might be a different show, assuming the voltage requirements don't climb too high.
    (1150 worked here for 2 GHz, but 2100 might need the full 1200 already, so pushing into unsupported 1250 to 1300 might have adverse effects, or it could just work; different cards, different results after all.)


    Well, if nothing else, I did learn the power slider gets funny, at least with the current drivers, when trying to use just it to reduce clock speeds. It works pretty well from 0 to 20% or so when adjusting around 1850 - 1950, but lower, yeah, it drops by more than expected from just a single percent into the negative. (And not much more after that, but I only tested a few lower values, admittedly, hovering around 1600 - 1650 MHz at the lowest for core clocks.)




    EDIT: So instead of 890 MHz memory it's now 925, or 1850 effective, a bit shy of the 1900 max, but mostly for testing; if anything shows up, lowering it back under 900 or even to stock seems trivial for how little it affected what I tested, at least. (If it works, it works; if not, it doesn't seem like a major performance difference, but there are probably situations where this matters much more too.)

    And from 1850, due to, well, Wattman being conservative, to 1900 or 1950, but then Wattman is doing its thing, so a bit of extra voltage after testing, yet still a good -200 down from the default 1200, lowering temperature readings nicely and power draw somewhat, although that can be offset entirely by a higher power limit slider setting if that's the intent rather than lower temperature from less GPU core voltage.

    Even bigger temperature reductions with even less voltage and overall power draw are possible, but as expected, after a point it's not a 3 - 5% performance difference but 10 - 15% or more, and in more demanding titles or programs this does matter, though for less demanding stuff, well, I now have a second profile to toy around with for that, ha ha. :D


    What else? Eh, the fan speed does its thing, often boosting up to high values and gradually throttling down, roughly keeping to the specified settings but often exceeding them with this particular behavior.

    It also goes all out at 95 degrees junction, and the BIOS targets for the GPU don't affect this behavior, so if that gets exceeded the fan will ramp up more, bring the junction temp down, and then quiet down a bit.

    Targeting a core (surface/edge) temp of 70 - 75 Celsius and a junction temp of anything from 85 to 94 Celsius works well to avoid the GPU fan boosting up and down time and time again. Different GPU models, overall ambient temps and case temperature will set the difference between the two readings, but in my case it's around 18 degrees Celsius, a bit lower in less stressful situations, for which this test also works well to see what the limits are. :D
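    A tiny sketch of that reasoning with this card's numbers (the ~18 C delta and the 95 C ramp point are my observations, not fixed values):
    [CODE]
    JUNCTION_RAMP_C = 95      # observed "fan goes all out" junction threshold
    EDGE_TO_JUNCTION_C = 18   # delta on this card; varies per model and airflow

    def max_edge_target(margin_c=1.0):
        """Highest edge/core target that should keep junction under the ramp."""
        return JUNCTION_RAMP_C - EDGE_TO_JUNCTION_C - margin_c

    print(max_edge_target())  # 76.0 -> consistent with the 70-75 C range above
    [/CODE]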


    Think that's about it for now; still just starting to get a bit more familiar with this. Well, a few more days will show how it holds up and whatever Wattman is doing, and of course there's re-testing on newer drivers, particularly if the GPU clock speed target changes.

    I added some extra voltage after confirming no errors, while also testing with a higher core clock speed, further confirming no errors for that too, but more could be required if the GPU has a little spike moment or some game gets funny with how the GPU core is doing, ha ha.

    I expect it to require at least some tuning. Overall it's a balanced value, mostly somewhat reducing voltage for core temperature reductions; it could be lowered further, but that starts to affect overall framerate by more noticeable numbers, both in this test and in the other game tests that were done for comparison and stability testing on this first go at it.




    EDIT: A week or so of the system keeping stable should do it for confirming most of the tweaked values; if not, well, keep testing and tweaking. :)
    (Monitoring the power slider and other GPU voltage values might confirm, or at least hint at, why this is a thing too, but it's a lower priority for now, though a fun little discovery or something like that.)

    So, Division 2 on settings a bit too high for the GPU: from 56 to 54 FPS, or a ~3% loss.
    Division 2 again, which served as the primary normal game test, on INI-edited settings above the in-game limits: 42 to 41 FPS, or a ~2% loss.

    More game testing is needed, but it's upwards of a -5% loss or so while keeping the junction temp, at full blast, below 95 Celsius and the Wattman "Fan ON!" target limit, with the surface sensor around 73-some Celsius.

    Hoping memory junction stays around 85 degrees max (I think 95 C is the threshold here, compared to the 110 C for the GPU core, but lower is better), but GPU-Z will show that if a few hours of in-game testing go well, and wherever it edges out.
    (Optimism, I suppose, since the Nitro has the big block on the RAM and the Pulse has a not-quite-as-big block on these, so there is a difference in effective cooling, plus the third fan and airflow amount and such things; well, I'll know soon how it's looking for these sensors.)

    Otherwise, well, if it's not doing much for performance so far, it's going back down a bit, but overall we're changing 875 MHz to a 950 MHz max, or 925 MHz here, though it does still generate more heat even if the overall increase is seemingly on the low side at something like only +50 MHz, but it's a thing.
    (DDR, and GDDR, and GDDR6 at that, and who knows what the memory controller and timings are really doing, plus Samsung or Micron modules, with this likely being Micron clocking higher, but will it hold? Well, guess I'll find out. :p )



    And lots of text, but yeah, I tend to be pretty wordy trying to be a bit more thorough, although it's a bit messy because I kinda write like I'm taking short notes. (Well, "short" of sorts!)

    Well it's a thing I guess. (Lots of guessing here!)



    Tinkering with clock speeds, got results, power limit slider gets funny at negative values.

    There, I summarized it. :D
    (Reads like a short telephone SMS now, but I guess it's better than modern leet-tweet language, heh. <- Sign of getting old, I suppose. :) )



    EDIT: Some testing later. One hour of Division 2 and three hours of Assassin's Creed Unity.

    GPU clock speed 1850 - 1900 MHz. (Varying, often around a 1880 MHz average.)
    Memory clock speed 1850 MHz effective. (925 MHz.)

    GPU temperature sensor 60 Celsius.
    GPU junction temperature sensor 75 Celsius.
    Memory temperature sensor (junction too, I believe) 70 Celsius. (Not bad for what I was expecting.)
    VRMs 55 Celsius.

    Fan speed ... 70%. (Wattman should be at 50% if the actual settings were followed; 60% for 80 Celsius and 70% for 90 Celsius.)

    GPU voltage 1.050 V. (So a bit of a difference from what's set then; not 1.0 V, but well, it works.)
    Memory voltage 0.850 V. (What? I expected much more here, around 1.25 V to 1.35 V actually.)
    (Error? Sensor issue? Driver quirk? A reading error, I'd hope; well, it kinda has to be, it wouldn't be stable otherwise.)

    GPU power draw 160 W. (Expected more here; guess it does also fluctuate to lower values then, though total board power would add another 20, maybe 30, so that does close in on the 195 if that's how it's working.)


    EDIT: Yeah, it's just how the VRAM voltage is reported, it seems, so it's nothing, but still a bit strange, as the default should be around 1.35 V under load; this is probably the idle voltage being shown. :)
    (Though there are also varying states, as the clocks can fluctuate a bit, so at least some mid value too.)
     
    Last edited: Jan 12, 2020
