Interrupt affinity policies, latency spikes and weird behaviour

Discussion in 'Videocards - NVIDIA GeForce Drivers Section' started by klunka, Oct 16, 2020.

  1. Smough

    Smough Master Guru

    Messages:
    806
    Likes Received:
    186
    GPU:
    GTX 1660
    No lol. The program reads all cores and threads as cores only. I have 8 "cores", even if in reality I only have 4 physical ones and 4 virtual ones. The program reads Core 0 to Core 7 in my case. Core 0 and Core 1 would be Core 1 and its Thread 1. It can be a bit confusing, but it's made that way. Shrug.
     
  2. mbk1969

    mbk1969 Ancient Guru

    Messages:
    12,258
    Likes Received:
    10,339
    GPU:
    GF RTX 3060TI
    No, it is not made this way. A physical Core with HT = two virtual Cores. Not "one physical and one virtual". Both HT cores are equal, symmetrical, identical (from the OS's point of view).

    Using "Thread" is just less confusing for some people, but it is not official term, as I take it.

    PS MS uses the term "logical CPU", btw, which can be either a physical or an SMT core.
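
    If you want to see how Windows itself counts these on your own machine, a quick sketch with the standard CIM classes (nothing here is specific to any particular CPU):

    Code:
    # Physical cores vs logical processors, as Windows reports them per package
    Get-CimInstance Win32_Processor |
        Select-Object Name, NumberOfCores, NumberOfLogicalProcessors

    # Total logical processor count across all packages
    (Get-CimInstance Win32_ComputerSystem).NumberOfLogicalProcessors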
     
    Last edited: Nov 7, 2020
    akbaar and Smough like this.
  3. Marctraider

    Marctraider Member

    Messages:
    17
    Likes Received:
    6
    GPU:
    670GTX
    I've fiddled around with all kinds of things related to 'MSI mode' and interrupt affinities, and it eventually came down to the following:

    - Forcing MSI mode only on devices that actually have the MSISupported registry key by default (this does not include Nvidia desktop cards; they don't even have the key present by default) had either a beneficial effect or, at the very most, no negative effect. (The stock Windows HD Audio driver on various platforms here is not in MSI mode by default yet has the key present, and I prefer to use this basic driver. It looks to be mostly a compatibility thing. I've fixed audio glitching with this by putting the HD Audio to MSI, as it looked like it shared a line-based IRQ with the Nvidia card. A sketch of the registry location follows a few paragraphs down.)

    Either way, forcing MSI on the GPU, I've never seen any beneficial gains in input latency, frametime pacing or even in LatencyMon, and it could actually have negative effects.

    Some exceptions here: forcing a desktop Nvidia GPU to MSI does have some 'benefits', but this mostly comes down to hypervisor stuff and GPU passthrough in VMs.

    (My laptop with an Nvidia GPU has MSI enabled by default, and I'll keep it that way as well. Maybe this has to do with the fact that most laptops use Optimus technology.)
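
    For reference, the MSISupported value I mentioned sits under the device's hardware key in the registry; a rough sketch of checking and setting it (the device instance path below is a placeholder - look up your actual HD Audio / PCI device in Device Manager, and run this elevated):

    Code:
    # Placeholder device instance path - substitute the real one from Device Manager
    # (Details tab -> "Device instance path")
    $dev = 'HKLM:\SYSTEM\CurrentControlSet\Enum\PCI\VEN_XXXX&DEV_XXXX&SUBSYS_XXXXXXXX&REV_XX\X&XXXXXXXX&X&XX'
    $key = Join-Path $dev 'Device Parameters\Interrupt Management\MessageSignaledInterruptProperties'

    # Check whether the driver exposes MSISupported at all
    Get-ItemProperty -Path $key -Name MSISupported -ErrorAction SilentlyContinue

    # Flip the device to MSI mode (takes effect after a reboot) - per the note above,
    # probably only worth doing if the value already existed
    Set-ItemProperty -Path $key -Name MSISupported -Value 1 -Type DWord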

    - Interrupt affinity tweaking only shows real benefit (in LatMon, ISR/DPC) when moving the GPU (Nvidia) to anything but core 0. (Other devices are simply not complex enough to show any significant results.) However, like mbk1969 said: even though the results show a beneficial gain in something like LatMon, this might not hold true in practical scenarios. Who knows what this affects? It might worsen input latency slightly or generate other oddities. The same goes for fiddling around with XHCI affinity and the like. I've had instances where fiddling around with this introduced some pretty bad mouse 'smoothing/lag' in games. (The actual registry values the affinity tool writes are sketched a bit further down.)

    There is one exception here, which in my tests (and which can also be found in various sources on the web) is that RSS (with MSI-X support) on cores other than core 0, set via PowerShell commands, gives beneficial gains in both jitter and latency (tested from client to router). For Intel network cards this often applies to the I210-T and some other cards that support RSS. It's also mostly beneficial to keep it at 2 queues and limit MSI messages to 2. Anyway, most of these tweaks are adapter/PowerShell cmdlet based, not done through the Interrupt Affinity Tool and its corresponding registry locations (it's basically just a tool to simplify registry modifications).
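
    The RSS side is just the built-in NetAdapter cmdlets, for what it's worth; a rough sketch of the kind of command meant here (the adapter name 'Ethernet' is a placeholder, and which settings the NIC actually honours depends on the driver):

    Code:
    # Show the current RSS configuration for the adapter
    Get-NetAdapterRss -Name 'Ethernet'

    # Move RSS off core 0 and keep it to 2 queues ('Ethernet' is a placeholder name)
    Set-NetAdapterRss -Name 'Ethernet' `
        -BaseProcessorNumber 2 `
        -NumberOfReceiveQueues 2 `
        -Enabled $true

    # The "MSI message limit" is not a cmdlet setting; it lives back in the registry under
    # ...\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties\MessageNumberLimit (DWORD)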

    Networking is obviously much easier to measure than what these tweaks do to frametime pacing / latency, which would need some seriously professional equipment.

    Another thing about spreading messages across cores and the various other options in the IRQ tool: these work only on devices with MSI-X support.
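
    For completeness, what the interrupt affinity tool writes (and what you'd edit by hand) is the documented Affinity Policy key under the same Device Parameters branch; a sketch, with the device path again a placeholder:

    Code:
    # Placeholder GPU device instance path - substitute your actual one
    $dev = 'HKLM:\SYSTEM\CurrentControlSet\Enum\PCI\VEN_10DE&DEV_XXXX&SUBSYS_XXXXXXXX&REV_XX\X&XXXXXXXX&X&XXXX'
    $key = Join-Path $dev 'Device Parameters\Interrupt Management\Affinity Policy'

    New-Item -Path $key -Force | Out-Null

    # DevicePolicy 4 = IrqPolicySpecifiedProcessors (use the mask below)
    # DevicePolicy 5 = IrqPolicySpreadMessagesAcrossAllProcessors (the MSI-X-only option mentioned above)
    Set-ItemProperty -Path $key -Name DevicePolicy -Value 4 -Type DWord

    # AssignmentSetOverride is a binary processor mask; 0x0E = logical processors 1-3,
    # i.e. everything except core 0 on a small example system
    Set-ItemProperty -Path $key -Name AssignmentSetOverride -Value ([byte[]](0x0E)) -Type Binary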

    And for CPUs with HT: on Intel-based systems, afaik, the first logical cores are all non-HT, and the HT siblings come after the physical cores (if you care about process/IRQ affinity assignment).

    For AMD it's 0 4 1 5 2 6 3 7 (on an 8-core, for example).
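
    And if the point of caring about that ordering is process pinning, the affinity mask is the only thing you actually set; a small sketch (the process name is a placeholder, and it's worth confirming the real sibling layout with something like Sysinternals Coreinfo before picking bits):

    Code:
    # Bitmask over logical processors; this example assumes the "physical cores first"
    # layout described above (LPs 0-3 physical, 4-7 their HT siblings)
    $mask = 0x0F    # bits 0-3 -> logical processors 0,1,2,3

    # 'game' is a placeholder process name
    Get-Process -Name 'game' | ForEach-Object {
        $_.ProcessorAffinity = [IntPtr]$mask
    }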
     
    Last edited: Nov 9, 2020
  4. Astyanax

    Astyanax Ancient Guru

    Messages:
    13,226
    Likes Received:
    5,186
    GPU:
    GTX 1080ti
    You're mistaken if you think you'll get reduced input latency; it's never been about input.
     
    Smough likes this.

  5. Smough

    Smough Master Guru

    Messages:
    806
    Likes Received:
    186
    GPU:
    GTX 1660
    Don't see how bringing latency down as much as possible could be bad. Even if LatencyMon is not entirely accurate, since at times the program's own driver spikes (absolute lol), in general lower is always better. Keep in mind games make latency spike, really hard. Imagine if your average latency is already high and then you play a game. This is why some people report audio crackles when playing a game, due to high system latency.
     
  6. Marctraider

    Marctraider Member

    Messages:
    17
    Likes Received:
    6
    GPU:
    670GTX
    "I've fixed audio glitching with this by putting the HD Audio to MSI"

    Quoting myself. Again, if you have issues in these areas, then sure, trying to fix them with these options could give positive results.
     
