Windows: Line-Based vs. Message Signaled-Based Interrupts. MSI tool.

Discussion in 'Operating Systems' started by mbk1969, May 7, 2013.

  1. mbk1969

    mbk1969 Ancient Guru

    Messages:
    15,505
    Likes Received:
    13,526
    GPU:
    GF RTX 4070
    @ELKR

    So what did you use to switch the devices to MSI mode? A PS script? I just removed it from the OP to avoid automatic switching...
     
  2. Matrixqw

    Matrixqw Guest

    Messages:
    9
    Likes Received:
    0
    GPU:
    GTX 260 1GB
    I installed Windows again and MSI mode is now working OK.
    Maybe it was some driver issue.
     
  3. mh0001

    mh0001 Guest

    Messages:
    4
    Likes Received:
    2
    GPU:
    EVGA RTX 2080 XC
    Hi!

    I just learned about this MSI mode stuff and wanted to check which mode is enabled for my hardware.
    I checked with Device Manager first and found that all PCI devices except my Nvidia GPU and its HD Audio device already have a negative IRQ by default.

    So I used your tool to switch the remaining two devices (GeForce RTX 2080, High Definition Audio Controller) to MSI mode as well, which has worked fine so far.

    The only thing that looked strange to me, and what I would like to ask you about:
    All PCIe controllers and PCI Express root ports have negative IRQs, but in your utility the "msi" checkbox for these devices is not set, and "supported modes" is just empty.

    Is this normal and should it be like that? Or should I try to tick the msi boxes for these devices as well? From my understanding, the negative IRQ is already a clear indication that they are in MSI mode, so I assume this is fine?

    screenshot: https://imgur.com/a/gqFjrB9
     
  4. Astyanax

    Astyanax Ancient Guru

    Messages:
    16,996
    Likes Received:
    7,337
    GPU:
    GTX 1080ti
    PCI-E controllers always operate natively with MSIs; in fact, all PCI-E hardware does, even when Windows reports legacy interrupts - there's just a translation layer involved at the software level.
     

  5. mbk1969

    mbk1969 Ancient Guru

    Messages:
    15,505
    Likes Received:
    13,526
    GPU:
    GF RTX 4070
    This means that they work in MSI mode without the corresponding registry values. A negative IRQ number is the only evidence of MSI mode, so if you see a negative IRQ number, do not bother with the checkbox.

    PS The utility reads the checkbox states from the registry, so a checkbox is checked only if the registry value (for that device) says "MSI mode ON".
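
    If anyone wants to cross-check what the registry actually contains, here is a minimal read-only sketch - just an illustration in Python 3 with the standard winreg module, not how the utility itself is written. It walks the PCI branch of the Enum key and reports which devices have an MSISupported value at all; remember that a missing value does not rule out MSI mode, the negative IRQ is the real indicator. Run it elevated if you get permission errors.

    Code:
import winreg

MSI_SUBKEY = r"Device Parameters\Interrupt Management\MessageSignaledInterruptProperties"

def subkeys(key):
    # Enumerate child key names until EnumKey runs out of indices.
    i = 0
    while True:
        try:
            yield winreg.EnumKey(key, i)
        except OSError:
            return
        i += 1

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SYSTEM\CurrentControlSet\Enum\PCI") as pci:
    for dev in subkeys(pci):
        with winreg.OpenKey(pci, dev) as dev_key:
            for inst in subkeys(dev_key):
                try:
                    with winreg.OpenKey(dev_key, inst + "\\" + MSI_SUBKEY) as msi_key:
                        value, _ = winreg.QueryValueEx(msi_key, "MSISupported")
                        print(f"PCI\\{dev}\\{inst}: MSISupported = {value}")
                except FileNotFoundError:
                    # No registry value: the device may still run in MSI mode
                    # (a negative IRQ in Device Manager is the real indicator).
                    print(f"PCI\\{dev}\\{inst}: no MSI registry value")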
     
  6. mh0001

    mh0001 Guest

    Messages:
    4
    Likes Received:
    2
    GPU:
    EVGA RTX 2080 XC
    Thanks for the explanation!

    @Astyanax: So does that mean it makes no difference latency-wise whether the GPU (being PCI-E hardware) is switched to MSI mode or not? Or can this translation layer you mentioned still have bad effects? I remember that with GeForce driver 418.91 they even recommended switching to MSI mode as a temporary workaround to mitigate some problems.

    So is it best practice to switch the Nvidia GPU to MSI mode when possible, or should it just be left as it is?
     
  7. Astyanax

    Astyanax Ancient Guru

    Messages:
    16,996
    Likes Received:
    7,337
    GPU:
    GTX 1080ti
    There is a hit at high CPU loads when doing software-layer translations from IRQs to MSIs.
     
  8. mbk1969

    mbk1969 Ancient Guru

    Messages:
    15,505
    Likes Received:
    13,526
    GPU:
    GF RTX 4070
    @mh0001

    Here is an extract from Intel's paper linked in the OP:

    Legacy XT-PIC Interrupts
    Legacy XT-PIC interrupts comprise the oldest form of interrupt delivery supported by a PCI device. XT-PIC interrupts use a pair of Intel® 8259 programmable interrupt controllers (PIC). Each Intel® 8259 PIC supports only eight interrupts. By daisy chaining two 8259 PICs, a system could have 16 interrupts, 0 – 15.
    When a connected device needs servicing by the CPU, it drives the signal on the interrupt pin to which it is connected. The Intel® 8259 PIC in turn drives the interrupt line into the CPU. From the Intel® 8259 PIC, the OS is able to determine what interrupt is pending. The CPU masks that interrupt and begins running the ISR associated with it. The ISR will check with the device with which it is associated for a pending interrupt. If the device has a pending interrupt, then the ISR will clear the Interrupt Request (IRQ) pending and begin servicing the device. Once the ISR has completed servicing the device, it will schedule a tasklet if more processing is needed and return control back to the OS, indicating that it handled an interrupt. Once the OS has serviced the interrupt, it will unmask the interrupt from the Intel® 8259 PIC and run any tasklet which has been scheduled.

    IO-APIC Interrupts
    Intel developed the multiprocessor specification in 1994, which introduced the concept of a Local-APIC (Advanced PIC) in the CPU and IO-APICs connected to devices. This architecture addressed many of the limitations of the older XT-PIC architecture. The most apparent is the support for multiple CPUs. Additionally, each IO-APIC (82093) has 24 interrupt lines and allows the priority of each interrupt to be set independently. The programming model of the IO-APIC is greatly simplified. The IO-APIC writes an interrupt vector to the Local-APIC, and, as a result, the OS does not have to interact with the IO-APIC until it sends the end of interrupt notification. The IO-APIC provides backwards compatibility with the older XT-PIC model. As a result, the lower 16 interrupts are usually dedicated to their assignments under the XT-PIC model. This assignment of interrupts provides only eight additional interrupts, which forces sharing.
    The following is the sequence for IO-APIC delivery and servicing:
    • A device needing servicing from the CPU drives the interrupt line into the IO-APIC associated with it.
    • The IO-APIC writes the interrupt vector associated with its driven interrupt line into the Local APIC of the CPU.
    • The interrupted CPU begins running the ISRs associated with the interrupt vector it received.
    • Each ISR for a shared interrupt is run to find the device needing service.
    • Each device has its IRQ pending bit checked, and the requesting device has its bit cleared.

    Message Signaled Interrupts
    MSI was introduced in revision 2.2 of the PCI spec in 1999 as an optional component. However, with the introduction of the PCIe specification in 2004, implementation of MSI became mandatory from a hardware standpoint. Unfortunately, software support in mainstream operating systems was slow in coming, forcing many MSI-capable PCIe* devices to operate in legacy mode. The MSI model eliminates the devices’ need to use the IO-APIC, allowing every device to write directly to the CPU’s Local-APIC. The MSI model supports 224 interrupts, and, with this high number of interrupts, IRQ sharing is no longer allowed.
    The following is the sequence for MSI delivery and servicing:
    • A device needing servicing from the CPU generates an MSI, writing the interrupt vector directly into the Local-APIC of the CPU servicing it.
    • The interrupted CPU begins running the ISR associated with the interrupt vector it received. The device is serviced without any need to check and clear an IRQ pending bit.

    ********

    So according to this, any PCI or PCI Express device can work in legacy mode - avoiding message signals and using the legacy interrupt facilities.

    And here is Intel's conclusion, made after testing the legacy vs MSI modes:

    MSI provides a significant reduction in interrupt latency over the previous two generations of Intel interrupt architecture. The benefits extend beyond a reduction in interrupt latency to a reduction in CPU utilization by eliminating the time spent by the CPU determining what interrupt needs servicing (by polling devices and masking interrupt controllers). Embedded developers considering Intel® architecture for a solution or currently developing one should fully adopt the MSI model for interrupt delivery and servicing to ensure not only the best IO performance for their solution, but also the most CPU headroom for user-applications and other interrupts. In summary, MSI provides the following key benefits to the embedded developer over previous interrupt architectures:
    • Increased number of interrupts to support more devices and peripherals.
    • Dramatic reduction in the delay from when a device needs servicing to when the CPU begins servicing the device.
    • Simplified board design: no need for an interrupt controller (IOAPIC/PIC).
    • Flexible interrupt priority assignment scheme.
    • Interrupt load balancing across CPUs. Devices can direct interrupts to specific cores to leverage common caches and to ensure equal workloads on all CPUs.
     
    Last edited: May 30, 2019
    386SX, akbaar, mh0001 and 2 others like this.
  9. mbk1969

    mbk1969 Ancient Guru

    Messages:
    15,505
    Likes Received:
    13,526
    GPU:
    GF RTX 4070
    No, they don't. According to the PCI Express 3.0 specification:
    You can see that the descriptions of the legacy and MSI modes are completely different. The specification doesn't say that legacy mode is MSI mode under the hood.

    PS And legacy mode is emulated at the hardware level - "PCI Express provides a PCI INTx emulation mechanism to signal interrupts to the system interrupt controller (typically part of the Root Complex)".
     
    MoKiChU likes this.
  10. Astyanax

    Astyanax Ancient Guru

    Messages:
    16,996
    Likes Received:
    7,337
    GPU:
    GTX 1080ti
    Yes, they do; we've gone over this before.

    Legacy interrupts are layered over MSIs.

    INTx virtualizes PCI physical hard-wired interrupt signals by using an in-band signaling mechanism.

    PCI Express devices must support both the legacy INTx and MSI modes, and legacy devices will encapsulate the INTx interrupt information inside a PCI Express Message transaction.

    The PCI-E 3 page you are citing is a brief overview of the controllers; it's not a physical implementation of the controller. Most people can't even get access to documents depicting the physical wiring and layout of the controller.

     
    Last edited: May 31, 2019

  11. mbk1969

    mbk1969 Ancient Guru

    Messages:
    15,505
    Likes Received:
    13,526
    GPU:
    GF RTX 4070
    No, they don't. Virtualization of the hard-wired interrupt signals happens at the hardware level and terminates at the interrupt controller. Namely, the (missing) dedicated interrupt wire is what gets virtualized. The OS has no clue how a legacy interrupt travels from the device to the CPU - over a hard wire or in some message. From the OS point of view, a PCI Express legacy interrupt is exactly the same as a legacy interrupt on the old PCI bus.

    MSI uses memory-write facilities while the legacy interrupt doesn't. They are different even by their descriptions in the document.

    PS On the Wiki we see the phrase "it uses special in-band messages to allow pin assertion or deassertion to be emulated", which means that the messages used to emulate legacy interrupts work differently from the messages used for MSI.

    PPS I would not argue if you had written "both MSI and legacy modes use in-band messages". But you wrote "legacy mode is a layer over MSI mode".
     
    Last edited: May 31, 2019
    386SX, akbaar, MoKiChU and 1 other person like this.
  12. X7007

    X7007 Ancient Guru

    Messages:
    1,871
    Likes Received:
    71
    GPU:
    ZOTAC 4090 EXT AMP
    Something new I've found out.

    Now with the latest 1903 I can use a max limit of 1024 on my server network card. But the Solarflare manual says to be very careful and not to change it without consulting them first.

    I tried setting it to 1024, but I have had some randomly weird Windows restarts ever since I changed it today. I also changed the switch mode from Default to SR-IOV. One of them, or both, I think, causes these weird restarts. It happens when I use Windows Sandbox, meaning more resources were in use.

    [​IMG]
     
    Last edited: Jun 22, 2019
  13. mbk1969

    mbk1969 Ancient Guru

    Messages:
    15,505
    Likes Received:
    13,526
    GPU:
    GF RTX 4070
    @X7007
    The "max limit" column in the MSI mode utility comes from some WMI class. It has nothing to do with the actual value of the limit (the "limit" column).

    Also, if in Device Manager you do not see multiple entries for the Solarflare adapter with different IRQs (in the "Resources by type" view), then the adapter does not use multiple MSIs.
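
    If you want to see what is actually configured for one specific adapter, here is a small read-only sketch (just an illustration in Python 3 with the standard winreg module; the device instance path below is a made-up example, take the real one from the "Device instance path" property in Device Manager). It reads MSISupported and the MessageNumberLimit value from the same Interrupt Management registry key:

    Code:
import winreg

# Hypothetical example path; replace it with the real "Device instance path"
# shown in Device Manager for the adapter you care about.
INSTANCE = r"PCI\VEN_1924&DEV_0903\0123456789ABCDEF"

KEY = (r"SYSTEM\CurrentControlSet\Enum" + "\\" + INSTANCE +
       r"\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties")

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
        for name in ("MSISupported", "MessageNumberLimit"):
            try:
                value, _ = winreg.QueryValueEx(k, name)
                print(f"{name} = {value}")
            except FileNotFoundError:
                print(f"{name} is not set")
except FileNotFoundError:
    print("No MessageSignaledInterruptProperties key for this device instance")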
     
    MoKiChU likes this.
  14. Hey mbk, as always thanks for updating things and making it more comfortable to tweak our systems. I know a girl from Google who is like a machine in FPS games; she really goes in depth when it comes to tweaking her PC - electricity, latency, etc. She actually sets MSI mode manually. I told her about your guide, but she is still doing it manually.

    What she mentioned is that, after setting a PCI device to MSI, it is important to remove the MessageNumberLimit value from the GPU - that will help a lot. She explained to me the reason for deleting that value, but I don't quite remember it; once she is connected I'll ask her.

    Another thing that seems to give a massive reduction in latency in LatencyMon is setting the CPU steering to 1 - it drops from 80 us to 7-11 us.


    Also, by the way mbk, is it OK to test the interrupt priority? Cheers.
     
  15. mbk1969

    mbk1969 Ancient Guru

    Messages:
    15,505
    Likes Received:
    13,526
    GPU:
    GF RTX 4070
    Of course it is OK to test Interrupt priority.
     

  16. This may sound weird, or probably "placebo", but I suddenly started to have ISR latency issues on my PS/2 keyboard with the i8042prt.sys driver. I went into the registry and saw that the PS/2 keyboard already has an Interrupt Management key, so I just created a MessageSignaledInterruptProperties key under it, added the DWORD (32-bit) value MSISupported = 1, and rebooted the PC - and suddenly I don't have the issues anymore. The driver was going up to 1200-1500 us with a red warning in LatencyMon and the ISR was around 2000; now it only reaches 9 us and the ISR stays under 300-600.

    May sound like placebo, but I confirmed it by rebooting the PC many times, opening games and doing fast typing tests, and it is no longer 1000+ us.
     
  17. mbk1969

    mbk1969 Ancient Guru

    Messages:
    15,505
    Likes Received:
    13,526
    GPU:
    GF RTX 4070
    @x58haze

    1. If the device is not connected to the PCIe bus, then it is a big "PLACEBO", because MSI mode is about the PCI/PCIe bus. (Launch Device Manager and switch the view from "Devices by type" to "Devices by connection".)
    2. If the device has no IRQ in the Resources tab of the device properties dialog (like the SM Bus Controller in the picture
    [IMG]
    ), then it is another big "PLACEBO", because MSI mode is about interrupts, and if you see no IRQ for a device then there is no interrupt at all. A small sketch of the first check is below.
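
    Here is a tiny sketch of the first check (just an illustration in Python 3 with the standard winreg module; the two device instance paths are made-up examples, take real ones from the "Device instance path" property in Device Manager). The enumerator at the start of the instance path tells you which bus enumerated the device, and only "PCI" devices are candidates for MSI mode, no matter what registry keys you create for the others:

    Code:
import winreg

MSI_SUBKEY = r"Device Parameters\Interrupt Management\MessageSignaledInterruptProperties"

def on_pci_bus(instance_path):
    # The enumerator (PCI, ACPI, USB, HID, ...) is the first segment of the path.
    return instance_path.split("\\", 1)[0].upper() == "PCI"

def has_msi_registry_entry(instance_path):
    key = r"SYSTEM\CurrentControlSet\Enum" + "\\" + instance_path + "\\" + MSI_SUBKEY
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key):
            return True
    except FileNotFoundError:
        return False

# Hypothetical examples: a PS/2 keyboard (ACPI enumerator) and a GPU (PCI enumerator).
for path in (r"ACPI\PNP0303\4&1234abcd&0",
             r"PCI\VEN_10DE&DEV_1E87\4&5678efab&0&0019"):
    print(path)
    print("  on PCI/PCIe bus:", on_pci_bus(path))
    print("  MSI registry entry present:", has_msi_registry_entry(path))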
     
    Last edited: Jun 27, 2019
    MoKiChU likes this.
  18. @mbk1969
    Thanks for your answer, but it seems the PS/2 keyboard does have an IRQ:
    [IMG]

    Also, I'm kind of tired of doing so many latency tests on my Windows. This motherboard that I bought back in 2017, a Fatal1ty AB350 Gaming K4, has a crackling sound - I can tell, because when Microsoft shows that notification prompt when I press "Run as administrator" and it asks yes or no, I can sometimes hear a pop.

    Also, I always keep an eye on LatencyMon to check latencies and ISR+DPC, and suddenly I'm getting issues with
    Storport.sys:
    [IMG]

    Here it is in more detail, the Storport.sys issue:
    [​IMG]


    I managed to mitigate the issues with the Nvidia drivers; they used to run at 1500 us. It seems this first-generation Ryzen 5 1600 has the segfault bug or something, even though I did an RMA telling AMD that my Ryzen has the bug, since it was one of the first Ryzen 5 1600 chips from before the June 2017 patch - I bought mine in February...

    To the point that I had to disable simultaneous multi-threading to reduce the Nvidia latency from 1500 to 700 us. Then I put the Nvidia drivers in MSI mode, and I also stripped many Nvidia features by unzipping the installer and keeping only the important files, and it now seems to run at low latencies, around 0.200 us, which is good - sometimes up to 0.400, but not like it used to be; no more 1500.


    But anyway, I'm just kind of tired of testing stuff - electricity and so on; I've been doing this since, I don't know, 2010? Also BIOS modding. Ugh, the visual lag is still there, and the audio pop seems to be this motherboard...

    Also, I can confirm I have an issue with the audio, because I ran "bcdedit /set usefirmwarepcisettings yes" through Command Prompt, and when Windows boots my audio gets disabled (red X) with a message about it lacking energy or something - I don't quite remember the message. I reported it to ASRock and they ignored me, lol; they don't offer an RMA. Ugh, I'm not going to buy a motherboard from that company again. x.x
     
  19. mbk1969

    mbk1969 Ancient Guru

    Messages:
    15,505
    Likes Received:
    13,526
    GPU:
    GF RTX 4070
    And the main thing: if the IRQ stayed positive after you created the MSI registry keys and values (and rebooted) for the PS/2 keyboard, then nothing has changed and the device still doesn't use MSI mode.
     
    MoKiChU likes this.
  20. Giglecald

    Giglecald Guest

    Messages:
    1
    Likes Received:
    1
    GPU:
    MSI 1080 Armor OC
    Hi! I was lurking around this thread and saw your post. I don't know if this would apply to you but I've had issues in the past with my 1600X and sound crackling caused by spikes from Storport.sys.

    In my case the AMD SATA drivers were the issue; I switched them to the standard AHCI SATA driver and the spikes were gone.
     
    Deleted member 268800 likes this.
