I spent a really long time getting this working recently, so I figured I'd document what I did to get GPU passthrough working on my laptop. The steps might be a bit different on other distros given that I am using Proxmox, but the broad strokes should apply. Bear in mind, this guide is written around a Windows 11 virtual machine; certain steps may be different or unnecessary for Linux-based virtual machines.
First, why might you want to do this? The most obvious reason is that emulated graphics in virtual machines are slow, so passing through a real GPU improves graphics performance considerably. You might also want to use the GPU for a task like hardware transcoding for Plex, as a render host, or for AI workloads that rely on a GPU. Alternatively, you may just want a virtual machine you can host Steam on or something like that (bear in mind, some games and applications will not run under virtual machines, or will not run over Remote Desktop).
0. Enable virtualization-specific settings in the BIOS, such as Intel VT-x and VT-d or AMD-V and AMD IOMMU, and disable Secure Boot (after installing your OS of choice if it requires UEFI)
1. Create a virtual machine
- BIOS should be OVMF (UEFI)
- Machine type should be q35
- SCSI Controller should be VirtIO SCSI or VirtIO SCSI single; others may work, these are just the ones I have tested
- Display should be VirtIO-GPU (virtio); other display emulators will not work with Proxmox's built-in console VNC, or may otherwise cause the VM to crash on launch
- CPU may need to be of type host and hidden from the VM (see the example config after this list; hiding the CPU is covered in step 16)
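For reference, here's a minimal sketch of how these choices might look in the VM's config file at /etc/pve/qemu-server/vmid.conf. The VM ID, disk, memory, and core count below are placeholders for illustration, not values from my setup:
bios: ovmf
machine: q35
scsihw: virtio-scsi-single
vga: virtio
cpu: host
cores: 4
memory: 8192
scsi0: local-lvm:vm-101-disk-0,size=64G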
2. Edit the GRUB config line beginning with "GRUB_CMDLINE_LINUX_DEFAULT" in /etc/default/grub
- These settings worked for me: "quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset"
- For AMD CPUs, change 'intel_iommu' to 'amd_iommu'
- Save the changes and then run 'update-grub'
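Putting it together, the edited line in /etc/default/grub would look like this:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset"
Note that if your Proxmox host boots through systemd-boot rather than GRUB (the default for ZFS-on-root UEFI installs), the kernel parameters go in /etc/kernel/cmdline instead, applied with 'proxmox-boot-tool refresh'.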
3. Run 'dmesg | grep -e DMAR -e IOMMU'
- You should see a line like "DMAR: IOMMU enabled"
4. Add the following to /etc/modules :
vfio
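The Proxmox wiki (first link in the resources at the bottom) lists a fuller set of VFIO modules for this step; appending them all would look like the following. On kernel 6.2 and newer, vfio_virqfd has been merged into vfio and can be omitted:
cat >> /etc/modules <<EOF
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
EOF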
5. Run "dmesg | grep 'remapping'"
- You should see something like one of the following:
"AMD-Vi: Interrupt remapping enabled"
"DMAR-IR: Enabled IRQ remapping in x2apic mode" ('x2apic' can be different on older CPUs, but should still work)
5.1 If not, run the following:
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
6. Run "dmesg | grep iommu"
- You need proper IOMMU groups for the PCI device you want to assign to your VM. This means that the GPU isn't arbitrarily grouped with some other PCI devices but has a group of its own. In my case, this returns something like this:
[ 5.398008] pci 0000:00:00.0: Adding to iommu group 0
[ 5.398019] pci 0000:00:01.0: Adding to iommu group 1
[ 5.398028] pci 0000:00:02.0: Adding to iommu group 2
[ 5.398038] pci 0000:00:08.0: Adding to iommu group 3
[ 5.398054] pci 0000:00:14.0: Adding to iommu group 4
[ 5.398062] pci 0000:00:14.2: Adding to iommu group 4
[ 5.398076] pci 0000:00:15.0: Adding to iommu group 5
[ 5.398088] pci 0000:00:16.0: Adding to iommu group 6
[ 5.398097] pci 0000:00:17.0: Adding to iommu group 7
[ 5.398108] pci 0000:00:1b.0: Adding to iommu group 8
[ 5.398120] pci 0000:00:1c.0: Adding to iommu group 9
[ 5.398136] pci 0000:00:1c.2: Adding to iommu group 10
[ 5.398148] pci 0000:00:1c.4: Adding to iommu group 11
[ 5.398160] pci 0000:00:1d.0: Adding to iommu group 12
[ 5.398172] pci 0000:00:1d.4: Adding to iommu group 13
[ 5.398197] pci 0000:00:1f.0: Adding to iommu group 14
[ 5.398207] pci 0000:00:1f.2: Adding to iommu group 14
[ 5.398215] pci 0000:00:1f.3: Adding to iommu group 14
[ 5.398224] pci 0000:00:1f.4: Adding to iommu group 14
[ 5.398233] pci 0000:00:1f.6: Adding to iommu group 14
[ 5.398245] pci 0000:01:00.0: Adding to iommu group 15
[ 5.398256] pci 0000:01:00.1: Adding to iommu group 16
[ 5.398267] pci 0000:02:00.0: Adding to iommu group 17
[ 5.398279] pci 0000:04:00.0: Adding to iommu group 18
[ 5.398290] pci 0000:05:00.0: Adding to iommu group 19
[ 5.398313] pci 0000:06:00.0: Adding to iommu group 20
[ 5.398336] pci 0000:06:01.0: Adding to iommu group 21
[ 5.398358] pci 0000:06:02.0: Adding to iommu group 22
[ 5.398382] pci 0000:06:04.0: Adding to iommu group 23
[ 5.398415] pci 0000:3b:00.0: Adding to iommu group 24
[ 5.398427] pci 0000:71:00.0: Adding to iommu group 25
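As an alternative to grepping dmesg, you can list the current grouping straight from sysfs; this is a common snippet, not specific to Proxmox:
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d%/devices/*}; g=${g##*/}          # extract the group number from the path
    echo "IOMMU group $g: $(lspci -nns "${d##*/}")"
done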
6.1 If you don't have dedicated IOMMU groups, you can add "pcie_acs_override=downstream" to your GRUB launch arguments if you didn't already do that in step 2.
7. Run lspci to determine the location of your GPU or other PCI device you want to pass through. A discrete GPU commonly shows up at "01:00.0", but confirm it in your own output.
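If the output of bare lspci is overwhelming, you can filter for likely GPU entries. This pattern assumes an NVIDIA card; adjust it for AMD or Intel:
lspci -nn | grep -Ei "vga|3d|nvidia"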
8. Run "lspci -nnk -s 01:00"
- You should see something like this:
01:00.0 3D controller: NVIDIA Corporation GP104GLM [Quadro P4000 Mobile] [10de:1bb7] (rev a1)
        Subsystem: Lenovo GP104GLM [Quadro P4000 Mobile] [17aa:224c]
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau
01:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel
- In the bracketed ID, the four characters before the colon are the Vendor ID ("10de" is Nvidia), and the four characters after it are the Device ID ("1bb7" is the Quadro P4000)
9. (Proxmox-specific, but generally applies) Add a PCI Device under Hardware for your virtual machine
- Select the ID for your device, enabling "All Functions", "Primary GPU", "ROM-Bar", and "PCI-Express"
- Fill in the Vendor ID, Device ID, Sub-Vendor ID, and Sub-Device ID. In my case, the Vendor ID and Device ID are "0x10de" and "0x1bb7", and the Sub-Vendor ID and Sub-Device ID are "0x17aa" and "0x224c"
- If you edit the virtual machine config file located at "/etc/pve/qemu-server/vmid.conf" (replace vmid.conf with your virtual machine ID, like 101.conf), that would look like:
hostpci0: 0000:01:00,device-id=0x1bb7,pcie=1,sub-device-id=0x224c,sub-vendor-id=0x17aa,vendor-id=0x10de,x-vga=1
10. Run the following, making sure to replace the IDs with the IDs for your specific GPU or PCI device:
echo "options vfio-pci ids=10de:1bb7,10de:10f0 disable_vga=1" > /etc/modprobe.d/vfio.conf
11. Disable GPU drivers so that the host machine does not try to use the GPU by running the following:
echo "blacklist amdgpu" >> /etc/modprobe.d/blacklist.conf
echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
12. (Nvidia-specific) Run the following to prevent applications from crashing the virtual machine:
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf
12.1 You may want to add "report_ignored_msrs=0" to that same options line if you see a lot of warnings in your dmesg system log
12.2 Kepler K80 GPUs require the following in the vmid.conf:
args: -machine pc,max-ram-below-4g=1G
13. Run the following so that vfio-pci is loaded before the GPU drivers:
echo "softdep nouveau pre: vfio-pci" >> /etc/modprobe.d/nvidia.conf
echo "softdep nvidia pre: vfio-pci" >> /etc/modprobe.d/nvidia.conf
echo "softdep nvidia* pre: vfio-pci" >> /etc/modprobe.d/nvidia.conf
14. [Skip this step unless you have errors beyond this point] Note: At this point, you may read that you might require dumping your GPU's vBIOS. In my experience, this was completely unnecessary and above all did not work. Specific instructions in other guides may look like the following:
cd /sys/bus/pci/devices/0000:01:00.0/
echo 1 > rom
cat rom > /usr/share/kvm/vbios.bin
echo 0 > rom
In my experience, attempting to run "cat rom > /usr/share/kvm/vbios.bin" resulted in an Input/Output error and the vBIOS could not be dumped. If you really do end up needing to dump the vBIOS, I would strongly recommend installing Windows onto your host machine and then installing and running GPU-Z. GPU-Z has a "share" button that allows you to easily dump the vBIOS for your GPU.
To add the vBIOS to your virtual machine, place the vBIOS file you dumped at "/usr/share/kvm/" and then add ",romfile=vbios.bin" to the PCI device line in your vmid.conf (replacing vbios.bin with the name of your dumped vBIOS file). That would look something like the following:
hostpci0: 0000:01:00,device-id=0x1bb7,pcie=1,sub-device-id=0x224c,sub-vendor-id=0x17aa,vendor-id=0x10de,x-vga=1,romfile=vbios.bin
15. Reboot. At this point, when you start your virtual machine, you should be able to see in Windows Device Manager that your GPU was detected under Display adapters. Try installing your GPU device drivers and then reboot your virtual machine once they've installed. If all goes well, you should have a functioning GPU passed through to your virtual machine. If not, you'll likely see "Code 43" under the properties for your GPU in Device Manager.
16. Going back to your vmid.conf, add ",hidden=1,flags=+pcid" to your cpu options; you should end up with a line that looks like this:
cpu: host,hidden=1,flags=+pcid
17. Nvidia drivers can be very picky. You may need to add an ACPI table to emulate having a battery (the laptop passthrough guides in the resources below cover this). You can do this by downloading the SSDT file and then adding it to your vmid.conf with a line like so:
args: -acpitable file="/root/ssdt1.dat"
18. If you're still having a Code 43 issue, you can go back to step 14 and try adding your vBIOS.
At this point, you're done. Your virtual machine should detect your GPU or PCI device and you should be able to use it mostly normally. For obvious reasons, some programs may still refuse to run because they are under a virtual machine, but the core functionality of the GPU or PCI device should be fully accessible to it.
A few of my resources:
https://pve.proxmox.com/wiki/PCI_Passthrough
https://gist.github.com/Misairu-G/616f7b2756c488148b7309addc940b28#update-attention-for-muxless-laptop
https://lantian.pub/en/article/modify-computer/laptop-intel-nvidia-optimus-passthrough.lantian/
https://forum.proxmox.com/threads/successful-experience-with-laptop-gpu-passthrough.95683/