Here you can see my current gaming setup, and how I reached my primary goal: gaming in a VM.
If you have not read my previous articles about why I want to play in a VM and how you can reach such a goal, I highly recommend doing so before you continue reading this article! This is just one possible – but working – example setup.
I must emphasize that none of these parts were donated to me; I bought all of them with my hard-earned personal money. So I’m not affiliated with any of these hardware vendors, nor with the shops I bought them from. I only share the exact product details because I can’t promise that any ‘similar one’ would work the same way.
The Hardware
I’m not gonna list every piece, only the ones that might have some influence on the success of this project.
The shared parts:
- ASRock B850 Pro-A (BIOS v3.30)
The most important one, as it has to provide the proper IOMMU Groups and connect all the components at the highest possible speed. It is surely not the best out there, but an affordable one. The important details are:
- PCIe Gen5 support – for the GPU
- Dual Channel DDR5 – with 4 memory slots
- HDMI + USB-C with DP-alt mode – to be able to connect 2 displays
- 3x M.2 NVMe slots
- Ryzen 7 9700X
You can choose a ‘better’ one, depending on your needs (and your budget). But the important details are:
- PCIe Gen5 support
- DDR5
- Integrated GPU – with USB-C DP-alt mode
Keep in mind that even if the motherboard has 4 memory slots, this CPU supports only 2 of them at the maximum rated speed (5600). As soon as you put in 4 modules, they get downclocked to 3600. And I found out this limitation the hard way – although the official data sheet clearly states it.
- Kingston 64GB / 5600MHz Fury Beast DDR5 Black KIT
As I mentioned above: stick with 2 modules, unless your CPU is really able to handle more!
- Samsung 980 PRO 1TB M.2 NVMe
That’s an ‘old’ one migrated from my previous build, but it still provides enough I/O and speed for virtualisation.
- Keyboard + Mouse
The exact models surely don’t influence the project; the only important detail is that you need a separate set for your Host OS – unless you are using a KVM switch.
- Dell U2311H
The model here is completely irrelevant, I’m just proud to have had it for 10+ years now :) This is the permanent Host OS display (connected to the iGPU via HDMI), and it also serves me very well for photo editing.
The dedicated parts for the Gaming VM:
- ASUS DUAL GeForce RTX 5070 12G OC
The centerpiece – dedicated to the VM via PCI Passthrough.
- Samsung 990 PRO 2TB M.2 NVMe
This was the ‘best choice’ available to me, and it is also going to be dedicated to the VM via PCI Passthrough.
- LG UltraWide 29WQ600-W
I simply can’t fit a bigger screen into my gaming corner. The 21:9 aspect ratio is golden, the 100Hz is a bare minimum for gaming, and it has 3 different inputs, so I use it as an external monitor for my work laptop (HDMI), as a 2nd screen for my Host OS (USB-C DP-alt), and for the Gaming VM (via DP).
Switching between the inputs is really convenient – but you can also use a KVM switch if you prefer.
It also has internal speakers, but I’m not really using them, as – let’s face it – they are crappy.
Regarding its resolution: if you would like to go higher, you might end up with (much) lower framerates… or you would need a more powerful GPU and CPU combo – which I can’t really afford. So it fits conveniently as a balanced part of my setup.
- Logitech G300s + Razer Tartarus V2
Gaming is not just about framerates – you need proper input devices too! As both require their own software to ‘handle’ the extra buttons, I pass these to the VM with the entire USB hub as a PCI device. This method is required anyway, as some games use the mouse in a very specific way, where emulated input devices fail to work.
- Thrustmaster T300 RS + modded G25 Pedals
As I’m playing racing games too, where keyboard/mouse/gamepad are not the preferred input method… These, however, can be passed ‘dynamically’ as simple USB devices; there is no need to dedicate them to the Gaming VM. And this can be useful if most of your USB connectors are attached to the same USB hub.
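As a sketch of such dynamic passing: with libvirt, a USB device can be hot-(un)plugged by its vendor/product ID. The IDs below are illustrative examples only – look yours up with lsusb:

```xml
<!-- usb-wheel.xml - vendor/product IDs are examples, check `lsusb` for yours -->
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x044f'/>
    <product id='0xb66e'/>
  </source>
</hostdev>
```

Then `virsh attach-device <vm-name> usb-wheel.xml --live` plugs it into the running VM, and `virsh detach-device` removes it again; Virtual Machine Manager offers the same via Add Hardware → USB Host Device.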
IOMMU and USB groupings
And here comes the tricky part: finding out how the vendor implemented the IOMMU Groups and the physical USB connector assignments.
Some practical advice about why these are so important:
- Your GPU should be alone in a group.
You must assign all the PCI devices from the same group together in order to be successful.
In the case of NVIDIA cards, there is an additional audio device tied to the GPU, so you must assign both when you pass the GPU through via PCI.
- It is wise to install all your (PCI) devices (SSD, VGA, etc) before you start.
As soon as you insert any extra PCI device, the PCI IDs and the groupings change too! Yes, that means the group numbers depend on how many PCI devices you have. But at least the devices that ‘go together’ remain together. So if your GPU is alone in a group – as on this board – it will always be alone, but under a different ‘group number’, and possibly with a different PCI ID!
Yes, this means that if you have already assigned some PCI devices to a VM and then insert a new NVMe SSD – for example – your VMs will be messed up badly, and might not start at all.
- If you need to pass through a USB hub via PCI, you should be aware of which physical connectors are internally wired to that particular hub.
And the challenge is that this information is not provided by the vendors. You have to find it out yourself.
For the IOMMU groupings, there are some helper scripts published, but you still need to manually plug something into every USB connector (noting the result from lsusb) to map the internal wiring… Then you also need to find the corresponding PCI ID of each USB hub… and check that it is not (IOMMU-)grouped with anything else.
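One common form of such a helper script (a sketch, not specific to this board) simply walks /sys/kernel/iommu_groups and prints every PCI device with its group number:

```shell
#!/bin/bash
# List every PCI device together with its IOMMU group number.
shopt -s nullglob
devs=(/sys/kernel/iommu_groups/*/devices/*)
if [ "${#devs[@]}" -eq 0 ]; then
    echo "No IOMMU groups found - is IOMMU enabled in the BIOS and on the kernel command line?"
else
    for dev in "${devs[@]}"; do
        # .../iommu_groups/<group>/devices/<pci-address>
        group=$(basename "$(dirname "$(dirname "$dev")")")
        printf 'Group %s: %s\n' "$group" "$(lspci -nns "${dev##*/}" 2>/dev/null)"
    done | sort -V
fi
```

The `-nn` flag makes lspci print the vendor:device ID pairs as well, which you will need later for the vfio-pci configuration.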
So you must end up with something like you see in the pictures above in order to successfully prepare your Gaming VM – as you are very likely using different hardware components.
The Host
I used the latest Ubuntu LTS release, which was 24.04 at installation time. But you can use any other Linux distribution, as long as it has the features and packages required for the task :)
- You might need to install some additional packages that provide virtualisation features:
sudo apt install qemu-kvm qemu-utils libvirt-daemon-system libvirt-clients virt-manager ovmf
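After installing the packages, a quick sanity check is worthwhile (a generic sketch, nothing board-specific):

```shell
# AMD-V shows up as the 'svm' CPU flag, Intel VT-x as 'vmx'.
if grep -qE 'svm|vmx' /proc/cpuinfo; then
    echo "hardware virtualization: supported"
else
    echo "hardware virtualization: NOT found - check your BIOS settings"
fi

# /dev/kvm appears once the KVM modules are loaded.
if [ -e /dev/kvm ]; then
    echo "/dev/kvm: present"
else
    echo "/dev/kvm: missing"
fi
```

The libvirt-clients package also ships `virt-host-validate`, which runs a more thorough set of checks (including IOMMU) in one go.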
- Then you need to add your user to the relevant groups:
sudo usermod -a -G kvm <myusername>
sudo usermod -a -G libvirt <myusername>
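The new group memberships only take effect after logging out and back in; you can then verify them:

```shell
# List the groups of the current user; 'kvm' and 'libvirt' should appear.
id -nG
```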
- Hide the PCI devices that need to be passed to the VM:
## grub kernel parameters (vendor:device ID pairs, as shown by lspci -nn):
vfio_pci.ids=10de:2f04,10de:2f80
## driverctl (per device, by PCI address):
sudo driverctl set-override 0000:01:00.0 vfio-pci
sudo driverctl set-override 0000:01:00.1 vfio-pci
sudo driverctl set-override 0000:06:00.0 vfio-pci
sudo driverctl set-override 0000:0d:00.4 vfio-pci
Note that vfio_pci.ids takes vendor:device ID pairs, while driverctl works with PCI addresses. I’m not sure if both methods are needed; you might try the driverctl method first and see if that’s enough. For me, masking the GPU with the kernel parameters above was also required.
A reboot is always required for these to take effect.
The end goal is that the vfio-pci driver is in use for all the PCI devices that are going to be passed through to the VM:
# lspci -k
01:00.0 VGA compatible controller: NVIDIA Corporation Device 2f04 (rev a1)
        Subsystem: ASUSTeK Computer Inc. Device 8a23
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau
01:00.1 Audio device: NVIDIA Corporation Device 2f80 (rev a1)
        Subsystem: NVIDIA Corporation Device 0000
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel
06:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller S4LV008[Pascal]
        Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller S4LV008[Pascal]
        Kernel driver in use: vfio-pci
        Kernel modules: nvme
0d:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Device 15b7
        Subsystem: ASRock Incorporation Device 15b6
        Kernel driver in use: vfio-pci
The Guest
I have chosen the Windows 11 IoT LTS 2024 version as the guest OS, as this is the least annoying and least intrusive – still supported – Windows edition today. Even if you choose another one, you need to have the relevant .iso image prepared.
In addition, I prepared the latest VirtIO drivers, as they are required for some features I used.
No other customisation is needed; it will just run happily inside its VM. But of course, you will need some games too! ;)
The Virtual Machine
Now you can create your Gaming VM with the Virtual Machine Manager, but you might need to manually edit the XML config (deleting/adding whole sections) to achieve the end result:
As you can see in the pictures, make sure you properly set:
- Use the OVMF UEFI loader
- Set the correct CPU topology
- Do not enable shared memory
- Use the VirtIO disk type
- Disable the ROM BAR for the NVIDIA PCI devices.
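For reference, a passed-through PCI device ends up as a hostdev section like the sketch below. The address is the GPU’s from this board (01:00.0); substitute your own, and note the rom element that disables the ROM BAR:

```xml
<!-- GPU function 01:00.0 - add a similar hostdev for the audio function 01:00.1 -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <rom bar='off'/>
</hostdev>
```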
Another set of practical advice:
- Install your Windows without any assigned PCI devices (including your GPU), using the default virtual console.
- Enable Remote Desktop access before you remove the default video device(s) from your VM, as it can serve as a backup access method.
- Create snapshots/clones before the bigger changes inside the VM – like GPU driver installation.
Is it worth the effort? – you might ask… Surely it is for me!
Am I suggesting you try the same? Not at all! – unless you are happy to accept such a challenge… :)