Debian as type 1 hypervisor for gaming, part 1

05 Aug 2020

These are my notes on how I got a gaming VM up and running on Debian. And since I just don't want to do things like everyone else, I am also going to run a second VM for my Linux desktop, leaving the Debian host as a type 1 hypervisor that does nothing but serve my virtual machines.

My setup is this:

  • AMD Ryzen 3600 (could have had a bit more cores)
  • MSI B450m Mortar Max
  • 16 GB RAM (want more)
  • 512 GB + 240 GB SSDs
  • RTX2060 (in top/primary x16 slot)
  • GTX1050Ti

So the first thing is to install Debian: a basic Debian 10 install without any fancy Xorg stuff, followed by the usual apt-get update and so on.

Now we need libvirt and qemu-kvm, so let's install them:

# apt-get install --no-install-recommends -y libvirt-clients qemu-kvm libvirt-daemon-system bridge-utils ovmf

The --no-install-recommends prevents libvirt's recommended Xorg packages from being installed.

Read more at the Debian wiki.

Enable IOMMU to be able to pass through devices: edit /etc/default/grub and add a parameter to the GRUB_CMDLINE_LINUX_DEFAULT variable.

For AMD systems

amd_iommu=on

For Intel systems

intel_iommu=on
Remember to append the parameter inside the quotes to whatever is already there, using a space as separator, like this:

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"

Then we need to run update-grub and reboot the machine. IOMMU may also have to be enabled in the UEFI settings, and the option is named differently in different UEFI setups. If you get no output from dmesg | grep IOMMU, something is not working.
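Beyond the dmesg check, a quick way to see that IOMMU is really active after the reboot is to list the IOMMU groups the kernel created. This is a small sketch of my own, not from the original notes; if it prints nothing, the grub parameter or the UEFI option has not taken effect:

```shell
#!/bin/sh
# List each IOMMU group and the PCI devices in it. The groups only
# exist once the kernel booted with IOMMU enabled.
found=0
for group in /sys/kernel/iommu_groups/*/; do
    [ -d "$group" ] || continue   # glob did not match: no groups yet
    found=1
    echo "IOMMU group $(basename "$group"):"
    ls "$group/devices"           # PCI addresses belonging to this group
done
[ "$found" -eq 1 ] || echo "no IOMMU groups found"
```

The groups also matter later: devices in the same group generally have to be passed through together.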

Network bridge

To let my virtual machines access my LAN like any other machine, I created a network bridge. First edit /etc/network/interfaces and comment out the lines for your network card; this is how my file looks afterwards:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
#allow-hotplug enp34s0
#iface enp34s0 inet dhcp

The last two lines have been commented out. After that I added a new file in the /etc/network/interfaces.d/ directory named br0:

# /etc/network/interfaces.d/br0

## DHCP ip config file for br0 ##
auto br0

# Bridge setup
iface br0 inet dhcp
    bridge_ports enp34s0

So now my host OS is ready for networking, and all VMs can access my LAN.
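After a systemctl restart networking (or a reboot), a couple of commands can confirm the bridge came up. This is my own quick check, using the device names from above (br0 and enp34s0):

```shell
# Show br0 and its DHCP address; fall back to a message while it is down.
ip -br addr show br0 2>/dev/null || echo "br0 not up yet"
# enp34s0 should be listed as a port enslaved to the bridge.
bridge link 2>/dev/null | grep enp34s0 || echo "enp34s0 not enslaved yet"
```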

Enable VFIO

To enable VFIO you need to add a few lines to /etc/modules:

# /etc/modules: kernel modules to load at boot time.
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

After next reboot those modules will be loaded.
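With the modules loaded, vfio-pci still needs to know which device to claim before the NVIDIA driver does. A common way (not from the original notes) is to pin the GPU and its HDMI audio function by vendor:device ID in a modprobe.d file; the IDs below are placeholders, take your own from lspci -nn | grep -i NVIDIA:

```
# /etc/modprobe.d/vfio.conf
# 10de:1f08 / 10de:10f9 are placeholder IDs -- substitute the VGA
# function and the HDMI audio function of the GPU to pass through.
options vfio-pci ids=10de:1f08,10de:10f9
```

Run update-initramfs -u afterwards so the binding applies early in boot.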

Make secondary GPU the primary one

Because I want to pass through my primary GPU, the RTX2060, I have to make sure my computer boots using my secondary GPU, the GTX1050Ti.

I had to fiddle around in the UEFI settings to force it to use the second GPU as primary. This differs between motherboard manufacturers, so I won't go into any details.

Another strange thing is that I must use an HDMI cable from my GTX1050Ti; if I use DisplayPort it does not work, and the machine uses my RTX2060 regardless of the UEFI settings.

The next step will be to install my first virtual machine.