Linux gaming with P106-100

Here’s what I got from taobao

This post contains a small guide on how to play games and run other graphics payloads in Linux using a dirt-cheap NVIDIA P106-100 (a mining-only version of the NVIDIA GTX 1060 that you can get for less than $100), optionally in a virtualized environment, making it a nearly perfect solution for a headless gaming server. Yep, as simple (or not) as that.

The story so far

TL;DR: Not interested in the story? Skip to the next section.

Around the end of 2018, the news hit that one can easily play games with mining-only graphics cards, namely with a P106-100 device, which could be grabbed cheaply at taobao and other Chinese electronics junkyards for something like 499 yuan. The price has gone up since then, but it's still a more or less good deal at around $100.

So the hack was simple:

  • Plug the P106-100 into a PC with some recent Intel integrated graphics
  • Edit the .inf files of the drivers package, convincing the drivers that it’s a 1060 card.
  • Disable the driver signing
  • Load the modded driver
  • Set apps to use the P106-100 for rendering, as we do on those Optimus laptops.
  • PROFIT

That was covered by Linus at Linus Tech Tips. Later, a very handy guy, DoctorVGA at Linus' forum, transplanted the GPU from a P106-100 onto a GTX 1060 PCB. Chances were that some strap resistors on the PCB were telling the chip to be a P106-100, like it was on previous generations. That hypothesis didn't turn out to be true.

The next step was a simple registry hack that did the same, but with no need to disable driver signing. Well, that sounds better.

An even weirder solution: a real GTX 1060 + a P106-100 with a hack that enables SLI. That gives something like 160% FPS or so.

And since I like powerful chips at low prices, I couldn't keep still and immediately got myself two P106-100 cards to play with. If I'm lucky, I can play a few games on them. If not, perhaps they'll make a perfect rig to try out machine learning and neural networks.

Let’s try that with proxmox (error 43)!

So I got my two dusty, ugly P106-100 cards and decided to give them a spin. In a VM. I wanted to use them in a virtualized environment in the long run, and NVIDIA doesn't really like that. For those who don't know, NVIDIA drivers are rigged to give you error 43 or BSOD your Windows right away if they detect they're running in a VM. QEMU developers keep improving and adding options to hide the VM from the NVIDIA driver, while NVIDIA devs do the opposite. There are lots of threads about it on the proxmox forums, Reddit, and the Arch wiki.

Welcome to the circus, enjoy the clowns!

I use proxmox as my hypervisor. For those who don't know, proxmox is a Debian-based hypervisor that manages virtual machines and allows any geek to spin up tons of enterprise-level stuff in his/her basement. I switched to it a while ago, since my experiments required more and more disposable environments.

A virtualized environment is very different from a 'real' one. When we're running in a virtual machine, we don't usually have an 'integrated' card. We have a virtual QXL video card that doesn't support the features required for Optimus.

Still, I gave it a try, and here's what I learned:

  • NVIDIA drivers work fine with the P106-100 right up to the very moment you enable the hacks to run graphics payloads. This is where error 43 kicks in and you have to fight it off. So apparently they DO allow PCI passthrough for mining!
  • Even if we apply all the hacks and black magic to get rid of the dreaded error 43, none of the Windows VM display drivers (qxl, vmware, etc.) support the features required for Optimus, so no gaming is possible.
  • I picked the first cryptocurrency that had a one-click miner for Windows (Vertcoin) and tried mining. That gave me something like 18.7 MH/s once the card warmed up.

Let’s get to Linux

I ditched the Windows virtual machine for good and started from scratch with a Linux virtual machine to see if I'd have any more luck there. And it looks like I did! The following was tested on Debian Buster, but with minor adjustments it should work with other distributions as well.

First of all, I gave bumblebee/primus a spin. They never worked out: they refused to recognize qxl as an integrated graphics card, and I was too lazy to see if I could ever patch them.

Plan B: can we start the X11 server on the P106-100 itself? Turns out we can. At first things didn't play well, but once I specified the PCI bus ID for the card, magic happened: the server successfully started.

From this point on, the story turns into a tutorial.

First things first – enable the contrib and non-free repositories for your Debian Buster. Edit your /etc/apt/sources.list to look like this:


deb http://deb.debian.org/debian/ buster main contrib non-free
deb-src http://deb.debian.org/debian/ buster main contrib non-free

deb http://security.debian.org/debian-security buster/updates main
deb-src http://security.debian.org/debian-security buster/updates main

# buster-updates, previously known as 'volatile'
deb http://deb.debian.org/debian/ buster-updates main
deb-src http://deb.debian.org/debian/ buster-updates main

Next, install the driver and go grab some tea if your internets aren't fast:
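
sudo apt update
sudo apt install nvidia-driver

Next, we'll need a proper xorg.conf.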

My /etc/X11/xorg.conf (Create it if it doesn’t exist!) is below:


Section "ServerLayout"
    Identifier     "Default Layout"
    Screen         "Default Screen" 0 0
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
EndSection


Section "Device"
    Identifier "XSPICE"
    Driver "qxl"
EndSection

Section "InputDevice"
    Identifier "Mouse0"
    Driver     "xspice pointer"
EndSection

Section "InputDevice"
    Identifier "Keyboard0"
    Driver     "xspice keyboard"
EndSection

Section "Device"
    Identifier     "NV0"
    Driver         "nvidia"
    BusID          "PCI:01:00:0"
EndSection


Section "Monitor"
    Identifier    "Monitor0"
EndSection

Section "Screen"
    Identifier     "Default Screen"
    Device         "NV0"
    Option         "ProbeAllGpus" "False"
    Monitor        "Monitor0"
    Option         "NoLogo" "True"
    SubSection     "Display"
        Virtual 1920 1080
        Depth 24
    EndSubSection
EndSection

Edit it. You'll need to consult lspci to fill in the BusID "PCI:01:00:0" line. Theoretically, you can remove all the qxl/spice stuff; those sections are leftovers from my experiments. Also mind the Virtual 1920 1080 line. We'll need it to get a proper resolution.
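
Something like this should do the trick (note that lspci prints the address in hex, while Xorg's BusID wants decimal values, so e.g. 0a:00.0 becomes PCI:10:0:0):

# Find the card's PCI address
lspci | grep -i nvidia
# prints something like: 01:00.0 3D controller: NVIDIA Corporation GP106 [P106-100]
# which translates to BusID "PCI:1:0:0"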

After these edits, just restart your display manager (e.g. gdm, sddm, etc.). I use KDE, which now comes with sddm:

sudo systemctl restart sddm.service

So, did the server start? We can check it by running:

necromant@testblade:~$ ps aux|grep X
root       679  4.5  0.5 241572 92112 tty7     Ssl+ 15:36  16:50 /usr/lib/xorg/Xorg -nolisten tcp -auth /var/run/sddm/{6682c6e4-de26-4451-8e7e-30b28df65bd7} -background none -noreset -displayfd 17 -seat seat0 vt7

And by checking the /var/log/Xorg.0.log file. Grep'ing for the NVIDIA lines is usually most helpful:
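
grep NVIDIA /var/log/Xorg.0.log

In my case I got: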

[    13.823] (**) |   |-->Device "NV0"
[    13.823] (**) |   |-->GPUDevice "NV0"
[    14.790] (II) Module nvidia: vendor="NVIDIA Corporation"
[    14.806] (II) NVIDIA dlloader X Driver  418.74  Wed May  1 11:26:02 CDT 2019
[    14.806] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
[    14.932] (==) NVIDIA(0): Depth 24, (==) framebuffer bpp 32
[    14.932] (==) NVIDIA(0): RGB weight 888
[    14.932] (==) NVIDIA(0): Default visual is TrueColor
[    14.932] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
[    14.932] (**) NVIDIA(0): Option "ProbeAllGpus" "False"
[    14.933] (**) NVIDIA(0): Enabling 2D acceleration
[    15.303] (II) Module glxserver_nvidia: vendor="NVIDIA Corporation"
[    15.304] (II) NVIDIA GLX Module  418.74  Wed May  1 11:24:49 CDT 2019
[    15.339] (II) NVIDIA(0): NVIDIA GPU P106-100 (GP106-A) at PCI:1:0:0 (GPU-0)
[    15.339] (--) NVIDIA(0): Memory: 6291456 kBytes
[    15.339] (--) NVIDIA(0): VideoBIOS: 86.06.58.00.1c
[    15.339] (II) NVIDIA(0): Detected PCI Express Link width: 16X
[    15.339] (II) NVIDIA(0): Validated MetaModes:
[    15.339] (II) NVIDIA(0):     "NULL"
[    15.339] (**) NVIDIA(0): Virtual screen size configured to be 1920 x 1080
[    15.339] (WW) NVIDIA(0): Unable to get display device for DPI computation.
[    15.339] (==) NVIDIA(0): DPI set to (75, 75); computed from built-in default
[    15.340] (II) NVIDIA: Using 24576.00 MB of virtual memory for indirect memory
[    15.341] (II) NVIDIA:     access.
[    15.349] (II) NVIDIA(0): ACPI: failed to connect to the ACPI event daemon; the daemon
[    15.349] (II) NVIDIA(0):     may not be running or the "AcpidSocketPath" X
[    15.349] (II) NVIDIA(0):     configuration option may not be set correctly.  When the
[    15.349] (II) NVIDIA(0):     ACPI event daemon is available, the NVIDIA X driver will
[    15.349] (II) NVIDIA(0):     try to use it to receive ACPI event notifications.  For
[    15.349] (II) NVIDIA(0):     details, please see the "ConnectToAcpid" and
[    15.349] (II) NVIDIA(0):     "AcpidSocketPath" X configuration options in Appendix B: X
[    15.349] (II) NVIDIA(0):     Config Options in the README.
[    15.395] (II) NVIDIA(0): Setting mode "NULL"
[    15.424] (==) NVIDIA(0): Disabling shared memory pixmaps
[    15.424] (==) NVIDIA(0): Backing store enabled
[    15.424] (==) NVIDIA(0): Silken mouse enabled
[    15.425] (==) NVIDIA(0): DPMS enabled
[    15.435] (WW) NVIDIA(0): Option "NoLogo" is not used
[    15.436] (II) NVIDIA(0): [DRI2] Setup complete
[    15.436] (II) NVIDIA(0): [DRI2]   VDPAU driver: nvidia
[    15.438] (II) Initializing extension NV-GLX
[    15.438] (II) Initializing extension NV-CONTROL

So the server is running and we even got GLX working, but we don't see a thing and the SPICE display is blank. What next?

Next comes our friend called x11vnc. Let's install it (sudo apt install x11vnc) and finally see the 'big picture'. Here's how to start it. First, find out the path to the auth file from the -auth argument in the Xorg command line:

necromant@testblade:~$ ps aux|grep Xorg
root 679 4.3 0.5 241572 92112 tty7 Ssl+ 15:36 16:51 /usr/lib/xorg/Xorg -nolisten tcp -auth /var/run/sddm/{6682c6e4-de26-4451-8e7e-30b28df65bd7} -background none -noreset -displayfd 17 -seat seat0 vt7

Then use it as an argument to x11vnc:

necromant@testblade:~$ x11vnc -ncache 10 -clip 1920x1080+0+0 -display :0 -auth /var/run/sddm/{6682c6e4-de26-4451-8e7e-30b28df65bd7} 

At this point we can use our favorite VNC viewer to access a desktop with full OpenGL support. Now we can install Steam and play many games (including Windows titles, using the Proton compatibility tool). VNC doesn't work well for games, but Steam In-Home Streaming/Remote Play gets the job done.
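
For example, with TigerVNC's client (just one option – any VNC viewer will do; x11vnc serves display :0 on TCP port 5900 by default):

vncviewer testblade:0    # display :0 = TCP port 5900; use your VM's hostname or IP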

The only thing left was starting and restarting the x11vnc server automatically. I created the following wrapper script and saved it as /usr/local/bin/startvnc (the systemd unit below expects it there):

#!/bin/bash
# Pick up whatever auth file sddm generated for the current session
auth=$(ls /var/run/sddm)
x11vnc -ncache 10 -clip 1920x1080+0+0 -display :0 -auth "/var/run/sddm/$auth"
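
Don't forget to make it executable:

sudo chmod +x /usr/local/bin/startvnc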

And created the following systemd unit at /etc/systemd/system/x11vnc.service

[Unit]
Description=VNC
After=sddm.service

[Service]
ExecStart=/usr/local/bin/startvnc
User=root
Group=root
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
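
Then reload systemd and enable the unit so it starts on boot:

sudo systemctl daemon-reload
sudo systemctl enable --now x11vnc.service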

Not very secure, since the VNC is accessible to everyone, but since it only runs inside my LAN, it gets the job done. The paranoid, please refer to the official x11vnc docs and don't nag in the comments 😉

So, what did we get in the end?

  • CUDA/mining in a Linux VM is way faster than in a Windows VM. I ran a benchmark with the very same ccminer mining Vertcoin and got 21.9 MH/s in the Linux VM as opposed to 18.7 MH/s in the Windows VM. The Windows version also leaks memory.
  • OpenGL/gaming works with absolutely no driver patching in Linux. No problems with GPU passthrough. We just need to use x11vnc to attach to a running server and install Steam.
  • NVENC/VDPAU doesn't work at all (vdpauinfo throws a "GPU at BusId 0x1 doesn't have a supported video decoder" error; perhaps driver patches may make it work). So you'll need a decent CPU to do all the H.264 compression stuff.
  • SLI didn't work either, though I really think it's even more possible to get working than NVENC.

P.S. Thanks fly out to the folks at the linux.org.ru forum who told me about x11vnc.

10 thoughts on “Linux gaming with P106-100”

  1. this is great, thanks for the writeup. i looked around a fair bit and can’t find another writeup on using the p106 *on linux* at all. did you ever experiment with making it a display card in your daily computer, not just your virt host for VMs?

    1. Nope, I haven’t. I don’t have a box with an IGP that can be used for that. My next experiment would be trying out virtio-gpu on P106-100. e.g. sharing that card between several VMs at the same time.

      1. i have a few extra workstation computers around that i bought to test some clustering stuff with; they all have haswell or later i7’s in them so i do have a spare machine or two with OK igp’s. i am keen to see if i can get optimus working to game reasonably via the igpu. if you’d be open to giving me a tip or two along the way i’d be obliged.

        i am also keen as beans to do virtio-gpu because i have a number of desktop VMs that would benefit from that!

  2. just following up here…https://www.reddit.com/r/linux_gaming/comments/hw3pfg/can_people_test_and_confirm_this_gaming_just/

i threw a P106-090 into a recently-retired haswell box i have sitting here and installed Pop_OS 20.04 (hoping their heavy focus on hybrid graphics would help) and lo and behold…it seems to have *just worked*. I did nothing special, literally just installed linux and ran the Unigine Heaven benchmark – it clearly says it’s rendering using the P106 and I’m getting ~1000-1100 on the Extreme preset.

    i wouldn’t have purchased the card and attempted this without your encouraging blog post, so thank you!

    1. For some reason I’ve lost the wordpress notification about this comment ;(

Anyways, congrats! You’re welcome 😉

      I’m trying to get it working with virtio-gpu, but that’s a totally different and a more cumbersome story that I’ll post later if it works out.

  3. hey just checking in on whether you got anywhere with virtio-gpu and these cards?

    i have now attempted to pass this same card through to a popos vm (via proxmox and its convenient passthrough thing) but the virtual gpus i’ve tried so far on the visible screen don’t play with PRIME/hybrid graphics.

    thanks again

Hi. I haven’t had any luck with virtio-gpu, but the more recent hack called vgpu-unlock looks way better. I even managed to make a gaming Windows VM on my P106-100. But the absence of NVENC spoils all the fun.

      1. Being a little late to the party but very interested in getting this to work. I own one P106-100 and two p104-100. Do you think you have time to give some hints on how to get vgpu-unlock to work with these cards?

        1. Sorry, very busy lately. But, good news – it should be straightforward. Just follow the instructions in the vgpu-unlock repo and you should be up and running quickly.

May I ask, do you use an iGPU or APU for this? If so, are you still using this setup, and how was the gaming performance on indie titles? I want to do GPU passthrough with a Fedora server. Right now, in my country, the 3GB version costs about 10 dollars and the 6GB version about 20 dollars.
