
How to run nested ESXi 7 on QNAP’s Virtualization Station

**Important update at the end that should be read prior to wasting your time.

This weekend I found myself in need of an additional ESXi host, so instead of acquiring new hardware I figured I might as well run it nested on my beefy QNAP TVS-h1288X with its Xeon CPU and 72GB of RAM. I already use the QEMU-based Virtualization Station (VS) for hosting my primary domain controller, and it’s my go-to host for spinning up my ONTAP simulators, so I figured nesting an ESXi VM shouldn’t be that difficult. What I hadn’t taken into account, however, is that VMware has deprecated the VMKlinux driver stack, removing support for all of the NICs VS makes available to you in the GUI while provisioning new virtual machines. At first I researched injecting drivers or rolling my own installation ISO, but those approaches seemed overly complicated and their documentation somewhat outdated. Instead I decided to get inside VS and see if I could do something from that angle; it is, after all, simply QNAP’s own packaging of QEMU.

I started the installation process, but it wasn’t long before I received this error message:

ESXi 7 No Network Adapters error message

I shut down the VM and changed the NIC type over and over, eventually exhausting the five possibilities presented in the VS GUI:

Not even the trusty old e1000 NIC, listed as Intel Gigabit Ethernet above, worked… Over to the CLI I went. Some Googling on the subject led me to believe there was a command that would produce a list of supported virtualized devices, but the commands I was finding were for native KVM/QEMU installs and not intended for VS. So I poked around, came across the qemu-system-x86_64 command, and ran it with the -device help parameter, which produced the following abbreviated list:

./qemu-system-x86_64 -device help
[VL] This is a NROMAL VM
Controller/Bridge/Hub devices:
name "i82801b11-bridge", bus PCI
................<SNIP>
Network devices:
name "e1000", bus PCI, alias "e1000-82540em", desc "Intel Gigabit Ethernet"
name "e1000-82544gc", bus PCI, desc "Intel Gigabit Ethernet"
................<SNIP>
name "vmxnet3", bus PCI, desc "VMWare Paravirtualized Ethernet v3"
................<SNIP>
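If you just want to confirm the VMware paravirtual NIC is in there without scrolling through the full list, piping the same device help output through grep narrows it down (a minimal sketch, run from the directory where qemu-system-x86_64 lives):

# filter the device list for the VMware paravirtual NIC
./qemu-system-x86_64 -device help | grep -i vmxnet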

That last line is exactly what I was looking for; it led me to believe that QEMU should be able to support the VMXNET3 network device. So I cd’d over to the .qpkg/QKVM/usr/etc/libvirt/qemu directory, opened up the XML file associated with my ESXi VM, and changed the following section:

<interface type='bridge'>
      <mac address='00:50:56:af:30:fe'/>
      <source bridge='qvs0'/>
      <model type='e1000'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

to:

<interface type='bridge'>
      <mac address='00:50:56:af:30:fe'/>
      <source bridge='qvs0'/>
      <model type='vmxnet3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

I saved the file and, for good measure, restarted VS. I booted the VM and received the same error message as above. This time I cd’d over to .qpkg/QKVM/var/run/libvirt/qemu and had a look at the XML file representing the running config of the VM: the NIC was still set to e1000. It took a bit of hacking around to determine that in order to make the change persistent, I needed to edit the XML file using:

virsh edit 122d6cbc-b47c-4c18-b783-697397be149b

That last string of text is the UUID of the VM in question. If you’re unsure of the UUID of a given VM, simply grep for “qvs:name” across the XML files in the .qpkg/QKVM/usr/etc/libvirt/qemu directory. I made the same change as before, exited the editor, and booted the VM once again… This time, success! My ESXi 7.0u2 host booted fine and didn’t complain about the network. I went through the configuration and it is now up and running. The GUI still lists the NIC as Intel Gigabit Ethernet.
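For anyone following along, the whole loop looked roughly like this on my system (a sketch from my notes; the leading path to .qpkg depends on your NAS volume layout, so treat that as an assumption and adjust accordingly):

# cd into the libvirt config directory shipped with Virtualization Station
cd .qpkg/QKVM/usr/etc/libvirt/qemu

# map VM display names to their XML files / UUIDs
grep "qvs:name" *.xml

# edit the persistent definition: change <model type='e1000'/> to 'vmxnet3'
virsh edit 122d6cbc-b47c-4c18-b783-697397be149b

# confirm the model change stuck
virsh dumpxml 122d6cbc-b47c-4c18-b783-697397be149b | grep "model type"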

I’m reluctant to make any changes to the VM through the GUI at this time for fear of the NIC configuration being reverted, but I’m okay not using the GUI if it means being able to nest ESXi 7 on Virtualization Station for testing purposes.

**Update: While the ESXi 7.0u2 VM boots fine, I have been unable to actually add it to my vCenter Server. I tried running the VM on my physical ESXi host and was able to add it to vCenter there; I then powered down the ESXi VM and imported it into VS. The import worked, but the host showed as disconnected in vCenter. Next I tried importing vCenter into the virtualized ESXi host, but that won’t boot because VS isn’t presenting the VT-x flag even though I have CPU passthrough enabled. I’m still going to try to get this working, but I won’t have time to devote to troubleshooting VS for a couple of days.
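One thing worth checking before blaming vCenter is whether the nested ESXi host actually sees the hardware virtualization extensions. On a physical host I’d use the command below, and I’m assuming it behaves the same inside the nested install (a value of 3 means VT-x/AMD-V is present and enabled; anything lower matches the behaviour I’m seeing under VS):

# check whether ESXi sees hardware virtualization support (3 = present and enabled)
esxcfg-info | grep "HV Support"

On a stock KVM/libvirt host the usual fix would be setting <cpu mode='host-passthrough'/> in the domain XML (with nested virtualization enabled in the kvm_intel module), but I haven’t yet confirmed whether VS honours that when edited by hand.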