**Important: there is an update at the end that should be read prior to wasting your time.**
This weekend I found myself in need of an additional ESXi host, so instead of acquiring new hardware I figured I might as well run it nested on my beefy QNAP TVS-h1288X with its Xeon CPU and 72GB of RAM. I already use the QEMU-based Virtualization Station (VS) for hosting my primary domain controller, and it’s my go-to host for spinning up my ONTAP simulators, so I figured nesting an ESXi VM shouldn’t be that difficult. What I hadn’t taken into account, however, is that VMware has deprecated the VMKlinux driver stack, removing support for all of the NICs VS makes available in the GUI when provisioning new virtual machines. At first I researched injecting drivers or rolling my own installation ISO, but those approaches seemed overly complicated and their documentation somewhat outdated. Instead I decided to get inside VS and see if I could do something from that angle; it is, after all, simply QNAP’s own packaging of QEMU.
I started the installation process, but it wasn’t long before I received this error message:
I shut down the VM and changed the NIC type over and over, eventually exhausting the five possibilities presented in the VS GUI:
Not even the trusty old e1000 NIC, listed as Intel Gigabit Ethernet above, worked… Over to the CLI I went. Some Googling on the subject led me to believe there was a command that would produce a list of supported virtualized devices, but the commands I was finding were for native KVM/QEMU installs and not intended for VS. I poked around, came across the qemu-system-x86_64 binary, and ran it with the parameter -device help, which produced the following abbreviated list:
./qemu-system-x86_64 -device help
[VL] This is a NROMAL VM
Controller/Bridge/Hub devices:
name "i82801b11-bridge", bus PCI
................<SNIP>
Network devices:
name "e1000", bus PCI, alias "e1000-82540em", desc "Intel Gigabit Ethernet"
name "e1000-82544gc", bus PCI, desc "Intel Gigabit Ethernet"
................<SNIP>
name "vmxnet3", bus PCI, desc "VMWare Paravirtualized Ethernet v3"
................<SNIP>
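As an aside, you can filter that listing instead of scrolling through it; a minimal sketch, assuming you run it from the same directory as the qemu-system-x86_64 binary inside the QKVM package:
# Show only the NIC models that VS's bundled QEMU knows about.
./qemu-system-x86_64 -device help | grep -iE 'vmxnet|e1000'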
That vmxnet3 entry was exactly what I was looking for; it led me to believe that QEMU should be able to support the VMXNET3 network device. So I cd’d over to the .qpkg/QKVM/usr/etc/libvirt/qemu directory, opened up the XML file associated with my ESXi VM, and changed the following section:
<interface type='bridge'>
  <mac address='00:50:56:af:30:fe'/>
  <source bridge='qvs0'/>
  <model type='e1000'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
to:
<interface type='bridge'>
  <mac address='00:50:56:af:30:fe'/>
  <source bridge='qvs0'/>
  <model type='vmxnet3'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
I saved the file and, for good measure, also restarted VS. I booted the VM and received the same error message as above. This time I cd’d over to .qpkg/QKVM/var/run/libvirt/qemu and had a look at the XML file representing the running config of the VM; the NIC was still set to e1000. It took me a bit of hacking around to determine that, in order to make this change persistent, I needed to edit the XML using:
virsh edit 122d6cbc-b47c-4c18-b783-697397be149b
That last string of text is the UUID of the VM in question. If you’re unsure of a given VM’s UUID, simply grep “qvs:name” across the XML files in the .qpkg/QKVM/usr/etc/libvirt/qemu directory. I made the same change as before, exited the editor, and booted the VM once again… This time, success! My ESXi 7.0u2 host booted fine and didn’t complain about the network. I went through the configuration and it is now up and running. The GUI still lists the NIC as Intel Gigabit Ethernet.
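Putting the lookup and the edit together, here is a minimal sketch of the sequence, run from the .qpkg/QKVM/usr/etc/libvirt/qemu directory; the VM name in the sample output is made up for illustration:
# The persistent definitions are named after each VM's UUID; qvs:name reveals which is which.
grep qvs:name *.xml
# e.g. 122d6cbc-b47c-4c18-b783-697397be149b.xml:  <qvs:name>esxi-nested</qvs:name>
# Edit the persistent definition through libvirt so the model change actually sticks.
virsh edit 122d6cbc-b47c-4c18-b783-697397be149b
# After booting the VM, confirm the live config picked up vmxnet3.
virsh dumpxml 122d6cbc-b47c-4c18-b783-697397be149b | grep "model type"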
I’m reluctant to make any changes to the VM using the GUI at this time for fear of the NIC information changing, but I’m okay not using the GUI if it means being able to nest ESXi 7 on Virtualization Station for testing purposes.
**Update:** While the ESXi 7.0u2 VM boots fine, I have been unable to actually add it to my vCenter Server. I tried running the VM on my physical ESXi host and was able to add it to vCenter; I then powered down the ESXi VM and imported it into VS. The import worked, but the host then showed as disconnected in vCenter. Next I tried importing vCenter into the virtualized ESXi host, but it won’t boot because VS isn’t presenting the VT-x flag even though I have CPU passthrough enabled. I’m still going to try to get this going, but I won’t have time to devote to troubleshooting VS for a couple of days.
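For anyone who wants to poke at the VT-x side, the stock libvirt way of passing the host CPU (including the vmx flag) straight through is the element below; I haven’t confirmed whether VS honors or silently rewrites it, so treat it purely as something to try via virsh edit:
<!-- Hypothetical tweak, untested on VS: expose the host CPU, vmx flag included, to the guest. -->
<cpu mode='host-passthrough' check='none'/>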
Not sure if you’ve seen this:
https://www.qnap.com/en/how-to/faq/article/how-to-enable-intel-vtx-and-amd-svm
…I’m going through the same challenges you are currently. Trying to demo/evaluate Proxmox & ESXi and couldn’t get QVS to pass the VT-x flag through on my TS-453Be…
Unfortunately I had already gone down that avenue; VT-x was already enabled by default on my TVS-h1288X, and I haven’t done any further digging. If you figure it out, please let me know.
Thanks Chris – your post got me further, as I am having some of the same issues:
I had a 6.7 VM already on my VC and had issues with the NIC losing connection when running a nested Windows guest; this would fault the NIC at the ESX level.
Couple of things so far:
Need to set the qvs:nics section to vmxnet3 as well (via virsh edit).
I don’t need to restart Virtualization Station, but as you mentioned the GUI does appear to reset the VM descriptors, so stay out of that.
I had a 6.7 VM already connected to my VC and have the same issue adding the vmxnet3 to that VM: it could not connect to the VC once I changed the NIC type. However, the ESX host was directly reachable via web/443, so I was able to get some logs.
What I find is the uplink bindings failing:
2021-07-31T07:32:25Z vmkdevmgr[2097590]: Failed to set driver for 0x781b4305135c823e: Unable to complete Sysinfo operation. Please see the VMkernel log file for more details.: Sysinfo error: Not foundSee VMkernel log for details.
2021-07-31T07:32:25Z vmkdevmgr[2097590]: Error binding driver vmkernel for bus=logical addr=pci#s00000006.00#0 id=com.vmware.uplink
2021-07-31T07:32:25Z vmkdevmgr[2097590]: ADD event for bus=logical addr=pci#s00000007.00#0 id=com.vmware.uplink.
2021-07-31T07:32:25Z vmkdevmgr[2097590]: Found driver vmkernel for device bus=logical addr=pci#s00000007.00#0 id=com.vmware.uplink.
2021-07-31T07:32:25Z vmkdevmgr[2097590]: GetPciAncestorAlias: generating crosscall for deviceID 0x11a4305135c6b48addr s00000007.00 id 15ad07b015ad07b0020000
2021-07-31T07:32:25Z vmkdevmgr[2097590]: Set alias for device 0x42db4305135c83b8
2021-07-31T07:32:25Z vmkdevmgr[2097590]: Failed to set driver for 0x42db4305135c83b8: Unable to complete Sysinfo operation. Please see the VMkernel log file for more details.: Sysinfo error: Not foundSee VMkernel log for details.
2021-07-31T07:32:25Z vmkdevmgr[2097590]: Error binding driver vmkernel for bus=logical addr=pci#s00000007.00#0 id=com.vmware.uplink
The HBA binding also breaks further down in the logs (I am using NFS and cannot browse the datastores from the web interface; note that on my setup the storage vmk is on the same NIC as the management vmk), and I am pretty sure this will stop VC from connecting to the ESX host. In any case, I am trying to find a solution for the initial uplink binding failure. What I keep coming back to in my Googling is a requirement for , however I have been unable to get this to work using the QNAP schema. I think I will next try to prove whether the IOMMU is causing this by installing QEMU on my laptop and seeing if I can successfully run ESX 6.7 or 7.0.x with vmxnet3 as a QEMU VM and join it to my VC.
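For reference, that laptop test could look roughly like the sketch below under plain KVM; the image names, sizing, MAC address and br0 bridge are all placeholders rather than values from this thread:
# Hypothetical stand-alone test: boot the ESXi installer under KVM with a vmxnet3 NIC.
# The bridge backend assumes a configured br0; user-mode networking also works for a first boot.
qemu-img create -f qcow2 esxi-disk.qcow2 40G
qemu-system-x86_64 \
  -enable-kvm -machine q35 -cpu host -smp 4 -m 8192 \
  -device vmxnet3,netdev=net0,mac=00:50:56:00:11:22 \
  -netdev bridge,id=net0,br=br0 \
  -drive file=esxi-disk.qcow2,format=qcow2,if=ide \
  -cdrom esxi-installer.iso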
ChrisB
Great info. I was able to install ESXi 6.7 as a VM on my TS-451+ and get it into the VC. I had changed the NIC to vmxnet3 using “virsh edit”, in the section:
But when I changed the type in the section, I had trouble staying connected and would lose the connection after a short period of time, so I left those types as e1000 and it seems to work better that way.
However, I’ve noticed that when I restart the NAS, all the NICs in both sections revert back to e1000. Does anyone know a way to make that change permanent?
jjh
Hi, thank you for your article. I got through the issues related to the CPU and NIC, but now I’m stuck on the storage page… ESXi asks where to install – local/remote – and the list of drives is empty. How did you manage this? I tried adding SCSI / IDE / SATA disks, but the list still shows empty drives 🙁
I got this to work; however, the virsh change doesn’t persist at all. How do I make it permanent across reboots? Also, I used SATA and the installer found the disk fine.
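If it helps to see where that lands in the libvirt definition, a SATA-attached disk is normally declared along these lines; the image path is a placeholder and, as with the NIC, I can’t say whether the VS GUI will preserve a hand-edited bus:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <!-- Placeholder path; point it at wherever VS keeps the VM's disk image. -->
  <source file='/path/to/esxi-disk.qcow2'/>
  <target dev='sda' bus='sata'/>
</disk>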
Following this original guide, using virsh but instead changing the type to rather than vmxnet3 also works, and the network is available. Enjoy :). It also works for ESXi 8.