How to run nested ESXi 7 on QNAP’s Virtualization Station

**Important update at the end that should be read before you spend any time on this.**

This weekend I found myself in need of an additional ESXi host, so instead of acquiring new hardware I figured I might as well run it nested on my beefy QNAP TVS-h1288X with its Xeon CPU and 72GB of RAM. I already use the QEMU-based Virtualization Station (VS) for hosting my primary domain controller, and it's my go-to host for spinning up my ONTAP simulators, so I figured nesting an ESXi VM shouldn't be that difficult. What I hadn't taken into account, however, is that VMware has deprecated the VMKlinux driver stack, removing support for all of the NICs VS makes available in the GUI when provisioning new virtual machines. At first I researched injecting drivers or rolling my own installation ISO, but these approaches seemed overly complicated and their documentation somewhat outdated. Instead I decided to get inside VS and see what I could do from that angle; it is, after all, essentially QNAP's own packaging of QEMU.

I started the installation process, but it wasn’t long before I received this error message:

ESXi 7 No Network Adapters error message

I shut down the VM and changed the NIC type repeatedly, eventually exhausting the five possibilities presented in the VS GUI:

Not even the trusty old e1000 NIC, listed as Intel Gigabit Ethernet above, worked… Over to the CLI I went. Some Googling led me to believe there was a command that would produce a list of supported virtualized devices, but the commands I was finding were for native KVM/QEMU installs, not VS. Poking around, I came across the qemu-system-x86_64 binary; running it with the -device help parameter produced the following, abbreviated list:

./qemu-system-x86_64 -device help
[VL] This is a NROMAL VM
Controller/Bridge/Hub devices:
name "i82801b11-bridge", bus PCI
Network devices:
name "e1000", bus PCI, alias "e1000-82540em", desc "Intel Gigabit Ethernet"
name "e1000-82544gc", bus PCI, desc "Intel Gigabit Ethernet"
name "vmxnet3", bus PCI, desc "VMWare Paravirtualized Ethernet v3"
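The full listing is long, so it can help to trim it down to just the network section. A sketch, assuming the section headers match upstream QEMU's output format:

```shell
# Print only the "Network devices:" section of the device listing
# (binary path relative to the QKVM package directory, as above)
./qemu-system-x86_64 -device help 2>&1 | sed -n '/^Network devices:/,/^$/p'
```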

That last line was exactly what I was looking for; it suggested QEMU should be able to present a VMXNET3 network device. So I cd'd over to the .qpkg/QKVM/usr/etc/libvirt/qemu directory, opened the XML file associated with my ESXi VM, and changed the interface definition from this:

<interface type='bridge'>
      <mac address='00:50:56:af:30:fe'/>
      <source bridge='qvs0'/>
      <model type='e1000'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

to this:

<interface type='bridge'>
      <mac address='00:50:56:af:30:fe'/>
      <source bridge='qvs0'/>
      <model type='vmxnet3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

I saved the file and, for good measure, restarted VS. I booted the VM and received the same error message as above. This time I cd'd over to .qpkg/QKVM/var/run/libvirt/qemu and had a look at the XML file representing the running config of the VM; the NIC was still set to e1000. It took me a bit of hacking around to determine that to make this change persistent, I needed to edit the XML file using:

virsh edit 122d6cbc-b47c-4c18-b783-697397be149b

That last string of text is the UUID of the VM in question. If you're unsure of a given VM's UUID, simply grep "qvs:name" across the XML files in the .qpkg/QKVM/usr/etc/libvirt/qemu directory. I made the same change as before, exited the editor, and booted the VM once again… This time, success! My ESXi 7.0u2 host booted fine and didn't complain about the network. I went through the configuration and it is now up and running. The GUI still lists the NIC as Intel Gigabit Ethernet.
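Putting that together, the whole find-and-edit workflow looks something like the sketch below (paths as given above; the UUID shown is my VM's, yours will differ):

```shell
cd .qpkg/QKVM/usr/etc/libvirt/qemu

# Match each VM's display name to its definition file...
grep -H "qvs:name" *.xml
# ...then pull the UUID out of each matching file
grep -l "qvs:name" *.xml | xargs grep -H "<uuid>"

# Edit the persistent definition; change <model type='e1000'/> to 'vmxnet3'
virsh edit 122d6cbc-b47c-4c18-b783-697397be149b
```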

I’m reluctant to make any changes to the VM using the GUI at this time for fear of the NIC information changing, but I’m okay not using the GUI if it means being able to nest ESXi 7 on Virtualization Station for testing purposes.

**Update: While the ESXi 7.0u2 VM boots fine, I have been unable to actually add it to my vCenter Server. I tried running the VM on my physical ESXi host and was able to add it to vCenter; I then powered down the ESXi VM and imported it into VS. The import worked, but the host showed as disconnected from vCenter. Next I tried importing vCenter into the virtualized ESXi host, but that won't boot because VS isn't presenting the VT-x flag even though I have CPU passthrough enabled. I'm still going to try to get this going, but I won't have time to devote to troubleshooting VS for a couple of days.
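For reference, on stock libvirt the domain XML needs the host CPU (and with it the vmx flag) passed straight through for nested virtualization to work, along the lines of the fragment below; whether VS's CPU-passthrough toggle actually generates this, or quietly overrides it, is what I still need to verify.

```xml
<!-- Standard libvirt config for nested virtualization: expose the host CPU,
     vmx flag included, to the guest. Untested under VS. -->
<cpu mode='host-passthrough'/>
```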

Simulate a two node cDOT 8.2 cluster on ESXi 5.1 in 17 easy steps.

Here’s a quick how-to I wrote a few months back after running into trouble installing the cDOT simulator in an ESXi environment. This was done on 5.1 but should work for 5.0 and 5.5 as well.

  1. Load the VMware multiextent module with “/sbin/vmkload_mod multiextent”. (Add this to /etc/rc.local.d/local.sh so it gets loaded on boot going forward. It used to be loaded by default, but that changed in VMware 4.1; more here.)
  2. Create a new vSwitch to use for the Cluster Network.
  3. Download the 8.2 cDOT VMDK here.
  4. Extract the vsim_esx-cm.tgz archive and copy the resulting files to your datastore.
  5. Using vCenter, browse the directory on your datastore that contains the unarchived files above.
  6. Locate DataONTAP.vmx, right-click and choose “Add to Inventory.”
  7. Give it a name (cDOT 8.2 Cluster node 1), choose a host, click Finish. DO NOT POWER IT ON YET.
  8. Edit the properties of this newly created VM and make sure that the first two NICs (e0a and e0b) are on the cluster vSwitch.
  9. Power on the VM and open the console.
  10. Hit CTRL-C to enter the Boot Menu.
  11. Choose option 4, type “yes” to the two questions and wait for your disks to zero.
  12. Run through the cluster setup script, entering the licenses required (available here) when prompted. The only required one is the Cluster License itself, the rest can be added later.
  13. Repeat steps 4-8 from above, choosing a different name in step 7 (cDOT 8.2 Cluster node 2). You MUST repeat step 4, do NOT leverage cloning, it will NOT work.
  14. When you power up this VM, it is VERY important not to let it boot, so open the console right away and hit any key other than Enter to drop to the VLOADER prompt.
  15. Set and verify the SYS_SERIAL_NUM and bootarg.nvram.sysid as described on page 32, steps 10 and 11 in the Simulate ONTAP 8.2 Installation and Setup Guide.
  16. Type boot at the VLOADER prompt to boot the node.
  17. Repeat steps 10-13 from above, choosing to join an existing cluster and using the second set of licenses in the text file linked in step 12. ONTAP 8.2 introduced node-locked licensing, so it is important to use the right keys.
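Step 1 above can be scripted so the module load also survives reboots. A minimal sketch, assuming a stock /etc/rc.local.d/local.sh that ends in "exit 0" and a GNU-style sed (on ESXi's busybox shell you may prefer to edit the file by hand):

```shell
# Load the multiextent module now (the simulator's disks need it)
/sbin/vmkload_mod multiextent

# Persist it: insert the load before local.sh's trailing "exit 0",
# skipping the edit if the line is already present
grep -q multiextent /etc/rc.local.d/local.sh || \
  sed -i '/^exit 0/i /sbin/vmkload_mod multiextent' /etc/rc.local.d/local.sh
```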


You should now have a functioning, simulated two node cluster.