Upgrade a Mac Pro 6,1 running VMware ESXi 6.0 to 6.5. Here’s the rub: the machine has sat on the shelf for the past two years, quietly running the hypervisor, not getting firmware updates, “collecting zero-days along the way” (as one Mr. Pepijn Bruienne succinctly put it).
ESXi 6.5 doesn’t install nicely on the 6,1 at the time of writing. William Lam of Virtually Ghetto has, as usual, provided great information and easy-to-find links to VMware documentation on how to work around the installation issues.
But – uh oh – VMware’s documentation lists a minimum firmware version of B17, and I’m running B05. So, after wrinkling up my nose, trying anyway, and waiting… hoping… for a boot that would never complete, I resigned myself to the fact that VMware must indeed have listed this requirement for a reason.
So, this should be easy, right? Apple provides firmware updates bundled in the OS upgrades these days, so the worst-case scenario should look something like “install 10.12.3 on an external drive, and apply the combo updater to bring it up to 10.12.4”. That should kick off the firmware updater, and voila, we’ll be patched.
Without boring everyone with the details of how hard I tried to get this process to work, I finally – through trial and error – realized that the Mac Pro 6,1 appears to REQUIRE booting off the internal SSD in order to process firmware updates. Maybe this is a nuance of B05, maybe it’s a design *cough* feature of the trash-can, but it consistently ignored every staged EFI upgrade when it was booted from an external drive.
Since I run two of these Mac Pro Grouch-houses for our campus deployment system, I was able to develop the process on the first, and reproduce on the second.
The dance looked like this:
- Boot your ESXi-configured Mac Pro “Can’t innovate any more, my ass” model in target disk mode. Hook it up to a Thunderbolt-equipped machine with more free space than the drive size of the Mac Pro.
- Open Terminal and type “diskutil list”. Look for the Mac Pro’s volumes – there will be a device with several volumes labeled “Untitled”, “Untitled 2”, etc. Make a note of the device identifier (it’ll be something like /dev/disk2).
- Unmount the disk using the device identifier discovered above –
$ sudo diskutil unmountDisk /dev/disk2
- Back up your ESXi machine’s entire disk. I used “dd” for this –
$ sudo dd if=/dev/rdisk2 of=ESXiBackup.img bs=32768
(yes, I used ‘/dev/rdisk2’ – the raw device bypasses the buffer cache and is noticeably faster than ‘/dev/disk2’, but you’ll still have time to fetch a beverage while this runs)
- Wipe the internal drive on the Mac Pro and install an older version of 10.XX – this will compel an OS (and firmware) update. The older release may not actually be needed – Software Update may offer the firmware update anyway, but I did not test that. Check your work by verifying the firmware version in System Profiler.
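You can also check the firmware version from the shell instead of clicking through System Profiler. A hedged sketch – system_profiler is macOS-only, and I’m assuming the 10.12-era field label “Boot ROM Version” (I haven’t verified the label on every release):

```shell
# Guard for non-macOS systems, where system_profiler doesn't exist.
if command -v system_profiler >/dev/null 2>&1; then
    # Pull just the Boot ROM line out of the hardware overview
    fw=$(system_profiler SPHardwareDataType | awk -F': *' '/Boot ROM/ {print $2}')
else
    fw="unavailable (system_profiler is macOS-only)"
fi
echo "Boot ROM version: $fw"
```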
- Now we can reverse the backup step and put the saved ESXi image back on the machine –
$ sudo dd if=ESXiBackup.img of=/dev/rdisk2 bs=32768
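Taken together, the backup and restore form a round trip you can verify. Here’s a hedged sketch using scratch files in place of /dev/rdisk2 so it’s safe to try anywhere; on the real machine you’d substitute the raw device node:

```shell
# Stand-in for /dev/rdisk2 so this sketch is safe to run anywhere.
SRC=./scratch-disk.img
dd if=/dev/urandom of="$SRC" bs=1024 count=64 2>/dev/null   # fake "device"

# Back up the "device" to an image, exactly as in the steps above
dd if="$SRC" of=ESXiBackup.img bs=32768 2>/dev/null
cksum "$SRC" ESXiBackup.img     # the two checksums should match

# Restore the image to a second stand-in and confirm nothing was lost
DST=./restored-disk.img
dd if=ESXiBackup.img of="$DST" bs=32768 2>/dev/null
cmp "$SRC" "$DST" && echo "round trip verified"
```

Running cksum on the image right after the backup, and again before the restore, is cheap insurance that the image survived the firmware dance intact.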
OK – the firmware is up to date at this point. Now we can follow the instructions in the VMware documentation and upgrade the hypervisor OS, right?
Not so fast there, cowboy.
The very bad joke
You see, when we mounted Trashy McTrashFace in target disk mode, Spotlight instantly started indexing the drives and created an index directory at the root of each of the four (in my case) volumes on the virtual machine host. This folder then interferes with the ESXi installer.
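In hindsight, this might be preventable. Spotlight reportedly skips any volume whose root contains a file named ‘.metadata_never_index’ – treat that as an assumption for macOS of this vintage. A sketch, using a scratch directory to stand in for one of the mounted /Volumes/Untitled* roots:

```shell
# Scratch stand-in for a mounted ESXi volume like "/Volumes/Untitled".
VOLROOT=./fake-volume
mkdir -p "$VOLROOT"

# An empty .metadata_never_index at the volume root tells Spotlight not
# to index the volume (assumption: honored on this macOS vintage).
touch "$VOLROOT/.metadata_never_index"
ls -a "$VOLROOT"
```

You could also try ‘sudo mdutil -i off’ per volume, though either way you’re racing Spotlight, which starts indexing as soon as the volumes mount.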
My guess is that when you run the ESXi 6.5 installer and ask it to upgrade the system (keeping the current config), the installer removes the existing OS, then checks that the volume is in an expected state (empty) before it lays down the new one. Since Spotlight created a folder called “.Spotlight-V100”, the installer finds a non-empty target and grinds to a halt as if it had just found a nematode in its fish dinner.
When you reboot and attempt to re-run the installer, you no longer have the option to upgrade – instead you can only install a fresh instance of the hypervisor OS. This destroys your existing configuration – accounts, network config, storage config, VM registrations (but not the actual VMs). While not catastrophic, it is still a huge pain in the butt because you have to reconfigure everything, locate and re-register your VMs, and potentially modify the storage devices on each VM, since the UUID changes on any external storage mounts.
So, how do we solve this?
These Spotlight folders are benign to the existing ESXi installation – it isn’t bothered by them at all. This gives us the opportunity to boot the not-yet-upgraded hypervisor, delete these folders prior to upgrading, and save our configuration.
Booting ESXi, enabling the local shell, and then dropping into the console (Google it for instructions – it’s easy to find) gives us the ability to run a simple ‘find’ command to locate any of these pesky search-index folders.
$ find / -name '.Spotlight*'
We can club them in turn with our good friend ‘rm -r’ (the ESXi version of find doesn’t appear to have a -delete option).
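The find-then-club pass can be strung together in one loop. A sketch against scratch directories (on the host you’d search from / as above, and I’m assuming the ESXi busybox find supports -prune; if not, drop it and ignore the “No such file” noise):

```shell
# Scratch volumes standing in for the datastore roots on the host.
mkdir -p vol1/.Spotlight-V100 vol2/.Spotlight-V100

# -prune stops find from descending into a folder we're about to remove;
# the read loop feeds each match to rm -r.
find . -name '.Spotlight*' -prune -print | while read -r d; do
    rm -r "$d"
done

find . -name '.Spotlight*'   # should now print nothing
```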
NOW you can follow the remainder of the instructions at VMware; the system will upgrade in place, and your configuration will stick around.
And, you’ll find your way home much earlier. On a Saturday. With less cursing. Trust me.