In my last post, I looked at how the ESXi installer may not create a scratch partition if it identifies the local disks as remote during the install. There, I suggested a check to confirm that you had a scratch partition set up.
However, after a bit more testing down the rabbit hole, it appears that isn't a definitive test. Before I explain why: to check that the ESXi host is using a persistent scratch location, run this instead:
vim-cmd hostsvc/advopt/view ScratchConfig.CurrentScratchLocation
If the value is null, i.e. value = "", then no persistent scratch location is set in the running configuration. Changing the ScratchConfig.ConfiguredScratchLocation value will load it after the next reboot (as per the instructions in my last post).
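If you want to script this check, the vim-cmd output includes a value = "…" line that can be parsed. A minimal sketch, assuming the output resembles the sample string below (on a real host you would pipe in the actual vim-cmd output instead):

```shell
# Hypothetical sample of the relevant line from:
#   vim-cmd hostsvc/advopt/view ScratchConfig.CurrentScratchLocation
sample_output='value = ""'

# Extract whatever sits between the quotes after "value = ".
val=$(printf '%s\n' "$sample_output" | sed -n 's/.*value = "\(.*\)".*/\1/p')

if [ -z "$val" ]; then
    echo "no persistent scratch location set"
else
    echo "persistent scratch location: $val"
fi
```

An empty value falls through to the first branch; a real path, e.g. a datastore .locker folder, would print on the second.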
The reason the locker.conf file isn't a definitive test is that ESXi can set the Configured value in several ways. If you use the vim-cmd method, it creates an entry in the locker.conf file (and creates the file if it doesn't already exist). However, if this file doesn't exist, ESXi goes on to check the following (from the KB):
2. A Fat16 filesystem of at least 4 GB on the Local Boot device.
3. A Fat16 filesystem of at least 4 GB on a Local device.
4. A VMFS Datastore on a Local device, in a .locker/ directory.
5. A ramdisk at /tmp/scratch/
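The fallback order above amounts to "take the first candidate that exists". As a rough sketch of that logic (the paths here are throwaway stand-in directories created for the demonstration, not the real ESXi device paths):

```shell
# Stand-in candidates in the KB's priority order; only some are created,
# mimicking a host with no FAT16 partition but a local VMFS .locker folder.
mkdir -p /tmp/scratch-demo/vmfs-locker /tmp/scratch-demo/ramdisk

for candidate in \
    /tmp/scratch-demo/fat16-boot \
    /tmp/scratch-demo/fat16-local \
    /tmp/scratch-demo/vmfs-locker \
    /tmp/scratch-demo/ramdisk
do
    if [ -d "$candidate" ]; then
        echo "scratch would land in: $candidate"
        break
    fi
done
```

Since the two FAT16 stand-ins don't exist, the loop settles on the VMFS .locker stand-in, with the ramdisk only ever reached as a last resort.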
I have found hosts where there is no locker.conf file, but because a 4GB FAT partition had been created during the initial install, ESXi uses that. In these cases there is no .locker directory; everything sits directly in the partition, which is mounted under /vmfs/volumes/ so as to be accessible by the VMkernel. Interestingly, in this configuration there is no symlinked datastore, so you won't see the volume in the vSphere client.
For hosts where the 4GB FAT partition doesn't exist but a local VMFS datastore is present, you can find that a .locker folder is created. You can see it from the vSphere client datastore browser. But remember that if you are using a POSIX-style console (like the vMA or the ESXi shell), then because the folder name starts with a period ("full stop" in real English :)), it will be hidden from a plain directory listing.
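To see the effect, compare a plain listing with ls -a. This is a generic POSIX example using a throwaway directory rather than a real datastore:

```shell
# Create a stand-in datastore directory containing a hidden .locker folder.
mkdir -p /tmp/ds-demo/.locker

# A plain listing omits dot-prefixed entries entirely:
plain=$(ls /tmp/ds-demo)
echo "plain ls shows: '$plain'"

# ls -a reveals them:
ls -a /tmp/ds-demo | grep locker
```

The same applies to any dot-prefixed path on the datastore, which is why the folder "disappears" the moment you drop from the datastore browser into a shell.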
Various changes were made to the scratch location handling during the 4.x cycle, which I guess is why ESXi has to check all of these candidate locations. When using vendor-specific images, it can also depend on how the vendor patched their master image before releasing it. So it's very difficult to pin down which versions are set up in which ways.
The interesting thing is that the existence of a scratch-specific partition does not categorically determine the persistence of scratch. ESXi can use a scratch folder and it will still be persistent across reboots. Only the fifth option above forces scratch onto a volatile ramdisk. So the correct terminology is "persistent scratch location". I for one welcome our new persistent scratch location nomenclature overlords…
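Put another way, the test for volatility is simply whether the current location sits on the /tmp/scratch ramdisk. A sketch of that classification, with a hypothetical datastore path standing in for the real vim-cmd output:

```shell
# Hypothetical value; on a real host this would come from
#   vim-cmd hostsvc/advopt/view ScratchConfig.CurrentScratchLocation
loc="/vmfs/volumes/datastore1/.locker"

case "$loc" in
    "")            echo "no scratch location configured" ;;
    /tmp/scratch*) echo "volatile: scratch is on a ramdisk" ;;
    *)             echo "persistent scratch at $loc" ;;
esac
```

Whether the path points at a dedicated FAT partition or a .locker folder on VMFS makes no difference here; both land in the "persistent" branch.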
Remember, though, that the moral of the first post still stands. Some servers' local disks are treated as non-local and therefore aren't configured with a persistent scratch location at all (even though a local VMFS volume is available). This inconsistency is something you want to check if you don't want surprises.