ESXi disks must be "considered local" for scratch to be created

A new KB was released yesterday (http://kb.vmware.com/kb/1033696), in which I noticed something interesting.

"ESXi Installable creates a 4 GB Fat16 partition on the target device during installation if there is sufficient space, and if the device is considered Local."

This made me prick up my ears, as only a couple of weeks ago I was having problems using a kickstart script to deploy ESXi to some HP DL580 G7 servers.  This issue arose because the ESXi installer considered the local disk controller as non-local.

As an aside: to get around this kickstart issue, I had to add "remote" to the firstdisk option on the autopart line, so it ended up looking like this:

autopart --firstdisk=local,remote --overwritevmfs

Basically, this tells the installer to try the first local disk and, if it can't find one, to go for the first remote disk. Clearly this increases the chances of accidentally wiping a SAN LUN, but as the site had migrated to NFS only, I wasn't too concerned.
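For context, the relevant part of the ks.cfg ended up looking roughly like the following. This is a sketch only, using ESXi 4.1 scripted-install syntax; the password and install URL are placeholders, not the actual values from that site:

accepteula
rootpw MySecretPassword
# Try the first local disk; fall back to the first remote disk
autopart --firstdisk=local,remote --overwritevmfs
install url http://deploy.example.com/esxi41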

So I had a quick check of a few ESXi hosts that I had rolled out recently, and sure enough no scratch partition had been created. This was unexpected behaviour, as the hosts did indeed have local spinning disks with enough space (4 GB free) for the scratch partition to be created during the install. This means there is no persistent scratch area; the scratch is instead created on a volatile ramdisk, which eats a bit of your host's memory and means the scratch contents don't survive a reboot. After further investigation I found this was also true on some DL380 G6 servers, but not on some DL380 G5 servers. It seems this is something you want to go and check yourself on a case-by-case (RAID controller-by-RAID controller) basis.

To check whether a host has a scratch partition, log in via the TSM and run:

cat /etc/vmware/locker.config

EDIT – see here for an update.
If the file is blank, then no scratch is configured.

Here it is without a scratch partition:
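A recreated transcript, for illustration (prompt and output as you would see them in the TSM) – the file simply comes back empty:

~ # cat /etc/vmware/locker.config
~ #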

And here it is with a scratch partition created by the installer:
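Again a recreated transcript for illustration – here the file holds the configured scratch location. The datastore UUID and directory name below are made up; yours will differ:

~ # cat /etc/vmware/locker.config
/vmfs/volumes/4d8008a2-9940968c-04df-001b21857b2c/.locker-esxi01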

To create a scratch partition for these servers on their local "non-local" disks, follow the steps in the KB. You can do this after deployment via the vSphere Client, the vCLI, PowerCLI, or the TSM.

Here is an outline of doing it at the TSM (a worked example follows the steps):
1.  Create a directory on the local VMFS volume
2.  Run vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/DatastoreName/DirectoryName
3.  Reboot the host
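Put together at the TSM prompt, it looks something like this. The datastore and directory names are examples only – substitute your own:

~ # mkdir /vmfs/volumes/datastore1/.locker-esxi01
~ # vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-esxi01
~ # reboot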

The KB also details how to add this configuration to your kickstart files for future deployments or rebuilds.
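As a rough sketch of what that could look like in a ks.cfg (assuming a local datastore named datastore1 and ESXi 4.1's %firstboot syntax – check the KB for the definitive version):

%firstboot --unsupported --interpreter=busybox
# Create the scratch directory on the local VMFS volume
mkdir /vmfs/volumes/datastore1/.locker-esxi01
# Point scratch at it; the setting takes effect after the next reboot
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-esxi01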

3 thoughts on "ESXi disks must be "considered local" for scratch to be created"

  1. Also note that if the scratch partition has to go to ramdisk, only a 512 MB partition (and not 4 GB) will be created – that 512 MB is taken out of the server's available physical memory.
    Cheers,
    Didier

  2. Newer HP Smart Array drivers show as remote because there is no differentiation in the driver between an internal-only card (P410i, etc.) and the cards that support external shared SAS arrays (P412 / P812 / P711m / etc.).

    Thus Smart Array has become “remote” as far as ESXi is concerned.
