MemTrimRate for ESX VMs

A reader of vReference (hi Niall) got in touch a couple of months back to ask about the MemTrimRate setting in vmx files.  Unfortunately, I’ve been a bit busy of late and put this to one side.  Tonight I thought I’d do a bit of digging.

The vSphere Basic System Administration PDF for ESX4/ESX4i/vCenter4 has a section on Monitoring and Troubleshooting Performance. Under the Disk I/O section, it has several recommendations, including the following (page 279 of the latest version):

9. On systems with sizable RAM, disable memory trimming by adding the line MemTrimRate=0 to the virtual machine’s .VMX file.

This advice is included in the current Update 1 version of the guide, and it was also present in the pre-Update 1 documentation.

Now, the MemTrimRate setting was originally used in VMware’s hosted products (Player, Workstation, Server). I believe it was used to force the VM to hold on to real RAM on the physical machine, regardless of what problems that might impose on the host. My understanding is that it effectively stopped the balloon driver from reclaiming memory from the VM. This was useful when you had surplus physical memory, but it could cause big problems if limits were reached. As we all know from the ESX world, during contention the balloon driver is a much better option than the host forcing the guest’s memory to be swapped to disk.
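To make the documentation’s suggestion concrete, here is a minimal Python sketch that adds the entry to a .vmx file, since .vmx settings are stored as quoted key = "value" pairs. The datastore path is purely hypothetical, and you would only edit the file with the VM powered off (and a backup taken first).

    # Minimal sketch: add (or replace) the MemTrimRate entry in a .vmx file.
    # The path is hypothetical; power the VM off and back the file up first.

    VMX_PATH = "/vmfs/volumes/datastore1/myvm/myvm.vmx"  # hypothetical path

    def disable_mem_trim(vmx_path):
        """Rewrite the .vmx so it carries MemTrimRate = "0" exactly once."""
        with open(vmx_path) as f:
            lines = [line for line in f
                     if not line.strip().lower().startswith("memtrimrate")]
        lines.append('MemTrimRate = "0"\n')  # .vmx entries are quoted key = "value" pairs
        with open(vmx_path, "w") as f:
            f.writelines(lines)

    disable_mem_trim(VMX_PATH)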

What good would it do?

So, by stopping one (or more) VMs on a host from using the balloon driver, you avoid the constant adjustment and readjustment of their memory and prevent some disk I/O from occurring. All good as long as there is plenty of RAM for the host and all its VMs. If not, the VMs with MemTrimRate=0 will be forced by the host to swap, which carries a very detrimental performance hit.

So would you do this?

Well, if you can guarantee that the total amount of RAM allocated to all the VMs on a host, plus the Service Console memory, is less than the physical RAM, then I guess this is a valid configuration. I suppose this is what they mean by “on systems with sizable RAM”.
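To put some (entirely made-up) numbers on that, here is a rough back-of-the-envelope check in Python; it ignores per-VM virtualization overhead, so treat it as illustrative only.

    # Rough capacity check: configured VM RAM plus Service Console memory should
    # stay below the host's physical RAM. All figures are invented, and per-VM
    # virtualization overhead is ignored.

    physical_ram_mb = 65536                        # 64 GB host
    service_console_mb = 800                       # Service Console allocation
    vm_allocations_mb = [4096, 4096, 8192, 16384]  # configured RAM per VM

    committed_mb = service_console_mb + sum(vm_allocations_mb)
    headroom_mb = physical_ram_mb - committed_mb

    if headroom_mb > 0:
        print("Everything fits with %d MB to spare" % headroom_mb)
    else:
        print("Overcommitted by %d MB - expect the host to swap" % -headroom_mb)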

Perhaps, if you absolutely must eke out every last bit of horsepower from that ESX host, and that particular application really does rely so heavily on disk I/O.

However, I don’t honestly know many environments that categorically don’t change over time. That don’t grow and face some sort of creep. More VMs than the initial design. More resources added to existing VMs.

Additionally, most ESX hosts don’t operate alone. They’re normally part of dynamic clusters that change over time and allocate resources as needed. Which host a given VM runs on is usually a fairly nebulous variable.

As some experts point out, ESX performance issues are often caused by misconfigurations of the storage arrays. There are normally much more rewarding disk optimizations to be made (spindle count, anyone?). I can’t imagine I would ever resort to these measures.

I don’t care, I’m going to do it anyway

If you absolutely have to do this, I would recommend you set a memory reservation on each customized VM for the full amount of its RAM allocation. This would at least prevent additional VMs from being powered on and starving the host. (I would also ensure that host memory alarms were configured with appropriate alerts.)
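If you wanted to script that reservation rather than click through the vSphere Client, something along these lines would do it. This is only a sketch using the vSphere Python SDK (pyVmomi); the vCenter address, credentials and VM name are placeholders, and certificate handling and error checking are left out.

    # Sketch only: lock a VM's memory reservation to its full configured size.
    # Placeholders: vCenter address, credentials and VM name.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",
                      user="administrator", pwd="secret")
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "myvm")  # placeholder VM name

        spec = vim.vm.ConfigSpec()
        # Reserve the VM's entire configured memory so the host never has to
        # reclaim it, and new VMs can't eat into it.
        spec.memoryAllocation = vim.ResourceAllocationInfo(
            reservation=vm.config.hardware.memoryMB)
        vm.ReconfigVM_Task(spec)
    finally:
        Disconnect(si)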

Really, I mean really!

So is MemTrimRate now an applicable setting for vSphere 4? Was it always available? It wouldn’t be the first time I’ve found spurious documentation relating to other VMware products slipping into the ESX PDFs by mistake, as features find their way up into the enterprise product. Perhaps it’s just an oversight, and ESX4 will simply ignore the MemTrimRate setting.

I’ll let you know what results I find in the test lab.  Please leave a comment if you have some insight here and we’ll get to the bottom of it.

One thought on “MemTrimRate for ESX VMs”

  1. Very nice review! I had experience using custom vmx parameters to disable page sharing and ballooning for VMs on ESX 3.x, but I don’t think MemTrimRate was part of those strings. It was mainly for Citrix Terminal Server VMs that weren’t performing well in shared production (even though ESX didn’t show an overall performance impact on the other non-Citrix VMs), as opposed to the lab, i.e. 100% reserved RAM and a full CPU limit, or one VM per ESX host (quad dual-core CPUs, 64 GB RAM).
    /Mo
