VMware API error: “Customization of the guest operating system ‘rhel6_64Guest’ is not supported in this configuration”

While using VMware’s API and pyVmomi to clone templates into a VM, a couple of times I have run into this error:

Cannot deploy template: Customization of the guest operating system ‘rhel6_64Guest’ is not supported in this configuration. Microsoft Vista (TM) and Linux guests with Logical Volume Manager are supported only for recent ESX host and VMware Tools versions. Refer to vCenter documentation for supported configurations.

For me this happened when (A) I was cloning a RHEL 64-bit template (this occurred both with RHEL 6 and RHEL 7 but for entirely different reasons on each), and (B) I was using a customization specification. We had just stood up a new datacenter and I was laying the groundwork for creating masses of virtual machines from a command-line. A Linux command-line, mind you. Not from PowerCLI or PowerShell (which are certainly powerful, but hey, I’m a Linux guy).

In my case, the error had nothing to do with logical volumes (I suspect that’s true for most people who have run across this message). I have a script that has been happily cloning 64-bit LVM-based RHEL 6 templates in multiple vCenters that are on versions between 5.0 and 5.5. Since my customization spec was setting network parameters, I suspected that’s where the root of this problem was. So, before you go chasing that wild goose of LVM incompatibility, or telling vCenter your template is RHEL 5, I have a couple of recommendations.

In most of these cases, you’ll need to convert your template to a VM and power it up, then make the changes, shut it down, and convert it back to a template.

1. If this is happening with RHEL 7, make sure the vCenter is a new enough version to support it

One of our vCenters had not been updated in some time.  When I tried cloning the same RHEL 7 template from an updated vCenter (and with the correct VMware Tools installed), all went well.

2. If this is happening with RHEL 6, make sure your version is 6.4 or later

There were some updates in 6.4 that greatly improved RHEL’s interaction with VMware’s hypervisor, including at the NIC level.

3. Install VMware tools on the template VM

This is the biggest source of problems.  If you’re using customization specifications during cloning, VMware cannot do the work unless the tools are installed in the image from which you’re cloning.  I like using the “3rd party” tools that you can get from VMware’s own repo. You can clone that repo into your own, or just use theirs. You can even bring it into Satellite or Spacewalk, either as an external repo or a software channel. Or maybe you like doing it the old-fashioned way and installing them from vCenter. Bottom line – install those tools!

For RHEL 6 that means “vmware-tools-plugins-deployPkg” has to be present.  It will be if you simply install “vmware-tools-esx-nox” (I also include vmware-tools-esx-kmods).

For RHEL 7, you need open-vm-tools (which comes standard with RHEL 7) and open-vm-tools-deployPkg (which comes from VMware).  Note: If you’re using CentOS 7, open-vm-tools-deployPkg requires open-vm-tools < 9.5.  This created issues for me trying to install in CentOS 7.2, which ships only 9.10.  The solution?  Disable all repos except vmware-tools and C7.1.1503-base from the “vault” when you do the installation.
yum --disablerepo="*" --enablerepo=vmware-tools --enablerepo=C7.1.1503-base install open-vm-tools-deployPkg

By the way, the symptoms of this in RHEL 7?  Aside from the customization error, you’ll have network cards named ens32 or ens33 in the template, but ens160 in the clone.  And obviously, networking will be broken.

Also, get rid of the udev stuff that sometimes messes up your NIC settings

This applies more to RHEL 6.  If you clone from a template, VMware will naturally give the new machine a new MAC address for any NICs in the template. In a single-NIC configuration, when udev sees this, it says, “Hey, this isn’t the MAC for eth0, so let me make a configuration for eth1 using this new MAC and UUID.” You can help yourself here by doing the following (per VMware’s advice):

- remove /etc/udev/rules.d/70-persistent-net.rules

- remove /lib/udev/rules.d/75-persistent-net-generator.rules

You might not need to remove /lib/udev/rules.d/75-persistent-net-generator.rules, but I don’t think it’ll hurt. When I did these things to my RHEL 6 template, I was able to clone just fine using the API again.
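If you do this often, the two removals are easy to script.  Here is a minimal Python sketch; the optional root prefix is my own convenience (not part of VMware’s advice) so the function can be exercised against a copy of a filesystem rather than the live one:

```python
import os

# Files VMware recommends removing before converting a VM back to a template
UDEV_RULES = [
    "/etc/udev/rules.d/70-persistent-net.rules",
    "/lib/udev/rules.d/75-persistent-net-generator.rules",
]

def remove_persistent_net_rules(root=""):
    """Remove the udev persistent-net rules, returning the paths removed.

    The root prefix is a hypothetical convenience for testing; on a live
    template VM you would call this with no argument (as root)."""
    removed = []
    for rule in UDEV_RULES:
        path = os.path.join(root, rule.lstrip("/")) if root else rule
        if os.path.exists(path):
            os.remove(path)
            removed.append(path)
    return removed
```

Run it as root on the powered-up template VM right before shutting it down and converting it back to a template.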

Working with VMware’s API via Python

(Update – Sept. 2, 2015)
In the community samples, someone wrote a nice function to get different objects; see the update further down.

As part of streamlining VM creation, I began working with VMware’s API.  The first thing I learned is that there are multiple API implementations.  Until recently, while VMware had their own Python API, they kept it private and did not officially support it.  That prompted folks to write their own bindings and share them via Google Code in a project called pySphere (https://code.google.com/p/pysphere/).  In early 2014, however, VMware did release their bindings to the public (https://github.com/vmware/pyvmomi).

Wanting to go the official route, I downloaded pyVmomi and got to work.  From the beginning I noticed that this API is very thorough and robust.  The downside of that is that it is also very complex and can sometimes be difficult to know just how to get and do what you need.  Fortunately there are many good samples included in the code.  So, while it takes a lot of effort to get things just right, once you get the objects and methods sorted out, it works well.  And fortunately the documentation (vmware python API doc), if you can stay awake while you wade through it, will tell you what you need.

I won’t go into how I did each and every thing.  The goal of this document is to give the reader a decent understanding of how to create and manipulate objects, as well as how to make the method calls.

What to import

The basic script should have the following items.

  • from pyVmomi import vim – this module contains what we need to create objects and make calls
  • from tools import tasks – if we’re going to create anything in vCenter, this module allows us to designate the creation process as a task, then wait for it to finish
  • from pyVim import connect – needed to login
  • import atexit – this module lets us register actions to run when the script exits (such as logging out)
  • import argparse – if we’re going to accept command-line input, this module greatly simplifies managing that input
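As a concrete illustration of the argparse piece, here is a minimal parser for such a script.  The flag names are my own choices for this sketch, not anything mandated by pyVmomi:

```python
import argparse

def build_parser():
    # The connection details every pyVmomi script needs, plus a VM name
    parser = argparse.ArgumentParser(description="Clone a template into a VM")
    parser.add_argument("-s", "--host", required=True, help="vCenter to connect to")
    parser.add_argument("-u", "--user", required=True, help="username for vCenter")
    parser.add_argument("-p", "--password", required=True, help="password for vCenter")
    parser.add_argument("-n", "--name", required=True, help="name for the new VM")
    return parser

# Parse a sample command line; a real script would call parse_args()
# with no arguments so it reads sys.argv instead.
args = build_parser().parse_args(
    ["-s", "vcenter01", "-u", "admin", "-p", "secret", "-n", "web01"])
```

You would then hand args.host, args.user, and args.password to pyVim’s connect routines when logging in.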

Building objects

To get information (such as a list of VMs) or do things (such as create a VM), the system requires a session to be established and objects to be passed to the method calls.  Establishing a session is as simple as logging in, which is covered in the sample code included in the project, so I won’t cover that here.  The samples refer to the session as a service instance, shortened to “si”, and that is how you will see the service-instance object referred to in this document.

A few basic objects

Once we’re logged in we need a few fundamental objects from which we can build others.  For my scripts, I use these:

  • atexit.register(connect.Disconnect, si)
  • content = si.RetrieveContent()
  • datacenter = content.rootFolder.childEntity[0]
  • hosts = datacenter.hostFolder.childEntity

These are mostly self-explanatory. The second item, “content,” is essentially “all the things” in this vCenter.  From those we get access to pretty much any object that the API allows us to reference.

The fun begins

The challenge began when I wanted to create a new VM by cloning a template.  While the API is, as I said, very thorough and complex, the documentation is equally so.  But it’s also confusing at times.  I’ll demonstrate using parts of the cloning process.

While there is a method called CloneVM_Task, that is the wrong call if you want your VM placed in a certain datastore cluster but do not want to specify which store it lands in, because CloneVM_Task forces you to name the exact datastore for your new VM.  The proper way to do that is with a combination of RecommendDatastores and ApplyStorageDrsRecommendation_Task.

This method requires one object (called a parameter in the doc).  While the documentation page lists two parameters (_this and storageSpec), the first of those is really a parent object against which we run the method.  I’ll elaborate on that in a second.  But first let me point out that any parameter named “_this” in the documentation means you call the method in question by appending it to an object to which “_this” refers.  Demonstration:

RecommendDatastores lists two parameters: _this and storageSpec.  We actually call it like so:
storage_rec = content.storageResourceManager.RecommendDatastores(storageSpec=storagespec)
So, “_this” gets translated into “content.storageResourceManager”, and we call the method with storageSpec, an object we have to have already built.
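That calling convention can be mocked up in plain Python (none of these classes are the real API; they just mimic its shape):

```python
class StorageResourceManager(object):
    """Mock of the managed object the docs refer to as _this."""
    def RecommendDatastores(self, storageSpec=None):
        # The real method asks vCenter for placement recommendations;
        # here we just echo what we were given.
        return "recommendations for %s" % storageSpec

class Content(object):
    """Mock of the service content object."""
    def __init__(self):
        self.storageResourceManager = StorageResourceManager()

content = Content()
# "_this" is content.storageResourceManager; storageSpec is the only
# argument we actually pass by name.
result = content.storageResourceManager.RecommendDatastores(storageSpec="spec")
```

In other words, the documented “_this” parameter is never passed explicitly; it is the object you invoke the method on.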

Now, how the heck would you have known that?  Well, you wouldn’t intrinsically.  There are three ways to find out: you figure it out from the included samples, you find it in samples on the discussion lists for pySphere (because pySphere is pretty close to pyVmomi), or you catch a clue from the documentation page.  In this case, in the left pane of the documentation the method lists its parent namespace in parentheses, i.e. “RecommendDatastores (in StorageResourceManager)”.

But you might not have guessed it is called under the “content” object without further digging or experimentation.  Plainly put, it just takes some intuition and hacking sometimes to get this right.

And that’s just the tip of the iceberg.  You could be forgiven for thinking, “Oh, this only has one object.  No worries.”  Why?  Because “storageSpec” contains a lot of other objects.  And many of those contain other objects, etc.  And many of those objects must first be instantiated (a decidedly object-oriented programming term) by a method before they can be used.

Ready to quit yet?  Don’t.  There’s a good payoff to sticking with this stuff.  At the least it will stretch your mind and deepen your understanding of how VMware works under the hood.  And what bit-head doesn’t want that?

Looking at storageSpec, it wants an object of type “StoragePlacementSpec”.  As we drill into that, we see this object consists of several other objects.  And, as I said, some of them have many more objects in their definition.  Let’s pick a somewhat simple one: podSelectionSpec.  This particular data type (StorageDrsPodSelectionSpec) tells VMware which datastore cluster we want to use for the new VM.  Of the two objects this data type uses, we are only interested in the second one, storagePod.

Unfortunately, because these are custom data types, we cannot simply say
storagePod = “MyStoragePod” and expect the API to intuit what we want.  In place of “MyStoragePod” we need an object of type StoragePod that points to the pod named “MyStoragePod”.  In our case, I took a bit of code from the samples that creates a view of the storage pods, parsed through that list comparing each pod name to our target (“MyStoragePod”), and assigned the match to a variable named “pod”.  So, below, first we create a view, then we parse through the view until we find a match, and finally we destroy the view so we do not consume too many resources in the vCenter.

 obj_view = content.viewManager.CreateContainerView(content.rootFolder, [vim.StoragePod], True)
 ds_cluster_list = obj_view.view
 for ds in ds_cluster_list:
     if ds.name == "MyStoragePod":
         pod = ds
 obj_view.Destroy()
 # a blank object of type PodSelectionSpec
 podsel = vim.storageDrs.PodSelectionSpec()
 podsel.storagePod = pod

And thus we have our pod selection.  To add that to our primary object, a StoragePlacementSpec data type, I create the blank object, then add to it:

 # a blank object of type StoragePlacementSpec
 storagespec = vim.storageDrs.StoragePlacementSpec()
 storagespec.podSelectionSpec = podsel

So, in short, we first create a blank parent object (similar to initializing a variable in other languages), then we assign the child object to be a part of the parent object.  Again, the tricky part is finding a way to identify those child objects.
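The blank-object-then-fill pattern works because the spec classes are essentially attribute containers.  To see the shape without any vCenter at hand, here are plain-Python stand-ins (these classes only mimic the vim types; they are not pyVmomi):

```python
class PodSelectionSpec(object):
    """Stand-in for vim.storageDrs.PodSelectionSpec; starts out blank."""
    def __init__(self):
        self.storagePod = None

class StoragePlacementSpec(object):
    """Stand-in for vim.storageDrs.StoragePlacementSpec."""
    def __init__(self):
        self.podSelectionSpec = None
        self.type = None

# Build bottom-up: fill the child spec, then attach it to the parent.
podsel = PodSelectionSpec()
podsel.storagePod = "pod"           # in real code, a vim.StoragePod object
storagespec = StoragePlacementSpec()
storagespec.podSelectionSpec = podsel
storagespec.type = "clone"
```

The real types behave the same way: instantiate empty, then assign child objects one attribute at a time.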

Update – Sept. 2, 2014 –
def get_obj(content, vimtype, name):
    """Return an object by name; if name is None the
    first found object is returned."""
    obj = None
    container = content.viewManager.CreateContainerView(
        content.rootFolder, vimtype, True)
    for c in container.view:
        if name:
            if c.name == name:
                obj = c
                break
        else:
            obj = c
            break

    return obj

Now all you have to do is call that function with the proper “vimtype”.  A few that are useful:

# gets a datastore cluster (storage pod) matching "DS-Cluster1"
ds_cluster = get_obj(content, [vim.StoragePod], "DS-Cluster1")
# gets a host cluster matching "My-Host-Cluster"
host_cluster = get_obj(content, [vim.ClusterComputeResource], "My-Host-Cluster")
# gets a resource pool matching "My-Resource-Pool"
pool = get_obj(content, [vim.ResourcePool], "My-Resource-Pool")
# gets a datastore matching "Datastore1"
datastore = get_obj(content, [vim.Datastore], "Datastore1")

Doing stuff to the objects

Once we have built our objects, we can create the VM.  Or, in our case, clone a template into a VM.  Before I discuss the methods, here are the steps I used to create the VM object.

 template = getTemplate(si, tierdata['template'])
 if template is None:
    print("Unable to find template matching %s" % tierdata['template'])
# make an empty clone spec object, then customize it
 print("Checkpoint - getting customization data")
 cloneSpec = vim.vm.CloneSpec()
 custSpec = content.customizationSpecManager.GetCustomizationSpec(tierdata['customization_spec']) # dev_gold
 custSpec.spec.nicSettingMap[0].adapter.ip.ipAddress = VMip
 custSpec.spec.nicSettingMap[0].adapter.gateway = gateway
 cloneSpec.customization = custSpec.spec
 # make an empty config spec, then customize it
 config = vim.vm.ConfigSpec()
 config.name = machine_name
 cloneSpec.config = config
 location = vim.vm.RelocateSpec()
 cloneSpec.location = location
 # if we cannot find resource pool with name given by user, use default for this vHost
 print("Checkpoint - getting resource pool data")
 resourcePool = getResourcePool(hosts,tierdata['resource_pool']) # "MyStoragePod"
 if resourcePool is None:
    print("Unable to find resource pool named %s" % tierdata['resource_pool']) # MyStoragePod
 cloneSpec.location.pool = resourcePool
 cloneSpec.powerOn = powerOn
 # using name of target cluster (i.e. MyStoragePod), parse thru clusters until we find a match
 print("Checkpoint - getting datastore cluster data")
 obj_view = content.viewManager.CreateContainerView(content.rootFolder,[vim.StoragePod],True)
 ds_cluster_list = obj_view.view
 for ds in ds_cluster_list:
    if ds.name == tierdata['datastore']: # MyStoragePod
        pod = ds
 # set thin provisioning on disks
 vdisk = vim.vm.device.VirtualDisk()
 backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
 vdisk.backing = backing
 vdisk.backing.thinProvisioned = True
 vdisk.key = -100
 dspec = vim.vm.device.VirtualDeviceSpec()
 dspec.device = vdisk
 config.deviceChange = [dspec]
 # make an empty pod selection spec, then customize it 
 podsel = vim.storageDrs.PodSelectionSpec()
 podsel.storagePod = pod
 # make an empty storage placement spec, then customize it 
 storagespec = vim.storageDrs.StoragePlacementSpec()
 storagespec.podSelectionSpec = podsel
 storagespec.vm = template
 storagespec.type = 'clone'
 print("Checkpoint - getting destination folder data")
 storagespec.folder = getDestFolder(datacenter, "Development", "Applications")
 storagespec.cloneSpec = cloneSpec
 storagespec.cloneName = machine_name
 storagespec.configSpec = config
Once that is done, we have built our VM object.  The method we use to create the VM is “RecommendDatastores”, which we call like so:

rec = content.storageResourceManager.RecommendDatastores(storageSpec=storagespec)

This asks vCenter to tell us which datastore within the cluster it prefers to use for this host.  That method returns an object, which I chose to call “rec”.  One of the recommendation objects is a key, which we will use to actually apply the recommendation.  Interesting note: when you create a VM from a template within the GUI, you see a similar process: you select a datastore cluster and the GUI returns recommendations.

To apply the recommendation, we use another set of objects VMware provides, known as tasks.  Using that framework, we can tell the script to wait until the task finishes before continuing.

rec_key = rec.recommendations[0].key
task = content.storageResourceManager.ApplyStorageDrsRecommendation_Task(rec_key)
tasks.wait_for_tasks(si, [task])


(Note that VMware will proceed with the task whether we execute the last step or not.  It is wise and courteous on our part to let it finish each VM creation before proceeding to the next.)
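The wait itself boils down to watching the task’s state until it reaches a terminal value.  The samples’ tasks.wait_for_tasks() uses a property collector rather than a sleep loop, but a simplified stand-in (with a fake task object, since we have no live vCenter here) shows the idea:

```python
import time

class FakeTask(object):
    """Stand-in for a vim.Task that succeeds after a few polls."""
    def __init__(self, polls_until_done=3):
        self._polls_left = polls_until_done
        self.state = "running"
    def refresh(self):
        self._polls_left -= 1
        if self._polls_left <= 0:
            self.state = "success"

def wait_for(task, interval=0.01):
    # Loop until the task reports a terminal state, as wait_for_tasks
    # effectively does (the real code is notified of changes instead
    # of polling).
    while task.state not in ("success", "error"):
        task.refresh()
        time.sleep(interval)
    return task.state
```

With a real vim.Task, the state property is updated by vCenter; the fake’s refresh() just simulates that progression.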

At this point, in the GUI you will see the clone process taking place.  It typically finishes within 2-3 minutes.  If there are any errors, you will see those in the GUI as well.  You can copy the text of them for research.  Additionally, the Python code will throw an exception which usually matches the GUI message fairly well.