VMware vSphere – nova 24.1.2.dev11 Documentation (2023)


OpenStack Compute supports the VMware vSphere family of products and provides access to advanced features such as vMotion, high availability and Dynamic Resource Scheduling (DRS).

This section describes how to configure VMware-based virtual machine images for launch. The VMware driver supports vCenter version 5.5.0 and later.

The VMware vCenter driver enables the nova-compute service to communicate with a VMware vCenter server that manages one or more ESX host clusters. The driver aggregates the ESX hosts in each cluster to present one large hypervisor entity per cluster to the Compute scheduler. Because individual ESX hosts are not exposed to the scheduler, Compute schedules at the granularity of clusters, and vCenter uses DRS to select the actual ESX host within the cluster. When a virtual machine is deployed to a vCenter cluster, it can use all vSphere features.

The following sections describe how to configure the VMware vCenter driver.

Architecture at a high level

The following diagram shows a high-level view of the VMware driver architecture:

VMware driver architecture

As shown in the figure, the OpenStack Compute scheduler sees three hypervisors, each corresponding to a cluster in vCenter. nova-compute contains the VMware driver. You can run with multiple nova-compute services. It is recommended to run one nova-compute service per ESX cluster, which ensures that although Compute schedules at the granularity of a nova-compute service, it is effectively able to schedule at the cluster level. In turn, the VMware driver inside nova-compute interacts with the vCenter APIs to select an appropriate ESX host within the cluster. Internally, vCenter uses DRS for placement.

The VMware vCenter driver also interacts with the Image Service to copy VMDK images from the Image Service's back-end storage. The dotted line in the figure represents VMDK images that are copied to the vSphere datastore by the OpenStack Image service. VMDK images are cached in the datastore, so the copying process is only required when the VMDK image is used for the first time.

After OpenStack starts a VM in a vSphere cluster, the VM becomes visible in vCenter and can access advanced vSphere features. At the same time, the VM is visible in the OpenStack dashboard and you can manage it like any other OpenStack VM. You can perform advanced vSphere operations in vCenter while configuring OpenStack resources like VMs from the OpenStack dashboard.

The figure does not show how networking fits into the architecture. For details, see Networking with VMware vSphere.

Configuration overview

To get started with the VMware vCenter driver, follow these general steps:

  1. Configure vCenter. See Requirements and Limitations.

  2. Configure the VMware vCenter driver in the nova.conf file. See VMware vCenter driver.

  3. Load desired VMDK images into the Image service. See Images with VMware vSphere.

  4. Configure the Networking service (neutron). See Networking with VMware vSphere.

Requirements and Limitations

Use the following list to prepare a vSphere environment running with the VMware vCenter driver:

Copy VMDK files

In vSphere 5.1, copying large image files (for example, 12 GB and greater) from the Image service can take a long time. To improve performance, VMware recommends that you upgrade to VMware vCenter Server 5.1 Update 1 or later. For more information, see the Release Notes.


DRS

Enable DRS and fully automated placement for each cluster that contains multiple ESX hosts.

Shared storage

Only shared storage is supported and datastores must be shared among all hosts in a cluster. It is recommended that non-OpenStack dedicated datastores be removed from clusters that are configured for OpenStack.

Clusters and datastores

Do not use OpenStack clusters and datastores for any other purpose. Otherwise, OpenStack will display incorrect usage information.


Networking

The networking configuration depends on the desired networking model. See Networking with VMware vSphere.

Security groups

Security groups are supported when using the VMware driver with OpenStack Networking and the NSX plugin.


The NSX plugin is the only plugin validated for vSphere.


VNC

The port range 5900-6105 (inclusive) is automatically enabled for VNC connections on every ESX host in all clusters under OpenStack control.
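As a quick sanity check, the size of that port range bounds the number of concurrent VNC consoles a host can serve, one port each:

```shell
# 5900-6105 inclusive: number of VNC ports enabled per ESX host
echo $(( 6105 - 5900 + 1 ))   # prints 206
```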

You must modify the ESXi firewall configuration to allow the VNC ports. Additionally, for the firewall modifications to persist after a reboot, you must create a custom vSphere Installation Bundle (VIB) which is then installed onto the running ESXi host or added to a custom image profile used to install ESXi hosts. For details about how to create a VIB for persisting the firewall configuration modifications, see the VMware Knowledge Base.


The VIB can be downloaded from openstack-vmwareapi-team/Tools.

To use multiple vCenter installations with OpenStack, each vCenter must be assigned to a separate availability zone. This is required because the OpenStack Block Storage VMDK driver does not currently work across multiple vCenter installations.

VMware vCenter service account

The OpenStack integration requires a vCenter service account with the following minimum permissions. Apply the permissions to the Datacenter root object, and select the Propagate to Child Objects option.

vCenter permissions tree

All Privileges
    Datastore
        Allocate space
        Browse datastore
        Low level file operation
        Remove file
    Extension
        Register extension
    Folder
        Create folder
    Host
        Configuration
            Network configuration
            Storage partition configuration
    Network
        Assign network
    Resource
        Assign virtual machine to resource pool
        Migrate powered off virtual machine
        Migrate powered on virtual machine
    Virtual Machine
        Configuration
            Add existing disk
            Add new disk
            Add or remove device
            CPU count
            Change resource
            Disk change tracking
            Host USB device
            Modify device settings
            Raw device
            Remove disk
            Set annotation
            Swapfile placement
        Interaction
            Configure CD media
            Power Off
            Reset
        Inventory
            Create from existing
            Create new
        Provisioning
            Clone virtual machine
            Customize
            Create template from virtual machine
        Snapshot management
            Create snapshot
            Remove snapshot
    Profile-driven storage
        Profile-driven storage view
    Sessions
        Validate session
        View and stop sessions

VMware vCenter driver

Use the VMware vCenter driver (VMwareVCDriver) to connect OpenStack Compute to vCenter. This recommended configuration allows access through vCenter to advanced vSphere features such as vMotion, High Availability, and Dynamic Resource Scheduling (DRS).

VMwareVCDriver configuration options

Add the following VMware-specific configuration options to the nova.conf file:

[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = <vCenter hostname or IP address>
host_username = <vCenter username>
host_password = <vCenter password>
cluster_name = <vCenter cluster name>
datastore_regex = <optional datastore regex>


  • Clusters: The vCenter driver can support only a single cluster. Clusters and datastores used by the vCenter driver should not contain any VMs other than those created by the driver.

  • Datastores: The datastore_regex setting specifies the datastores to use with Compute. For example, datastore_regex="nas.*" selects all the datastores whose names start with "nas". If this line is omitted, Compute uses the first datastore returned by the vSphere API. It is recommended not to use this field and instead remove datastores that are not intended for OpenStack.

  • Reserved host memory: The reserved_host_memory_mb option value is 512 MB by default. However, VMware recommends that you set this option to 0 MB because the vCenter driver reports the effective memory available to the virtual machines.

  • The vCenter driver generates instance names by instance ID. The instance name template is ignored.

  • The minimum supported vCenter version is 5.5.0. Starting with the OpenStack Ocata release, any version lower than 5.5.0 is logged as a warning. In the OpenStack Pike release this is enforced.
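The datastore_regex matching described above can be sketched with grep; the datastore names here are hypothetical, chosen only to illustrate the prefix match:

```shell
# Hypothetical datastore names; datastore_regex="nas.*" keeps only those starting with "nas"
for ds in nas-prod-01 nas-dev-02 local-ssd; do
    if echo "$ds" | grep -Eq '^nas.*'; then
        echo "$ds: eligible for Compute"
    else
        echo "$ds: ignored"
    fi
done
```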

A nova-compute service can control one or more clusters containing multiple ESXi hosts, making nova-compute a critical service from a high availability perspective. Because the host running nova-compute can fail while vCenter and ESX are still running, you must protect the nova-compute service against host failures.


Many nova.conf options are relevant to libvirt but do not apply to this driver.

Images with VMware vSphere

The vCenter driver supports images in the VMDK format. Disks in this format can be obtained from VMware Fusion or from an ESX environment. It is also possible to convert other formats, such as qcow2, to the VMDK format using the qemu-img utility. After a VMDK disk is available, load it into the Image service. Then, you can use it with the VMware vCenter driver. The following sections provide additional details on the supported disks and the commands used for conversion and upload.

Supported image types

Upload images in VMDK format to the OpenStack image service. The following VMDK disk types are supported:

  • VMFS flat disks (includes thin, thick, zeroedthick, and eagerzeroedthick). Note that once a VMFS thin disk is exported from VMFS to a non-VMFS location, like the OpenStack Image service, it becomes a preallocated flat disk. This impacts the transfer time from the Image service to the datastore because the full preallocated flat disk, rather than the thin disk, must be transferred.

  • Monolithic sparse disks. Sparse disks are imported from the Image service into ESXi as thin-provisioned disks. Monolithic sparse disks can be obtained from VMware Fusion or can be created by converting from other virtual disk formats using the qemu-img utility.

  • Stream-optimized disks. Stream-optimized disks are compressed sparse disks. They can be obtained from VMware vCenter/ESXi when you export a VM to an OVF/OVA template.

The table below shows the vmware_disktype property that applies to each of the supported VMDK disk types:

OpenStack Image service disk type settings

vmware_disktype property | VMDK disk type
sparse | Monolithic sparse
thin | VMFS flat, thin provisioned
preallocated (default) | VMFS flat, thick/zeroedthick/eagerzeroedthick
streamOptimized | Compressed sparse

The vmware_disktype property is set when an image is loaded into the Image service. For example, the following command creates a monolithic sparse image by setting vmware_disktype to sparse:

$ openstack image create \
    --disk-format vmdk \
    --container-format bare \
    --property vmware_disktype="sparse" \
    --property vmware_ostype="ubuntu64Guest" \
    ubuntu-sparse < ubuntuLTS-sparse.vmdk


Specifying thin provides no advantage over preallocated with the current version of the driver. Future versions might restore the disk's thin properties after it is downloaded to a vSphere datastore.

The table below shows the vmware_ostype property that applies to each of the supported guest operating systems:


If an image has a vmware_ostype property that does not correspond to a valid VMware guest ID, VM creation will fail, and a warning is logged.

OpenStack Image service OS type settings

vmware_ostype property | Retail name
asianux3_64Guest | Asianux Server 3 (64 bit)
asianux3Guest | Asianux Server 3
asianux4_64Guest | Asianux Server 4 (64 bit)
asianux4Guest | Asianux Server 4
darwin64Guest | Darwin 64 bit
debian4_64Guest | Debian GNU/Linux 4 (64 bit)
debian4Guest | Debian GNU/Linux 4
debian5_64Guest | Debian GNU/Linux 5 (64 bit)
debian5Guest | Debian GNU/Linux 5
freebsd64Guest | FreeBSD x64
netware4Guest | Novell NetWare 4
netware5Guest | Novell NetWare 5.1
netware6Guest | Novell NetWare 6.x
nld9Guest | Novell Linux Desktop 9
oesGuest | Open Enterprise Server
openServer5Guest | SCO OpenServer 5
openServer6Guest | SCO OpenServer 6
opensuse64Guest | openSUSE (64 bit)
other24xLinux64Guest | Linux 2.4x kernel (64 bit) (experimental)
other24xLinuxGuest | Linux 2.4x kernel
other26xLinux64Guest | Linux 2.6x kernel (64 bit) (experimental)
other26xLinuxGuest | Linux 2.6x kernel (experimental)
otherGuest | Other operating system
otherGuest64 | Other operating system (64 bit) (experimental)
otherLinux64Guest | Linux (64 bit) (experimental)
otherLinuxGuest | Other Linux
redhatGuest | Red Hat Linux 2.1
rhel2Guest | Red Hat Enterprise Linux 2
rhel3_64Guest | Red Hat Enterprise Linux 3 (64 bit)
rhel3Guest | Red Hat Enterprise Linux 3
rhel4_64Guest | Red Hat Enterprise Linux 4 (64 bit)
rhel4Guest | Red Hat Enterprise Linux 4
rhel5_64Guest | Red Hat Enterprise Linux 5 (64 bit) (experimental)
rhel5Guest | Red Hat Enterprise Linux 5
rhel6_64Guest | Red Hat Enterprise Linux 6 (64 bit)
rhel6Guest | Red Hat Enterprise Linux 6
sles10_64Guest | SUSE Linux Enterprise Server 10 (64 bit) (experimental)
sles10Guest | SUSE Linux Enterprise Server 10
sles11_64Guest | SUSE Linux Enterprise Server 11 (64 bit)
sles11Guest | SUSE Linux Enterprise Server 11
sles64Guest | SUSE Linux Enterprise Server 9 (64 bit)
slesGuest | SUSE Linux Enterprise Server 9
solaris10_64Guest | Solaris 10 (64 bit) (experimental)
solaris10Guest | Solaris 10 (32 bit) (experimental)
solaris6Guest | Solaris 6
solaris7Guest | Solaris 7
solaris8Guest | Solaris 8
solaris9Guest | Solaris 9
suse64Guest | SUSE Linux (64 bit)
turboLinux64Guest | Turbolinux (64 bit)
ubuntu64Guest | Ubuntu Linux (64 bit)
unixWare7Guest | SCO UnixWare 7
win2000AdvServGuest | Windows 2000 Advanced Server
win2000ProGuest | Windows 2000 Professional
win2000ServGuest | Windows 2000 Server
win31Guest | Windows 3.1
win95Guest | Windows 95
win98Guest | Windows 98
windows7_64Guest | Windows 7 (64 bit)
windows7Guest | Windows 7
windows7Server64Guest | Windows Server 2008 R2 (64 bit)
winLonghorn64Guest | Windows Longhorn (64 bit) (experimental)
winLonghornGuest | Windows Longhorn (experimental)
winMeGuest | Windows Millennium Edition
winNetBusinessGuest | Windows Small Business Server 2003
winNetDatacenter64Guest | Windows Server 2003, Datacenter Edition (64 bit) (experimental)
winNetDatacenterGuest | Windows Server 2003, Datacenter Edition
winNetEnterprise64Guest | Windows Server 2003, Enterprise Edition (64 bit)
winNetEnterpriseGuest | Windows Server 2003, Enterprise Edition
winNetStandard64Guest | Windows Server 2003, Standard Edition (64 bit)
winNetStandardGuest | Windows Server 2003, Standard Edition
winNetWebGuest | Windows Server 2003, Web Edition
winVista64Guest | Windows Vista (64 bit)
winVistaGuest | Windows Vista
winXPHomeGuest | Windows XP Home Edition
winXPPro64Guest | Windows XP Professional Edition (64 bit)
winXPProGuest | Windows XP Professional

Convert and load images

The qemu-img utility can be used to convert disk images in various formats (for example, qcow2) to the VMDK format.

For example, the following command converts a qcow2 Ubuntu Trusty cloud image:

$ qemu-img convert -f qcow2 ~/Downloads/trusty-server-cloudimg-amd64-disk1.img \
    -O vmdk trusty-server-cloudimg-amd64-disk1.vmdk

VMDK disks converted by qemu-img are always monolithic sparse VMDK disks with an IDE adapter type. Using the Ubuntu Trusty image after the qemu-img conversion in the previous example, the command to upload the VMDK disk should be something like:

$ openstack image create \
    --container-format bare --disk-format vmdk \
    --property vmware_disktype="sparse" \
    --property vmware_adaptertype="ide" \
    trusty-cloud < trusty-server-cloudimg-amd64-disk1.vmdk

Note that vmware_disktype is set to sparse and vmware_adaptertype is set to ide in the previous command.

If the image did not come from the qemu-img utility, the vmware_disktype and vmware_adaptertype might be different. To determine the image adapter type from an image file, use the following command and look for the ddb.adapterType= line:

$ head -20 <vmdk filename>
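As a sketch, the following writes a minimal VMDK descriptor (the file path and contents are illustrative, not a real image) and extracts the adapter type from its first 20 lines:

```shell
# Write a sample descriptor, then pull the adapter type out of the first 20 lines
cat > /tmp/sample-descriptor.vmdk <<'EOF'
# Disk DescriptorFile
version=1
createType="monolithicSparse"
ddb.adapterType = "ide"
EOF
head -20 /tmp/sample-descriptor.vmdk | sed -n 's/^ddb\.adapterType *= *"\(.*\)"/\1/p'
```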

Assuming a preallocated disk type and an iSCSI lsiLogic adapter type, the following command uploads the VMDK disk:

$ openstack image create \
    --disk-format vmdk \
    --container-format bare \
    --property vmware_adaptertype="lsiLogic" \
    --property vmware_disktype="preallocated" \
    --property vmware_ostype="ubuntu64Guest" \
    ubuntu-thick-scsi < ubuntuLTS-flat.vmdk

Currently, OS boot VMDK disks with an IDE adapter type cannot be attached to a virtual SCSI controller, and likewise disks with one of the SCSI adapter types (such as busLogic, lsiLogic, lsiLogicsas, paraVirtual) cannot be attached to the IDE controller. Therefore, as the previous examples show, it is important to set the vmware_adaptertype property correctly. The default adapter type is lsiLogic, which is SCSI, so you can omit the vmware_adaptertype property if you are certain that the image adapter type is lsiLogic.
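The adapter-to-controller constraint above can be summarized in a small helper; this is a sketch, and the function name is ours, not part of any OpenStack tooling:

```shell
# Map a vmware_adaptertype value to the controller its boot disk attaches to
controller_for() {
    case "$1" in
        ide) echo "IDE controller" ;;
        busLogic|lsiLogic|lsiLogicsas|paraVirtual) echo "SCSI controller" ;;
        *) echo "unknown adapter type" ;;
    esac
}
controller_for ide        # IDE controller
controller_for lsiLogic   # SCSI controller
```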

Tag VMware images

In a mixed hypervisor environment, OpenStack Compute uses the hypervisor_type tag to match images to the correct hypervisor type. For VMware images, set the hypervisor type to vmware. Other valid hypervisor types include: hyperv, ironic, lxc, and qemu. Note that qemu is used for both QEMU and KVM hypervisor types.

$ openstack image create \
    --disk-format vmdk \
    --container-format bare \
    --property vmware_adaptertype="lsiLogic" \
    --property vmware_disktype="preallocated" \
    --property hypervisor_type="vmware" \
    --property vmware_ostype="ubuntu64Guest" \
    ubuntu-thick-scsi < ubuntuLTS-flat.vmdk

Optimize images

Sparse monolithic disks download significantly faster, but have the overhead of an extra conversion step. When imported into ESX, sparse disks are converted to VMFS flat-thin provisioned disks. The download and conversion steps only affect the first instance launched that uses the sparse disk image. The converted disk image is cached, so subsequent instances using that disk image can simply use the cached version.

To avoid the conversion step (at the cost of longer download times), consider converting sparse disks to thin-provisioned or preallocated disks before loading them into the Image service.

Use one of the following tools to pre-convert sparse disks.

vSphere CLI tools

Sometimes referred to as the remote CLI or rCLI.

Assuming the sparse disk is exposed on a datastore accessible by an ESX host, the following command converts it to a preallocated format:

vmkfstools --server=ip_of_some_ESX_host -i \
    /vmfs/volumes/datastore1/sparse.vmdk \
    /vmfs/volumes/datastore1/converted.vmdk

Note that the vifs tool from the same CLI package can be used to upload the disk to be converted. The vifs tool can also be used to download the converted disk if needed.

vmkfstools directly on the ESX host

If the SSH service is enabled on an ESX host, the sparse disk can be uploaded to the ESX datastore through scp, and the vmkfstools local to the ESX host can be used to perform the conversion. After you log in to the host through ssh, run this command:

vmkfstools -i /vmfs/volumes/datastore1/sparse.vmdk /vmfs/volumes/datastore1/converted.vmdk

vmware-vdiskmanager is a utility that comes bundled with VMware Fusion and VMware Workstation. The following example converts a sparse disk to preallocated format:

'/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager' -r sparse.vmdk -t 4 converted.vmdk

In the previous cases, the converted vmdk is actually a pair of files:

  • The descriptor file converted.vmdk.

  • The actual virtual disk data file converted-flat.vmdk.

The file to be uploaded to the Image service is converted-flat.vmdk.

Image handling

The ESX hypervisor needs a copy of the VMDK file to boot a virtual machine. Therefore, the vCenter OpenStack Compute driver needs to download the VMDK over HTTP from the image service to a datastore visible to the hypervisor. To streamline this process, a VMDK file is cached in the datastore the first time it is used. A cached image is stored in a folder named by the image ID. Subsequent virtual machines that require the VMDK use the cached version and do not need to copy the file again from the image service.

Even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor directory on the shared datastore. To avoid this copy, boot the image in linked_clone mode. To learn how to enable this mode, see vmware.use_linked_clone.


You can also use the img_linked_clone property (or the legacy property vmware_linked_clone) in the Image service to override the linked_clone mode per image.

When you boot a virtual machine from an ISO image with a VMDK disk, the disk is created and attached to the virtual machine as a blank disk. In this case, the img_linked_clone property for the image is simply ignored.

If multiple compute nodes are running on the same host, or have a shared file system, you can enable them to use the same cache folder on the back-end datastore. To configure this action, set the cache_prefix option in the nova.conf file. Its value stands for the name prefix of the folder where cached images are stored.


This can only take effect when compute nodes are running on the same host or have a shared file system.
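For illustration only, assuming a layout where the cache folder name is the cache_prefix followed by _base and each image is stored in a subfolder named by its image ID (the exact layout may differ between releases; the prefix and image ID below are made up), a cached VMDK path could be composed like this:

```shell
# Assumed cache layout, for illustration; the prefix and image ID are hypothetical
cache_prefix="computehost1"
image_id="70a599e0-31e7-49b7-b260-868f441e862b"
echo "[datastore1] ${cache_prefix}_base/${image_id}/${image_id}.vmdk"
```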

You can automatically purge unused images after a specified period of time. To configure this action, set these options in the image_cache section of the nova.conf file:

  • image_cache.remove_unused_base_images

  • image_cache.remove_unused_original_minimum_age_seconds
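Putting those two options together, a nova.conf fragment enabling periodic cleanup might look like the following; the values are example choices, not recommendations:

```ini
[image_cache]
# Example values only: purge cached base images not used for one day
remove_unused_base_images = True
remove_unused_original_minimum_age_seconds = 86400
```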

Networking with VMware vSphere

The VMware driver supports networking with the Networking Service (Neutron). Depending on your installation, complete these configuration steps before deploying VMs:

  1. Before deploying VMs, create a port group with the same name as the vmware.integration_bridge value in nova.conf (the default is br-int). All VM NICs are attached to this port group for management by the OpenStack Networking plug-in.

Volumes with VMware vSphere

The VMware driver supports attaching volumes from the Block Storage service. The VMware VMDK driver for OpenStack Block Storage is recommended and should be used for managing volumes based on vSphere datastores. For more information about the VMware VMDK driver, see the Cinder VMDK driver documentation (TODO: this has not yet been imported and published). Also, an iSCSI volume driver provides limited support and can be used only for attachments.


Troubleshooting

Operators can troubleshoot VMware-specific failures by correlating OpenStack logs with vCenter logs. Every RPC call made by an OpenStack driver has an opID which can be traced in the vCenter logs. For example, consider the following excerpt from a nova-compute log:

Aug 15 07:31:09 localhost nova-compute[16683]: DEBUG oslo_vmware.service [-] Calling Folder.CreateVM_Task with opID=oslo.vmware-debb6064-690e-45ac-b0ae-1b94a9638d1f {{(pid=16683) request_handler /opt/stack/oslo.vmware/oslo_vmware/service.py:355}}

In this case the opID is oslo.vmware-debb6064-690e-45ac-b0ae-1b94a9638d1f, and we can grep the vCenter log (usually /var/log/vmware/vpxd/vpxd.log) for it to determine whether anything went wrong with the CreateVM operation.
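The opID can be pulled out of the nova-compute excerpt above and used as a grep pattern against vpxd.log; the final grep is shown as a comment because it must run on the vCenter server:

```shell
# Extract the opID from the nova-compute excerpt quoted above
line='DEBUG oslo_vmware.service [-] Calling Folder.CreateVM_Task with opID=oslo.vmware-debb6064-690e-45ac-b0ae-1b94a9638d1f'
opid=$(echo "$line" | sed -n 's/.*opID=\([^ ]*\).*/\1/p')
echo "$opid"
# On the vCenter server:
#   grep "opID=$opid" /var/log/vmware/vpxd/vpxd.log
```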

