Introduction¶
OpenStack Compute supports the VMware vSphere family of products and provides access to advanced features such as vMotion, high availability and Dynamic Resource Scheduling (DRS).
This section describes how to configure VMware-based virtual machine images to boot. The VMware driver supports vCenter version 5.5.0 and later.
The VMware vCenter driver enables the nova-compute service to communicate with a VMware vCenter server that manages one or more ESX host clusters. The driver aggregates the ESX hosts in each cluster to present one large hypervisor entity per cluster to the Compute scheduler. Because individual ESX hosts are not exposed to the scheduler, Compute schedules at the granularity of clusters and vCenter uses DRS to select the actual ESX host within the cluster. When a virtual machine makes its way into a vCenter cluster, it can use all vSphere features.
The following sections describe how to configure the VMware vCenter driver.
High-level architecture¶
The following diagram shows a high-level view of the VMware driver architecture:
VMware driver architecture
As the figure shows, the OpenStack Compute scheduler sees three hypervisors, each corresponding to a cluster in vCenter. nova-compute contains the VMware driver, and you can run multiple nova-compute services. It is recommended to run one nova-compute service per ESX cluster, ensuring that while Compute schedules at the granularity of the nova-compute service, each service is in effect scheduling at the granularity of a cluster. The VMware driver inside nova-compute interacts with the vCenter APIs to select an appropriate ESX host within the cluster. Internally, vCenter uses DRS for placement.
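As a quick check, each cluster managed by the driver should appear as a single hypervisor entry in Compute. A hypothetical verification session (hostnames and IDs will differ in your deployment):

$ openstack hypervisor list

Each row in the output corresponds to one vCenter cluster rather than to an individual ESX host.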
The VMware vCenter driver also interacts with the Image Service to copy VMDK images from the Image Service's back-end storage. The dotted line in the figure represents VMDK images that are copied to the vSphere datastore by the OpenStack Image service. VMDK images are cached in the datastore, so the copying process is only required when the VMDK image is used for the first time.
After OpenStack starts a VM in a vSphere cluster, the VM becomes visible in vCenter and can access advanced vSphere features. At the same time, the VM is visible in the OpenStack dashboard and you can manage it like any other OpenStack VM. You can perform advanced vSphere operations in vCenter while configuring OpenStack resources like VMs from the OpenStack dashboard.
The figure does not show how networking fits into the architecture. See Networking with VMware vSphere for details.
Configuration overview¶
To get started with the VMware vCenter driver, follow these general steps:
1. Configure vCenter. See Requirements and limitations.
2. Configure the VMware vCenter driver in the nova.conf file. See VMware vCenter driver.
3. Load the desired VMDK images into the Image service. See Images with VMware vSphere.
4. Configure the Networking service (neutron). See Networking with VMware vSphere.
Requirements and Limitations¶
Use the following list to prepare a vSphere environment running with the VMware vCenter driver:
- Copy VMDK files
In vSphere 5.1, copying large image files (for example, 12 GB and greater) from the Image service can take a long time. To improve performance, VMware recommends that you upgrade to VMware vCenter Server 5.1 Update 1 or later. For more information, see the Release Notes.
- DRS
Enable DRS and fully automated placement for each cluster containing multiple ESX hosts.
- Shared storage
Only shared storage is supported and datastores must be shared among all hosts in a cluster. It is recommended to remove datastores not intended for OpenStack from clusters being configured for OpenStack.
- Clusters and datastores
Do not use OpenStack clusters and datastores for any other purpose. Otherwise, OpenStack will display incorrect usage information.
- Networking
The networking configuration depends on the desired networking model. See Networking with VMware vSphere.
- Security groups
Security groups are supported when using the VMware driver with OpenStack Networking and the NSX plugin.
note
The NSX plugin is the only plugin validated for vSphere.
- VNC
The port range 5900-6105 (inclusive) is automatically enabled for VNC connections on every ESX host in all clusters under OpenStack control.
note
In addition to the standard VNC port numbers (5900 to 6000) specified above, the following ports are also used: 6101, 6102, and 6105.
You must modify the ESXi firewall configuration to allow the VNC ports. Additionally, for the firewall modifications to persist after a reboot, you must create a custom vSphere Installation Bundle (VIB) which is then installed onto the running ESXi host or added to a custom image profile used to install ESXi hosts. For details about how to create a VIB for persisting the firewall configuration modifications, see the VMware Knowledge Base.
note
The VIB can be downloaded from openstack-vmwareapi-team/Tools.
To use multiple vCenter installations with OpenStack, each vCenter must be assigned to a separate availability zone. This is required as the OpenStack Block Storage VMDK driver does not currently work across multiple vCenter installations.
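For example, you can place the nova-compute service that manages each vCenter into its own availability zone using host aggregates. A sketch with placeholder aggregate, zone, and host names:

$ openstack aggregate create --zone az-vcenter1 agg-vcenter1
$ openstack aggregate add host agg-vcenter1 compute-vcenter1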
VMware vCenter service account¶
The OpenStack integration requires a vCenter service account with the following minimum permissions. Apply the permissions to the Datacenter root object, and select the Propagate to Child Objects option.
- All Privileges
  - Datastore
    - Allocate space
    - Browse datastore
    - Low level file operation
    - Remove file
  - Extension
    - Register extension
  - Folder
    - Create folder
  - Host
    - Configuration
      - Maintenance
      - Network configuration
      - Storage partition configuration
  - Network
    - Assign network
  - Resource
    - Assign virtual machine to resource pool
    - Migrate powered off virtual machine
    - Migrate powered on virtual machine
  - Virtual Machine
    - Configuration
      - Add existing disk
      - Add new disk
      - Add or remove device
      - Advanced
      - CPU count
      - Change resource
      - Disk change tracking
      - Host USB device
      - Memory
      - Modify device settings
      - Raw device
      - Remove disk
      - Rename
      - Set annotation
      - Swapfile placement
    - Interaction
      - Configure CD media
      - Power Off
      - Power On
      - Reset
      - Suspend
    - Inventory
      - Create from existing
      - Create new
      - Move
      - Remove
      - Unregister
    - Provisioning
      - Clone virtual machine
      - Customize
      - Create template from virtual machine
    - Snapshot management
      - Create snapshot
      - Remove snapshot
  - Profile-driven storage
    - Profile-driven storage view
  - Sessions
    - Validate session
    - View and stop sessions
  - vApp
    - Export
    - Import
VMware vCenter driver¶
Use the VMware vCenter driver (VMwareVCDriver) to connect OpenStack Compute to vCenter. This recommended configuration allows access through vCenter to advanced vSphere features such as vMotion, High Availability, and Dynamic Resource Scheduling (DRS).
VMwareVCDriver configuration options¶
Add the following VMware-specific configuration options to the nova.conf file:
[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = <vCenter hostname or IP address>
host_username = <vCenter username>
host_password = <vCenter password>
cluster_name = <vCenter cluster name>
datastore_regex = <optional datastore regex>
note
Clusters: The vCenter driver can support only a single cluster. Clusters and datastores used by the vCenter driver should not contain any VMs other than those created by the driver.
Datastores: The datastore_regex setting specifies the datastores to use with Compute. For example, datastore_regex="nas.*" selects all the datastores that have a name starting with "nas". If this line is omitted, Compute uses the first datastore returned by the vSphere API. It is recommended not to use this field and instead remove datastores that are not intended for OpenStack.

Reserved host memory: The reserved_host_memory_mb option value is 512 MB by default. However, VMware recommends that you set this option to 0 MB because the vCenter driver reports the effective memory available to the virtual machines.

The vCenter driver generates instance names by instance ID. The instance name template is ignored.
The minimum supported vCenter version is 5.5.0. Starting in the OpenStack Ocata release, any version lower than 5.5.0 is logged as a warning. In the OpenStack Pike release this restriction is enforced.
A nova-compute service can control one or more clusters containing multiple ESXi hosts, making nova-compute a critical service from a high availability perspective. Because the host running nova-compute can fail while vCenter and ESX hosts continue to run, you must protect the nova-compute service against host failures.
note
Many nova.conf options are relevant to libvirt but do not apply to this driver.
Images with VMware vSphere¶
The vCenter driver supports images in the VMDK format. Disks in this format can be obtained from VMware Fusion or from an ESX environment. It is also possible to convert other formats, such as qcow2, to the VMDK format using the qemu-img utility. After a VMDK disk is available, load it into the Image service. Then, you can use it with the VMware vCenter driver. The following sections provide additional details on the supported disks and the commands used for conversion and upload.
Supported image types¶
Upload images in VMDK format to the OpenStack image service. The following VMDK disk types are supported:
VMFS flat disks
(includes thin, thick, zeroedthick, and eagerzeroedthick). Note that once a VMFS thin disk is exported from VMFS to a non-VMFS location, like the OpenStack Image service, it becomes a preallocated flat disk. This impacts the transfer time from the Image service to the datastore because the full preallocated flat disk, rather than the thin disk, must be transferred.

Monolithic sparse disks
. Sparse disks get imported from the Image service into ESXi as thin provisioned disks. Monolithic sparse disks can be obtained from VMware Fusion or can be created by converting from other virtual disk formats using the qemu-img utility.

Stream-optimized disks
. Stream-optimized disks are compressed sparse disks. They can be obtained from VMware vCenter/ESXi when you export a VM to an OVF/OVA template.
The table below shows the vmware_disktype property that applies to each of the supported VMDK disk types:
vmware_disktype property | VMDK disk type |
---|---|
sparse | Monolithic sparse |
thin | VMFS flat, thin provisioned |
preallocated (default) | VMFS flat, thick/zeroedthick/eagerzeroedthick |
streamOptimized | Compressed sparse |
The vmware_disktype property is set when an image is loaded into the Image service. For example, the following command creates a monolithic sparse image by setting vmware_disktype to sparse:
$ openstack image create \
  --disk-format vmdk \
  --container-format bare \
  --property vmware_disktype="sparse" \
  --property vmware_ostype="ubuntu64Guest" \
  ubuntu-sparse < ubuntuLTS-sparse.vmdk
note
Specifying thin provides no advantage over preallocated with the current version of the driver. Future versions might restore the disk's thin properties after it is downloaded to a vSphere datastore.
The table below shows the vmware_ostype property that applies to each of the supported guest operating systems:
note
If an image has a vmware_ostype property that does not correspond to a valid VMware guest ID, VM creation will fail, and a warning will be logged.
vmware_ostype property | Retail name |
---|---|
asianux3_64Guest | Asianux Server 3 (64 bit) |
asianux3Guest | Asianux Server 3 |
asianux4_64Guest | Asianux Server 4 (64 bit) |
asianux4Guest | Asianux Server 4 |
darwin64Guest | Darwin 64 bit |
darwinGuest | Darwin |
debian4_64Guest | Debian GNU/Linux 4 (64 bit) |
debian4Guest | Debian GNU/Linux 4 |
debian5_64Guest | Debian GNU/Linux 5 (64 bit) |
debian5Guest | Debian GNU/Linux 5 |
dosGuest | MS-DOS |
freebsd64Guest | FreeBSD x64 |
freebsdGuest | FreeBSD |
mandrivaGuest | Mandriva Linux |
netware4Guest | Novell NetWare 4 |
netware5Guest | Novell NetWare 5.1 |
netware6Guest | Novell NetWare 6.x |
nld9Guest | Novell Linux Desktop 9 |
oesGuest | Open Enterprise Server |
openServer5Guest | SCO OpenServer 5 |
openServer6Guest | SCO OpenServer 6 |
opensuse64Guest | openSUSE (64 bit) |
opensuseGuest | openSUSE |
os2Guest | OS/2 |
other24xLinux64Guest | Linux 2.4x kernel (64 bit) (experimental) |
other24xLinuxGuest | Linux 2.4x kernel |
other26xLinux64Guest | Linux 2.6x kernel (64 bit) (experimental) |
other26xLinuxGuest | Linux 2.6x kernel (experimental) |
otherGuest | Other operating system |
otherGuest64 | Other operating system (64 bit) (experimental) |
otherLinux64Guest | Linux (64 bit) (experimental) |
otherLinuxGuest | Other Linux |
redhatGuest | Red Hat Linux 2.1 |
rhel2Guest | Red Hat Enterprise Linux 2 |
rhel3_64Guest | Red Hat Enterprise Linux 3 (64 bit) |
rhel3Guest | Red Hat Enterprise Linux 3 |
rhel4_64Guest | Red Hat Enterprise Linux 4 (64 bit) |
rhel4Guest | Red Hat Enterprise Linux 4 |
rhel5_64Guest | Red Hat Enterprise Linux 5 (64 bit) (experimental) |
rhel5Guest | Red Hat Enterprise Linux 5 |
rhel6_64Guest | Red Hat Enterprise Linux 6 (64 bit) |
rhel6Guest | Red Hat Enterprise Linux 6 |
sjdsGuest | Sun Java Desktop System |
sles10_64Guest | SUSE Linux Enterprise Server 10 (64 bit) (experimental) |
sles10Guest | SUSE Linux Enterprise Server 10 |
sles11_64Guest | SUSE Linux Enterprise Server 11 (64 bit) |
sles11Guest | SUSE Linux Enterprise Server 11 |
sles64Guest | SUSE Linux Enterprise Server 9 (64 bit) |
slesGuest | SUSE Linux Enterprise Server 9 |
solaris10_64Guest | Solaris 10 (64 bit) (experimental) |
solaris10Guest | Solaris 10 (32 bit) (experimental) |
solaris6Guest | Solaris 6 |
solaris7Guest | Solaris 7 |
solaris8Guest | Solaris 8 |
solaris9Guest | Solaris 9 |
suse64Guest | SUSE Linux (64 bit) |
suseGuest | SUSE Linux |
turboLinux64Guest | Turbolinux (64 bit) |
turboLinuxGuest | Turbolinux |
ubuntu64Guest | Ubuntu Linux (64 bit) |
ubuntuGuest | Ubuntu Linux |
unixWare7Guest | SCO UnixWare 7 |
win2000AdvServGuest | Windows 2000 Advanced Server |
win2000ProGuest | Windows 2000 Professional |
win2000ServGuest | Windows 2000 Server |
win31Guest | Windows 3.1 |
win95Guest | Windows 95 |
win98Guest | Windows 98 |
windows7_64Guest | Windows 7 (64 bit) |
windows7Guest | Windows 7 |
windows7Server64Guest | Windows Server 2008 R2 (64 bit) |
winLonghorn64Guest | Windows Longhorn (64 bit) (experimental) |
winLonghornGuest | Windows Longhorn (experimental) |
winMeGuest | Windows Millennium Edition |
winNetBusinessGuest | Windows Small Business Server 2003 |
winNetDatacenter64Guest | Windows Server 2003, Datacenter Edition (64 bit) (experimental) |
winNetDatacenterGuest | Windows Server 2003, Datacenter Edition |
winNetEnterprise64Guest | Windows Server 2003, Enterprise Edition (64 bit) |
winNetEnterpriseGuest | Windows Server 2003, Enterprise Edition |
winNetStandard64Guest | Windows Server 2003, Standard Edition (64 bit) |
winNetStandardGuest | Windows Server 2003, Standard Edition |
winNetWebGuest | Windows Server 2003, Web Edition |
winNTGuest | Windows NT 4 |
winVista64Guest | Windows Vista (64 bit) |
winVistaGuest | Windows Vista |
winXPHomeGuest | Windows XP Home Edition |
winXPPro64Guest | Windows XP Professional Edition (64 bit) |
winXPProGuest | Windows XP Professional |
Convert and load images¶
Using the qemu-img utility, disk images in several formats (such as qcow2) can be converted to the VMDK format.
For example, the following command can be used to convert a qcow2 Ubuntu Trusty cloud image:
$ qemu-img convert -f qcow2 ~/Downloads/trusty-server-cloudimg-amd64-disk1.img \
  -O vmdk trusty-server-cloudimg-amd64-disk1.vmdk
VMDK disks converted through qemu-img are always monolithic sparse VMDK disks with an IDE adapter type. Using the previous example of the Ubuntu Trusty image after the qemu-img conversion, the command to upload the VMDK disk should be something like:
$ openstack image create \
  --container-format bare --disk-format vmdk \
  --property vmware_disktype="sparse" \
  --property vmware_adaptertype="ide" \
  trusty-cloud < trusty-server-cloudimg-amd64-disk1.vmdk
Note that the vmware_disktype is set to sparse and the vmware_adaptertype is set to ide in the previous command.
If the image did not come from the qemu-img utility, the vmware_disktype and vmware_adaptertype might be different. To determine the image adapter type from an image file, use the following command and look for the ddb.adapterType= line:
$ head -20 <vmdk filename>
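In the descriptor output, the adapter type line looks something like the following illustrative excerpt (the value depends on how the disk was created):

ddb.adapterType = "lsiLogic"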
Assuming a preallocated disk type and an lsiLogic adapter type, the following command uploads the VMDK disk:
$ openstack image create \
  --disk-format vmdk \
  --container-format bare \
  --property vmware_adaptertype="lsiLogic" \
  --property vmware_disktype="preallocated" \
  --property vmware_ostype="ubuntu64Guest" \
  ubuntu-thick-scsi < ubuntuLTS-flat.vmdk
Currently, OS boot VMDK disks with an IDE adapter type cannot be attached to a virtual SCSI controller, and likewise disks with any of the SCSI adapter types (such as busLogic, lsiLogic, lsiLogicsas, paraVirtual) cannot be attached to the IDE controller. Therefore, as the previous examples show, it is important to set the vmware_adaptertype property correctly. The default adapter type is lsiLogic, which is SCSI, so you can omit the vmware_adaptertype property if you are certain that the image adapter type is lsiLogic.
Tag VMware images¶
In a mixed hypervisor environment, OpenStack Compute uses the hypervisor_type tag to match images to the correct hypervisor type. For VMware images, set the hypervisor type to vmware. Other valid hypervisor types include: hyperv, ironic, lxc, and qemu. Note that qemu is used for both QEMU and KVM hypervisor types.
$ openstack image create \
  --disk-format vmdk \
  --container-format bare \
  --property vmware_adaptertype="lsiLogic" \
  --property vmware_disktype="preallocated" \
  --property hypervisor_type="vmware" \
  --property vmware_ostype="ubuntu64Guest" \
  ubuntu-thick-scsi < ubuntuLTS-flat.vmdk
Optimize images¶
Sparse monolithic disks download significantly faster, but have the overhead of an extra conversion step. When imported into ESX, sparse disks are converted to VMFS flat-thin provisioned disks. The download and conversion steps only affect the first instance launched that uses the sparse disk image. The converted disk image is cached, so subsequent instances using that disk image can simply use the cached version.
To avoid the conversion step (at the cost of longer download times), consider converting sparse disks to thin provisioned or preallocated disks before loading them into the Image service.
Use one of the following tools to pre-convert sparse disks.
- vSphere CLI tools
Sometimes referred to as the remote CLI or rCLI.
Assuming the sparse disk is exposed on a datastore accessible by an ESX host, the following command converts it to a preallocated format:
vmkfstools --server=ip_of_some_ESX_host -i \
  /vmfs/volumes/datastore1/sparse.vmdk \
  /vmfs/volumes/datastore1/converted.vmdk
Note that the vifs tool from the same CLI package can be used to upload the disk to be converted. The vifs tool can also be used to download the converted disk if needed.
- vmkfstools directly on the ESX host
If the SSH service is enabled on an ESX host, the sparse disk can be uploaded to the ESX datastore through scp, and the vmkfstools utility local to the ESX host can be used to perform the conversion. After you log in to the host through ssh, run this command:
vmkfstools -i /vmfs/volumes/datastore1/sparse.vmdk /vmfs/volumes/datastore1/converted.vmdk
- vmware-vdiskmanager
vmware-vdiskmanager is a utility that ships with VMware Fusion and VMware Workstation. The following example converts a sparse disk to preallocated format:

'/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager' -r sparse.vmdk -t 4 converted.vmdk
In the previous cases, the converted vmdk is actually a pair of files:

- The descriptor file converted.vmdk.
- The actual virtual disk data file converted-flat.vmdk.

The file to be uploaded to the Image service is converted-flat.vmdk.
Image handling¶
The ESX hypervisor needs a copy of the VMDK file to boot a virtual machine. Therefore, the vCenter OpenStack Compute driver needs to download the VMDK over HTTP from the image service to a datastore visible to the hypervisor. To streamline this process, a VMDK file is cached in the datastore the first time it is used. A cached image is stored in a folder named by the image ID. Subsequent virtual machines that require the VMDK use the cached version and do not need to copy the file again from the image service.
Even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor directory on the shared datastore. To avoid this copy, boot the image in linked_clone mode. To enable this mode, see the vmware.use_linked_clone option.
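A minimal nova.conf sketch enabling this mode (the option lives in the [vmware] group, alongside the driver options shown earlier):

[vmware]
use_linked_clone = true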
note
You can also use the img_linked_clone property (or the legacy property vmware_linked_clone) in the Image service to override the linked_clone mode on a per-image basis, as shown in the example after this note.
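For instance, the following hypothetical command disables linked clones for one image (the image name is a placeholder):

$ openstack image set --property img_linked_clone=false ubuntu-sparse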
If spawning a virtual machine image from an ISO with a VMDK disk, the image is created and attached to the virtual machine as a blank disk. In that case, the img_linked_clone property for the image is simply ignored.
If multiple compute nodes are running on the same host, or have a shared file system, you can enable them to use the same cache folder on the back-end datastore. To configure this action, set the cache_prefix option in the nova.conf file. Its value stands for the name prefix of the folder where cached images are stored, as in the sketch below.
note
This can only take effect when compute nodes are running on the same host or have a shared file system.
You can automatically remove unused images after a specified period of time. To configure this action, set these options in the image_cache section in the nova.conf file (an example follows the list):
image_cache.remove_unused_base_images
image_cache.remove_unused_original_minimum_age_seconds
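For example, the following sketch removes cached images that have been unused for one day (the age value is arbitrary):

[image_cache]
remove_unused_base_images = true
remove_unused_original_minimum_age_seconds = 86400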
Networking with VMware vSphere¶
The VMware driver supports networking with the Networking service (neutron). Depending on your installation, complete these configuration steps before you deploy VMs:
Before deploying VMs, create a port group with the same name as the vmware.integration_bridge value in nova.conf (default is br-int). All VM NICs are attached to this port group for management by the OpenStack Networking plug-in. A matching configuration sketch is shown below.
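A nova.conf entry showing the default value:

[vmware]
integration_bridge = br-int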
Volumes with VMware vSphere¶
The VMware driver supports attaching volumes from the Block Storage service. The VMware VMDK driver for OpenStack Block Storage is recommended and should be used for managing volumes based on vSphere datastores. For more information about the VMware VMDK driver, see the Cinder documentation on the VMDK driver. (TODO: this has not yet been imported and published.) Also, the vSphere iSCSI volume driver provides limited support and can be used only for attachments.
Troubleshooting¶
Operators can troubleshoot VMware-specific failures by correlating OpenStack logs with vCenter logs. Every RPC call made by an OpenStack driver has an opID which can be traced in the vCenter logs. For example, consider the following excerpt from a nova-compute log:
Aug 15 07:31:09 localhost nova-compute[16683]: DEBUG oslo_vmware.service [-] Calling Folder.CreateVM_Task with opID=oslo.vmware-debb6064-690e-45ac-b0ae-1b94a9638d1f {{(pid=16683) request_handler /opt/stack/oslo.vmware/oslo_vmware/service.py:355}}
In this case, the opID is oslo.vmware-debb6064-690e-45ac-b0ae-1b94a9638d1f and we can grep the vCenter log (usually /var/log/vmware/vpxd/vpxd.log) for it to determine whether anything went wrong with the CreateVM operation.
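For example, assuming the default log location, the opID from the excerpt above can be traced with a simple grep:

$ grep debb6064-690e-45ac-b0ae-1b94a9638d1f /var/log/vmware/vpxd/vpxd.log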