"Fossies" - the Fresh Open Source Software Archive  

Source code changes of the file "doc/source/configuration/block-storage/drivers/dell-emc-powermax-driver.rst" between
cinder-17.0.0.tar.gz and cinder-17.0.1.tar.gz

About: OpenStack Cinder (Core Service: Block Storage) provides persistent block storage to running instances. Its pluggable driver architecture facilitates the creation and management of block storage devices.
The "Victoria" series (latest release).

dell-emc-powermax-driver.rst (cinder-17.0.0) : dell-emc-powermax-driver.rst (cinder-17.0.1)

skipping to change at line 30
8000, VMAX All Flash 250F, 450F, 850F and 950F and VMAX Hybrid. Please note
there will be extended support of the VMAX Hybrid series until further
notice.

System requirements and licensing
=================================
The Dell EMC PowerMax Cinder driver supports the VMAX-3 hybrid series, VMAX
All-Flash series and the PowerMax arrays.

The array operating system software, Solutions Enabler 9.2.x series, and
Unisphere for PowerMax 9.2.x series are required to run the Dell EMC PowerMax
Cinder driver.

Download Solutions Enabler and Unisphere from Dell EMC's support web site
(login is required). See the ``Dell EMC Solutions Enabler 9.2.x Installation
and Configuration Guide`` and ``Dell EMC Unisphere for PowerMax Installation
Guide`` at the `Dell EMC Support`_ site.
.. note::

   While it is not explicitly documented which OS versions should be
   installed on a particular array, it is recommended to install the latest
   PowerMax OS supported by the Unisphere for PowerMax version that the
   PowerMax driver supports for a given OpenStack release.
+-----------+------------------------+-------------+
| OpenStack | Unisphere for PowerMax | PowerMax OS |
+===========+========================+=============+
| Victoria  | 9.2.x                  | 5978.669    |
+-----------+------------------------+-------------+
| Ussuri    | 9.1.x                  | 5978.479    |
+-----------+------------------------+-------------+
| Train     | 9.1.x                  | 5978.444    |
+-----------+------------------------+-------------+
| Stein     | 9.0.x                  | 5978.221    |
+-----------+------------------------+-------------+
However, a Hybrid array can only run HyperMax OS 5977, and is still
supported until further notice. Some functionality will not be available
in older versions of the OS. If in any doubt, please contact your customer
skipping to change at line 191 / 193
- Encrypted Volume support
- Extending attached volume
- Replicated volume retype support
- Retyping attached (in-use) volume
- Unisphere High Availability (HA) support
- Online device expansion of a metro device
- Rapid TDEV deallocation of deletes
- Multiple replication devices
- PowerMax array and storage group tagging
- Short host name and port group templates
- Snap id support
- Seamless Live Migration from SMI-S support
- Port group & port performance load balancing
.. note::

   In certain cases, when creating a volume from a source snapshot or
   source volume, subsequent operations using the volumes may fail due to
   a missing snap_name exception. A manual refresh on the connected
   Unisphere instance, or waiting until another operation automatically
   refreshes the connected Unisphere instance, will alleviate this issue.
PowerMax naming conventions
===========================

.. note::

   ``shortHostName`` will be altered using the following formula, if its
   length exceeds 16 characters. This is because the storage group and
   masking view names cannot exceed 64 characters:
skipping to change at line 317 / 330
----------------

#. Download Solutions Enabler from `Dell EMC Support`_ and install it.

   You can install Solutions Enabler on a non-OpenStack host. Supported
   platforms include different flavors of Windows, Red Hat, and SUSE Linux.
   Solutions Enabler can be installed on a physical server, or as a Virtual
   Appliance (a VMware ESX server VM). Additionally, starting with HYPERMAX
   OS Q3 2015, you can manage VMAX3 arrays using the Embedded Management
   (eManagement) container application. See the ``Dell EMC Solutions Enabler
   9.2.x Installation and Configuration Guide`` on `Dell EMC Support`_ for
   more details.

   .. note::

      You must discover storage arrays before you can use the PowerMax
      drivers. Follow the instructions in the ``Dell EMC Solutions Enabler
      9.2.x Installation and Configuration Guide`` on `Dell EMC Support`_
      for more details.

#. Download Unisphere from `Dell EMC Support`_ and install it.

   Unisphere can be installed in local, remote, or embedded configurations
   - i.e., on the same server running Solutions Enabler; on a server
   connected to the Solutions Enabler server; or using the eManagement
   container application (containing Solutions Enabler and Unisphere for
   PowerMax). See the ``Dell EMC Solutions Enabler 9.2.x Installation and
   Configuration Guide`` at `Dell EMC Support`_.
2. FC zoning with PowerMax
--------------------------

Zone Manager is required when there is a fabric between the host and array.
This is necessary for larger configurations where pre-zoning would be too
complex and open-zoning would raise security concerns.

3. iSCSI with PowerMax
skipping to change at line 356 / 369
on all Compute nodes.

.. note::

   You can only ping the PowerMax iSCSI target ports when there is a valid
   masking view. An attach operation creates this masking view.

4. Configure block storage in cinder.conf
-----------------------------------------
.. note::

   The VMAX driver was rebranded to PowerMax in Stein, so some of the
   driver-specific tags have also changed. Legacy tags like ``vmax_srp``,
   ``vmax_array``, ``vmax_service_level`` and ``vmax_port_group``, as well
   as the old driver location, will continue to work until the 'V' release.
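For illustration, a rebranded backend stanza might look as follows. The array
ID, backend name and port group name below are placeholders, and the
rebranded option names shown are inferred from the ``vmax_`` to ``powermax_``
renaming pattern above; consult the configuration table that follows for the
authoritative option list.

.. code-block:: ini

   [powermax_backend]
   volume_driver = cinder.volume.drivers.dell_emc.powermax.fc.PowerMaxFCDriver
   volume_backend_name = POWERMAX_FC
   # legacy: vmax_array
   powermax_array = 000123456789
   # legacy: vmax_srp
   powermax_srp = SRP_1
   # legacy: vmax_service_level
   powermax_service_level = Diamond
   # legacy: vmax_port_group
   powermax_port_groups = [OS-FC-PG]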
.. config-table::
   :config-target: PowerMax

   cinder.volume.drivers.dell_emc.powermax.common

.. note::

   ``san_api_port`` is ``8443`` by default but can be changed if
   necessary. For the purposes of this documentation the default is
   assumed, so the tag will not appear in any of the ``cinder.conf``
   extracts below.
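Should a non-default port be required, it is a single override in the
backend stanza; the stanza name and port number here are only examples:

.. code-block:: ini

   [powermax_backend]
   san_api_port = 8445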
.. note::

   PowerMax ``PortGroups`` must be pre-configured to expose volumes managed
   by the array. Port groups can be supplied in ``cinder.conf``, or
   can be specified as an extra spec ``storagetype:portgroupname`` on a
   volume type. If a port group is set on a volume type as an extra
   specification, it takes precedence over any port groups set in
   ``cinder.conf``. For more information on port and port group selection
   please see the section ``port group & port load balancing``.
.. note::

   PowerMax ``SRP`` cannot be changed once configured and in use. SRP
   renaming on the PowerMax array is not supported.
.. note::

   Service Level can be added to ``cinder.conf`` when the backend is the
   default case and there is no associated volume type. This is not a
   recommended configuration as it is too restrictive. Workload is ``NONE``
   for PowerMax and any All Flash with PowerMax OS (5978) or greater.
+--------------------+----------------------------+----------+----------+
| PowerMax parameter | cinder.conf parameter      | Default  | Required |
skipping to change at line 1385 / 1395
$ virsh list --all

Id   Name                State
--------------------------------
1    instance-00000006   Running
#. Migrate the instance from ``HostB`` to ``HostA`` with:

   .. code-block:: console

      $ openstack server migrate --os-compute-api-version 2.30 \
        --live-migration --host HostA \
        server_lm_1

#. Run the command on step 3 above when the instance is back in available
   status. The hypervisor should be on Host A.

#. Run the command on Step 4 on Host A to confirm that the instance is
   created through ``virsh``.
14. Multi-attach support
------------------------

skipping to change at line 1631 / 1642
19. PowerMax online (in-use) device expansion
---------------------------------------------

.. table::

   +---------------------------------+-------------------------------------------+
   | uCode Level                     | Supported In-Use Volume Extend Operations |
   +----------------+----------------+--------------+--------------+-------------+
   | R1 uCode Level | R2 uCode Level | Sync         | Async        | Metro       |
   +================+================+==============+==============+=============+
   | 5978.669       | 5978.669       | Y            | Y            | Y           |
   +----------------+----------------+--------------+--------------+-------------+
   | 5978.669       | 5978.444       | Y            | Y            | Y           |
   +----------------+----------------+--------------+--------------+-------------+
   | 5978.669       | 5978.221       | Y            | Y            | N           |
   +----------------+----------------+--------------+--------------+-------------+
   | 5978.444       | 5978.444       | Y            | Y            | Y           |
   +----------------+----------------+--------------+--------------+-------------+
   | 5978.444       | 5978.221       | Y            | Y            | N           |
   +----------------+----------------+--------------+--------------+-------------+
   | 5978.221       | 5978.221       | Y            | Y            | N           |
   +----------------+----------------+--------------+--------------+-------------+
Assumptions, restrictions and prerequisites
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- ODE in the context of this document refers to extending a volume where it
  is in-use, that is, attached to an instance.
- The ``allow_extend`` is only applicable on Hybrid arrays or All Flash
  arrays with HyperMax OS. If included elsewhere, it is ignored.
- Where one array is at a lower uCode than the other, the environment is
  limited to the functionality of the lowest uCode level, i.e. if R1 is
  5978.444 and R2 is 5978.221, expanding a metro volume is not supported;
  both R1 and R2 need to be on 5978.444 uCode at a minimum.
20. PowerMax array and storage group tagging
--------------------------------------------

Unisphere for PowerMax 9.1 and later supports tagging of storage groups and
arrays, so the user can give their own 'tag' for ease of searching and/or
grouping.
Assumptions, restrictions and prerequisites
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- The storage group tag(s) is associated with a volume type extra spec key
  ``storagetype:storagegrouptags``.
- The array tag is associated with the backend stanza using key
  ``powermax_array_tag_list``. It expects a list of one or more comma
  separated values, for example
  ``powermax_array_tag_list=[value1,value2,value3]``
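For illustration, the storage group tags could be associated with a volume
type through its extra specs; the type and tag names below are invented for
the example:

.. code-block:: console

   $ openstack volume type set --property \
       storagetype:storagegrouptags=finance,prod POWERMAX_TAGGED_TYPE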
skipping to change at line 1712 / 1730
group name that are contained in the PowerMax driver storage groups and
masking view names. For current functionality please refer to
`PowerMax naming conventions`_ for more details.

As the storage group name and masking view name are limited to 64 characters,
the short host name needs to be truncated to 16 characters or less and the
port group name to 12 characters or less. This functionality offers a little
more flexibility to determine how these truncated components should look.
.. note::

   Once the port group and short host name have been overridden with any
   new format, it is not possible to return to the default format or change
   to another format if any volumes are in an attached state. This is
   because there is no way to determine the overridden format once
   ``powermax_short_host_name_template`` or
   ``powermax_port_group_name_template`` have been removed or changed.
Assumptions, restrictions, and prerequisites
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Backward compatibility with the old format is preserved.
- ``cinder.conf`` will have 2 new configuration options,
  ``short_host_name_template`` and ``port_group_name_template``.
- If a storage group, masking view or initiator group in the old naming
  convention already exists, it remains, and any new attaches will use
  the new naming convention where the label for the short host name
  and/or port group has been customized by the user.
skipping to change at line 1800 / 1827
| e.g.                              | group name and x uuid               |                                    |
| portGroupName[-6:]uuid[:5]        | characters created from md5         |                                    |
|                                   | hash of port group name             |                                    |
+-----------------------------------+-------------------------------------+------------------------------------+
| portGroupName[-x:]userdef         | Last x characters of the port       | Must be less than 12 characters    |
| e.g.                              | group name and a user defined x char|                                    |
| portGroupName[-6:]-test           | name. NB - the responsibility is on |                                    |
|                                   | the user for uniqueness             |                                    |
+-----------------------------------+-------------------------------------+------------------------------------+
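The md5-based template above can be sketched in a few lines. The exact
slicing and hashing rules used by the driver are an assumption here, so treat
this purely as an illustration of how a label such as
``portGroupName[-6:]uuid[:5]`` stays within the 12 character limit:

.. code-block:: python

   import hashlib

   def resolve_template(port_group_name: str) -> str:
       # Illustrative only: last 6 characters of the port group name,
       # followed by the first 5 hex characters of an md5 hash of the
       # full name (the "uuid" component described in the table above).
       digest = hashlib.md5(port_group_name.encode("utf-8")).hexdigest()
       result = port_group_name[-6:] + digest[:5]
       assert len(result) <= 12  # must fit the 12 character limit
       return result

   print(resolve_template("os-iscsi-pg"))

The hashed suffix is what distinguishes this form from the ``userdef``
variant, where the responsibility for uniqueness rests with the user.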
21. Snap ids replacing generations
----------------------------------

Snap ids were introduced to the PowerMax in microcode 5978.669.669 and
Unisphere for PowerMax 9.2. Generations existed previously and could cause
stale data if deleted out of sequence, even though we locked against this
occurrence. This happened because the newer generation(s) inherited the
generation number of a deleted predecessor. So in a series of generations
0, 1, 2 and 3, if generation 1 gets deleted, generation 2 becomes
generation 1, generation 3 becomes generation 2, and so on down the line.
Snap ids are unique to each SnapVX snapshot and will not change once
assigned at creation, so out of sequence deletions are no longer an issue.
Generations remain in use for arrays with microcode less than 5978.669.669.
Cinder supported operations
===========================

Volume replication support
--------------------------

Configure a single replication target
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Configure an SRDF group between the chosen source and target

skipping to change at line 1842 / 1883

.. note::

   If you are setting up an SRDF/Metro configuration, it is recommended that
   you configure a Witness or vWitness for bias management. Please see the
   `SRDF Metro Overview & Best Practices`_ guide for more information.

.. note::

   The PowerMax Cinder drivers do not support Cascaded SRDF.
.. note::

   The transmit idle functionality must be disabled on the R2 array for
   Asynchronous RDF groups. If this is not disabled it will prevent failover
   promotion in the event of access to the R1 array being lost.

   .. code-block:: console

      # symrdf -sid <sid> -rdfg <rdfg> set rdfa -transmit_idle off

.. note::

   When creating RDF enabled volumes, if there are existing volumes in the
   target storage group, all RDF pairs related to that storage group must
   have the same RDF state, i.e. RDF pair states must be consistent across
   all volumes in a storage group when attempting to create a new
   replication enabled volume. If mixed RDF pair states are found during a
   volume creation attempt, an error will be raised by the RDF state
   validation checks. In this event, please wait until all volumes in the
   storage group have reached a consistent state.
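The current pair states can be inspected from Solutions Enabler before
creating a new replicated volume; a query along these lines (the SID, storage
group and RDF group values are placeholders) lists the state of every pair in
the group:

.. code-block:: console

   # symrdf -sid <sid> -sg <sg> -rdfg <rdfg> query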
#. Enable replication in ``/etc/cinder/cinder.conf``.

   To enable the replication functionality in the PowerMax Cinder driver,
   it is necessary to create a replication volume-type. The corresponding
   back-end stanza in ``cinder.conf`` for this volume-type must then
   include a ``replication_device`` parameter. This parameter defines a
   single replication target array and takes the form of a list of key
   value pairs.

   .. code-block:: console

skipping to change at line 2118 / 2180
configurations.

In the event of a disaster, or where there is required downtime, an upgrade
of the primary array for example, the administrator can issue the failover
host command to fail over to the configured target:

.. code-block:: console

   # cinder failover-host cinder_host@POWERMAX_FC_REPLICATION
.. note::

   In cases where multiple replication devices are enabled, a backend_id
   must be specified during the initial failover. This can be achieved by
   appending ``--backend_id <backend_id>`` to the failover command above.
   The backend_id specified must match one of the backend_ids specified in
   the ``replication_device`` entries in ``cinder.conf``.
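For example, if one of the configured ``replication_device`` entries carries
the backend_id ``async-rep-1`` (an invented identifier), the initial failover
would be issued as:

.. code-block:: console

   # cinder failover-host cinder_host@POWERMAX_FC_REPLICATION --backend_id async-rep-1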
After issuing ``cinder failover-host``, Cinder will set the R2 array as the
target array for Cinder. However, to get existing instances to use this new
array and paths to volumes it is necessary to first shelve Nova instances
and then unshelve them. This will effectively restart the Nova instance and
re-establish data paths between Nova instances and the volumes on the R2
array.

.. code-block:: console

   # nova shelve <server>
   # nova unshelve [--availability-zone <availability_zone>] <server>

skipping to change at line 2154 / 2208

.. code-block:: console

   # cinder failover-host cinder_host@POWERMAX_FC_REPLICATION --backend_id default

After issuing the failover command to revert to the default backend host it
is necessary to re-issue the Nova shelve and unshelve commands to restore
the data paths between Nova instances and their corresponding back end
volumes. Once reverted to the default backend, volume and snapshot
provisioning operations can continue as normal.
Failover promotion
~~~~~~~~~~~~~~~~~~

Failover promotion can be used to transfer all existing RDF enabled volumes
to the R2 array and overwrite any references to the original R1 array. This
can be used in the event of total R1 array failure or in other cases where
an array transfer is warranted. If the R1 array is online and working and
the RDF links are still enabled, the failover promotion will automatically
delete RDF pairs as necessary. If the R1 array or the link to the R1 array
is down, a half deletepair must be issued manually for those volumes during
the failover promotion.
1. Issue the failover command:

.. code-block:: console

   # cinder failover-host <host>

2. Enable array promotion:

.. code-block:: console

   # cinder failover-host --backend_id=pmax_failover_start_array_promotion <host>
3. View and re-enable the cinder service:

.. code-block:: console

   # cinder service-list
   # cinder service-enable <host> <binary>
4. Remove all volumes from volume groups:

.. code-block:: console

   # cinder --os-volume-api-version 3.13 group-update --remove-volumes <Vol1ID, etc..> <volume_group_name>
5. Detach all volumes that are attached to instances:

.. code-block:: console

   # openstack server remove volume <instance_id> <volume_id>

.. note::

   Deleting the instance will call a detach volume for each attached volume.
   A terminate connection can be issued manually using the following command
   for volumes that are stuck in the attached state without an instance.

   .. code-block:: console

      # cinder --os-volume-api-version 3.50 attachment-delete <attachment_id>

6. Delete all remaining instances:

.. code-block:: console

   # nova delete <instance_id>
7. Create new volume types

New volume types must be created with references to the remote array. All
new volume types must adhere to the following guidelines:

.. code-block:: text

   1. Uses the same workload, SLO and compression setting as the previous
      R1 volume type.
   2. Uses the remote array instead of the primary for its pool name.
   3. Uses the same volume_backend_name as the previous volume type.
   4. Must not have replication enabled.

Example existing volume type extra specs:

.. code-block:: text

   pool_name='Gold+None+SRP_1+000297900330', replication_enabled='<is> True',
   storagetype:replication_device_backend_id='async-rep-1',
   volume_backend_name='POWERMAX_ISCSI_NONE'

Example new volume type extra specs:

.. code-block:: text

   pool_name='Gold+None+SRP_1+000197900049', volume_backend_name='POWERMAX_ISCSI_NONE'
8. Retype volumes to new volume types

Additional checks will be performed during failover promotion retype to
ensure the workload, compression and SLO settings meet the criteria
specified above when creating the new volume types.

.. code-block:: console

   # cinder retype --migration-policy on-demand <volume> <volume_type>

.. note::

   If the volume's RDF links are offline during this retype then a half
   deletepair must be performed manually after the retype. Please reference
   section 8.a. below for guidance on this process.
8.a. Retype and RDF half deletepair

In instances where the RDF links are offline and RDF pairs have been set to
the partitioned state there are additional requirements. In that scenario
the following order should be adhered to:

.. code-block:: text

   1. Retype all Synchronous volumes.
   2. Half_deletepair all Synchronous volumes using the default storage group.
   3. Retype all Asynchronous volumes.
   4. Half_deletepair all Asynchronous volumes using their management storage group.
   5. Retype all Metro volumes.
   6. Half_deletepair all Metro volumes using their management storage group.
   7. Delete the Asynchronous and Metro management storage groups.

.. note::

   A half deletepair cannot be performed on Metro enabled volumes unless
   the symforce option has been enabled in the symapi options. In
   symapi/config/options uncomment and set 'SYMAPI_ALLOW_RDF_SYMFORCE = True'.

.. code-block:: console

   # symrdf -sid <sid> -sg <sg> -rdfg <rdfg> -force -symforce half_deletepair
9. Issue failback

Issuing the failback command will disable both the failover and promotion
flags. Please ensure all volumes have been retyped and all replication pairs
have been deleted before issuing this command.

.. code-block:: console

   # cinder failover-host --backend_id default <host>

10. Update cinder.conf

Update the cinder.conf file to include details for the new primary array. For
more information please see the Configure block storage in cinder.conf section
of this documentation.

11. Restart the cinder services

Restart the cinder volume service to allow it to detect the changes made to
the cinder.conf file.

12. Set Metro volumes to ready state

Metro volumes will be set to a Not Ready state after performing RDF pair
cleanup. Set these volumes back to Ready state to allow them to be attached
to instances. The U4P instance must be restarted for this change to be
detected.

.. code-block:: console

   # symdev -sid <sid> ready -devs <dev_id1, dev_id2>
Asynchronous and metro replication management groups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Asynchronous and metro volumes in an RDF session, i.e. belonging to an SRDF
group, must be managed together for RDF operations (although there is a
``consistency exempt`` option for creating and deleting pairs in an Async
group). To facilitate this management, we create an internal RDF management
storage group on the backend. This RDF management storage group will use the
following naming convention:
Volume retype - storage assisted volume migration
-------------------------------------------------

Volume retype with storage assisted migration is now supported for
PowerMax arrays. Cinder requires that for storage assisted migration, a
volume cannot be retyped across backends. To use storage assisted volume
retype, follow these steps:
.. note::

   From the Ussuri release of OpenStack the PowerMax driver supports retyping
   in-use volumes to and from replication enabled volume types, with the
   limited exception of volumes with Metro replication enabled. To retype to
   a volume type that is Metro enabled the volume **must** first be detached
   and then retyped. The reason for this is that the paths from the instance
   to the Metro R1 & R2 volumes must be initialised, and this is not possible
   on the R2 device whilst a volume is attached.
.. note::

   When multiple replication devices are configured and a volume is retyped
   from one replication mode to another, the R1 device ID is preserved and a
   new R2 side device is created. As a result, the device ID on the R2 array
   may be different after the retype operation has completed.
.. note::

   Unisphere.
Cinder backup support
---------------------

The PowerMax Cinder driver supports Cinder backup functionality. For further
information on setup, configuration and usage please see the official
OpenStack `volume backup`_ documentation and related `volume backup CLI`_
guide.
Port group & port load balancing
--------------------------------

By default port groups are selected at random from ``cinder.conf`` when
connections are initialised between volumes on the backend array and
compute instances in Nova. If a port group is set in the volume type extra
specifications this will take precedence over any port groups configured in
``cinder.conf``. Port selection within the chosen port group is also random
by default.

With port group and port load balancing, the PowerMax for Cinder driver can
instead select the port group and port with the lowest load. The load metric
is user-defined in both instances, so the selection process can better match
the needs of the user and their environment. Available metrics are detailed
in the ``performance metrics`` section.

Port groups are reported on at five-minute time deltas (diagnostic), and FE
ports are reported on at one-minute time deltas (real-time) if real-time
metrics are enabled, else at the default five-minute time delta (diagnostic).
The window over which performance metrics are analysed is a user-configured
option in ``cinder.conf``; this is detailed in the ``configuration`` section.
Calculating load
~~~~~~~~~~~~~~~~

The process by which port group or port load is calculated is the same for
both. The user specifies the look-back window, which determines how many
performance intervals to measure; 60 minutes will give 12 intervals of 5
minutes each, for example. If no look-back window is specified, or it is set
to 0, only the most recent performance metric is analysed. This gives a
slight performance improvement, but with the improvements made to the
performance REST endpoints for load this gain is negligible. For real-time
stats a minimum of 1 minute is required.

Once a call is made to the performance REST endpoints, the performance data
for that port group or port is extracted. The metric values are then summed
and divided by the count of intervals to get the average for the look-back
window.

The performance metric average value for each asset is added to a Python
heap. Once all assets have been measured the lowest value will always be at
position 0 in the heap, so no extra search pass is required.
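The averaging and heap selection described above can be sketched in Python.
This is an illustrative sketch only, not the driver's actual code; the port
group names and metric samples below are hypothetical.

.. code-block:: python

   import heapq


   def average_load(intervals):
       """Average the per-interval metric values for one asset.

       With a look-back window of 0 only one (the most recent) sample is
       supplied, so the average is simply that value.
       """
       return sum(intervals) / len(intervals)


   def select_least_loaded(assets):
       """Pick the asset (port group or port) with the lowest average load.

       ``assets`` maps an asset name to its list of performance samples.
       Each average is pushed onto a heap; the smallest value is then at
       position 0, so no extra search pass is required.
       """
       heap = []
       for name, intervals in assets.items():
           heapq.heappush(heap, (average_load(intervals), name))
       return heap[0]


   # Hypothetical PercentBusy samples over a 15 minute (3 x 5 min) window.
   load, winner = select_least_loaded({
       "PG-iscsi-a": [40.0, 55.0, 61.0],
       "PG-iscsi-b": [12.0, 18.0, 15.0],
   })

Here ``winner`` is ``PG-iscsi-b`` with an average load of ``15.0``; the real
driver obtains the samples from the Unisphere performance REST endpoints
rather than literals.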
Pre-requisites
~~~~~~~~~~~~~~

Before load balancing can be enabled in the PowerMax for Cinder driver,
performance metrics collection must be enabled in Unisphere. Real-time
performance metrics collection is enabled separately from diagnostic metrics
collection. Performance metric collection is only available for local arrays
in Unisphere.

After performance metrics registration there is a time delay before Unisphere
records performance metrics. Adequate time must be given before enabling load
balancing in Cinder, else the default random selection method will be used.
It is recommended to wait 4 hours after performance registration before
enabling load balancing in Cinder.
Configuration
~~~~~~~~~~~~~

A number of configuration options are available to users so load balancing
can be set to better suit the needs of the environment. These configuration
options are detailed in the table below.

.. table:: Load balance cinder.conf configuration options

   +-----------------------------+----------------+-----------------+----------------------------------------+
   | ``cinder.conf`` parameter   | Options        | Default         | Description                            |
   +=============================+================+=================+========================================+
   | ``load_balance``            | ``True/False`` | ``False``       | | Enable/disable load balancing for    |
   |                             |                |                 | | a PowerMax backend.                  |
   +-----------------------------+----------------+-----------------+----------------------------------------+
   | ``load_balance_real_time``  | ``True/False`` | ``False``       | | Enable/disable real-time performance |
   |                             |                |                 | | metrics for Port level metrics       |
   |                             |                |                 | | (not available for Port Group).      |
   +-----------------------------+----------------+-----------------+----------------------------------------+
   | ``load_data_format``        | ``Avg/Max``    | ``Avg``         | | Performance data format, not         |
   |                             |                |                 | | applicable for real-time.            |
   +-----------------------------+----------------+-----------------+----------------------------------------+
   | ``load_lookback``           | ``int``        | ``60``          | | How far in minutes to look back for  |
   |                             |                |                 | | diagnostic performance metrics in    |
   |                             |                |                 | | load calculation, minimum of 0,      |
   |                             |                |                 | | maximum of 1440 (24 hours).          |
   +-----------------------------+----------------+-----------------+----------------------------------------+
   | ``load_real_time_lookback`` | ``int``        | ``1``           | | How far in minutes to look back for  |
   |                             |                |                 | | real-time performance metrics in     |
   |                             |                |                 | | load calculation, minimum of 1,      |
   |                             |                |                 | | maximum of 60 (1 hour).              |
   +-----------------------------+----------------+-----------------+----------------------------------------+
   | ``port_group_load_metric``  | See below      | ``PercentBusy`` | | Metric used for port group load      |
   |                             |                |                 | | calculation.                         |
   +-----------------------------+----------------+-----------------+----------------------------------------+
   | ``port_load_metric``        | See below      | ``PercentBusy`` | | Metric used for port load            |
   |                             |                |                 | | calculation.                         |
   +-----------------------------+----------------+-----------------+----------------------------------------+
Port-Group Metrics
~~~~~~~~~~~~~~~~~~

.. table:: Port-group performance metrics

   +------------------+--------------------+-----------------------------------------------------------+
   | Metric           | cinder.conf option | Description                                               |
   +==================+====================+===========================================================+
   | % Busy           | ``PercentBusy``    | The percent of time the port group is busy.               |
   +------------------+--------------------+-----------------------------------------------------------+
   | Avg IO Size (KB) | ``AvgIOSize``      | | Calculated value: (HA Kbytes transferred per sec /      |
   |                  |                    | | total IOs per sec)                                      |
   +------------------+--------------------+-----------------------------------------------------------+
   | Host IOs/sec     | ``IOs``            | | The number of host IO operations performed each second, |
   |                  |                    | | including writes and random and sequential reads.       |
   +------------------+--------------------+-----------------------------------------------------------+
   | Host MBs/sec     | ``MBs``            | The number of host MBs read each second.                  |
   +------------------+--------------------+-----------------------------------------------------------+
   | MBs Read/sec     | ``MBRead``         | The number of reads per second in MBs.                    |
   +------------------+--------------------+-----------------------------------------------------------+
   | MBs Written/sec  | ``MBWritten``      | The number of writes per second in MBs.                   |
   +------------------+--------------------+-----------------------------------------------------------+
   | Reads/sec        | ``Reads``          | The average number of host reads performed per second.    |
   +------------------+--------------------+-----------------------------------------------------------+
   | Writes/sec       | ``Writes``         | The average number of host writes performed per second.   |
   +------------------+--------------------+-----------------------------------------------------------+
Port Metrics
~~~~~~~~~~~~

.. table:: Port performance metrics

   +--------------------+-----------------------+---------------------+------------------------------------------------------------+
   | Metric             | cinder.conf option    | Real-Time Supported | Description                                                |
   +====================+=======================+=====================+============================================================+
   | % Busy             | ``PercentBusy``       | Yes                 | The percent of time the port is busy.                      |
   +--------------------+-----------------------+---------------------+------------------------------------------------------------+
   | Avg IO Size (KB)   | ``AvgIOSize``         | Yes                 | | Calculated value: (HA Kbytes transferred per sec /       |
   |                    |                       |                     | | total IOs per sec)                                       |
   +--------------------+-----------------------+---------------------+------------------------------------------------------------+
   | Host IOs/sec       | ``IOs``               | Yes                 | | The number of host IO operations performed each second,  |
   |                    |                       |                     | | including writes and random and sequential reads.        |
   +--------------------+-----------------------+---------------------+------------------------------------------------------------+
   | Host MBs/sec       | ``MBs``               | Yes                 | The number of host MBs read each second.                   |
   +--------------------+-----------------------+---------------------+------------------------------------------------------------+
   | MBs Read/sec       | ``MBRead``            | Yes                 | The number of reads per second in MBs.                     |
   +--------------------+-----------------------+---------------------+------------------------------------------------------------+
   | MBs Written/sec    | ``MBWritten``         | Yes                 | The number of writes per second in MBs.                    |
   +--------------------+-----------------------+---------------------+------------------------------------------------------------+
   | Reads/sec          | ``Reads``             | Yes                 | | The number of read operations performed by the port per  |
   |                    |                       |                     | | second.                                                  |
   +--------------------+-----------------------+---------------------+------------------------------------------------------------+
   | Writes/sec         | ``Writes``            | Yes                 | | The number of write operations performed each second by  |
   |                    |                       |                     | | the port.                                                |
   +--------------------+-----------------------+---------------------+------------------------------------------------------------+
   | Speed Gb/sec       | ``SpeedGBs``          | No                  | Speed.                                                     |
   +--------------------+-----------------------+---------------------+------------------------------------------------------------+
   | Response Time (ms) | ``ResponseTime``      | No                  | The average response time for the reads and writes.        |
   +--------------------+-----------------------+---------------------+------------------------------------------------------------+
   | Read RT (ms)       | ``ReadResponseTime``  | No                  | The average time it takes to serve one read IO.            |
   +--------------------+-----------------------+---------------------+------------------------------------------------------------+
   | Write RT (ms)      | ``WriteResponseTime`` | No                  | The average time it takes to serve one write IO.           |
   +--------------------+-----------------------+---------------------+------------------------------------------------------------+
Upgrading from SMI-S based driver to REST API based driver
==========================================================

Seamless upgrades from an SMI-S based driver to REST API based driver,
following the setup instructions above, are supported with a few exceptions:

#. Seamless upgrade from SMI-S (Ocata and earlier) to REST (Pike and later)
   is now available for all functionality, including Live Migration.

#. Consistency groups are deprecated in Pike. Generic Volume Groups are
   supported from Pike onwards.
.. Document Hyperlinks
.. _Dell EMC Support: https://www.dell.com/support
.. _Openstack CLI: https://docs.openstack.org/cinder/latest/cli/cli-manage-volumes.html#volume-types
.. _over-subscription documentation: https://docs.openstack.org/cinder/latest/admin/blockstorage-over-subscription.html
.. _configuring migrations: https://docs.openstack.org/nova/latest/admin/configuring-migrations.html
.. _live migration usage: https://docs.openstack.org/nova/latest/admin/live-migration-usage.html