Current Series Release Notes

27.0.0-93

New Features

  • VAST Data driver: new Cinder driver integrating OpenStack with VAST Data’s Storage System. Volumes are backed by VAST’s NVMe storage and accessed using VAST’s API.

  • Added a disk_geometry configuration option to volume drivers that advertises each volume’s logical_block_size and physical_block_size to the guest. The default value of ‘512’ is equivalent to the previous behavior (logical 512, physical 512). The recommended value of 512e (logical 512, physical 4096) hints that 4k-aligned I/O is optimal while remaining compatible with older operating systems. This has the most visible impact for Windows guests, which would otherwise submit 512-byte I/O; Linux typically aligns to 4k already. On back ends such as Ceph, aligned 4k requests prevent extra read-modify-write cycles and can noticeably improve HDD-backed pool performance. Note that guests need a VirtIO driver that includes the fix from https://bugzilla.redhat.com/show_bug.cgi?id=1428641 (released in 2019 or later) to use this feature. The default value of the disk_geometry option will change to 512e in the 2027.1 release.
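    As a minimal sketch, the option can be set per back-end section in cinder.conf (the section name lvm-1 is illustrative, not part of the release note):

    ```ini
    [lvm-1]
    # Advertise logical 512 / physical 4096 sectors to guests.
    # Recommended value; becomes the default in the 2027.1 release.
    disk_geometry = 512e
    ```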

  • Added two new configuration options to support region filtering in multi-region OpenStack deployments: backup_swift_region_name in the [DEFAULT] section for Swift backup driver region selection, and region_name in the [glance] section for Glance client region selection. These options allow Cinder to correctly connect to services in the specified region when multiple regions are configured in the Keystone service catalog.
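    A hedged cinder.conf sketch of the two options together (the region name RegionOne is illustrative):

    ```ini
    [DEFAULT]
    # Region used when selecting the Swift endpoint for backups
    backup_swift_region_name = RegionOne

    [glance]
    # Region used when selecting the Glance endpoint
    region_name = RegionOne
    ```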

  • Hitachi driver: Added a Volume Replication feature. Using this feature requires the following Hitachi remote replication software licenses:

    • Universal Replicator

    • TrueCopy

  • Hitachi driver: Add support for extending volumes that have snapshots associated with them.

  • Dell PowerFlex driver: Added support for limiting the number of clones from image cache volumes. A new configuration option powerflex_max_image_cache_vtree_size controls the maximum number of clones that can be created from a single image cache volume. This helps prevent reaching PowerFlex vTree snapshot limits when multiple instances are booted from the same cached image. When set to 0 (the default), the maximum number of clones is limited only by the PowerFlex vTree snapshot limit. When the vTree size limit is reached, the existing cache entry is deleted and a fresh cache entry is created.
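    A minimal cinder.conf sketch (the back-end section name powerflex-1 and the limit of 100 are illustrative):

    ```ini
    [powerflex-1]
    # Cap the number of clones per image cache volume; 0 (the default)
    # defers to the PowerFlex vTree snapshot limit.
    powerflex_max_image_cache_vtree_size = 100
    ```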

Upgrade Notes

  • With the fix for Bug #2138280, starting from os-brick version 6.15.0 or later, users must retain the /opt/emc/scaleio/openstack/connector.conf file for volumes that were attached using the legacy flow prior to the 6.13.0 release, until all such legacy volumes are detached. For volumes attached using the new attachment flow, the /opt/emc/scaleio/openstack/connector.conf file is no longer required.

  • A new volume driver configuration option disk_geometry has been introduced, which controls the logical_block_size and physical_block_size advertised to the guest. The default value of ‘512’ is equivalent to the behavior of previous releases (logical 512, physical 512), while the recommended value of 512e (logical 512, physical 4096) hints that 4k-aligned I/O is optimal while remaining compatible with older operating systems. Beginning with the 2027.1 release, the default value will change to 512e. The configured value takes effect whenever a volume is attached. However, existing libvirt VMs are not updated until their configuration file is re-generated, for example by detaching and re-attaching the volume, cold migrating the VM to another host, or live migrating (to update the configuration) and later rebooting the guest (so that the guest re-reads the information from the hardware). Environments with Windows guests that want to take advantage of this optimization may want to perform such operations intentionally, and should also ensure the guest VirtIO drivers are updated to a version from 2019 or later.

  • Two new configuration options have been added for multi-region deployments: backup_swift_region_name in the [DEFAULT] section and region_name in the [glance] section. These options are optional and default to not filtering by region (backward compatible behavior). Deployers in multi-region environments should configure these options to ensure Cinder connects to the correct regional endpoints for Glance and Swift services.

  • The WSGI script cinder-wsgi has been removed. Deployment tooling should instead reference the Python module path for this service, cinder.wsgi.api, if the chosen WSGI server supports module paths (gunicorn, uWSGI), or provide a .wsgi script if it does not (mod_wsgi).
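    A minimal uWSGI sketch, assuming the module exposes the conventional `application` callable (the socket address and process count are illustrative, not prescribed by this release note):

    ```ini
    [uwsgi]
    # Serve the Cinder API from its Python module path
    module = cinder.wsgi.api
    callable = application
    http-socket = 127.0.0.1:8776
    processes = 2
    ```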

Bug Fixes

  • Hitachi driver: Enable support for VSP One B20. VSP One B20 supports ADR functionality that offers up to 4:1 data saving, and Thin Image Advanced that supports superior ROW functionality. In addition, the B20 supports vClone technology that allows for instantaneous cloning and shared data between clones.

  • Fixed the Barbican key migration code to correctly pass the region name to the Barbican client when migrating encryption keys from ConfKeyManager to Barbican. The migration code now reads barbican_region_name from the [barbican] configuration section and passes it as region_name to the barbicanclient.Client(). This ensures that key migration operations connect to the correct Barbican endpoint in multi-region OpenStack deployments.

  • NFS driver bug #2073146: Fixed volume creation failing when the source image is stored in Glance using Cinder/NFS as the store.

  • Bug #2106680: Configuration of the volume_clear_ionice option now behaves consistently with the other volume_clear_* options. Keep in mind that the volume_clear_* options apply only to the LVM driver and only when thick volumes are being used. See the data leakage section of the Security page of the Cinder Administration Guide for more information.

  • The NetApp driver supports SnapMirror active sync (SM-AS) replication, which is available only on ONTAP 9.15.1 and later. Additional validations were added to the driver to check the ONTAP version and to initialize SnapMirror as part of creating an SM-AS relationship.

  • Bug #2119123: Fixed NetApp Cinder driver Active Sync issues.

  • Bug #2121812: Default the node model to an empty string in the NetApp driver.

  • NetApp driver bug #2125054: Fixed default_filter_function for the NetApp driver to maintain consistency across platforms such as ONTAP 9 and ASAr2.

    Admins can add a custom filter to impose a total_volumes limit on ONTAP backends, for example: filter_function=capabilities.total_volumes < <max_volumes_admin_wants>

    Note: The admin must also configure scheduler_default_filters in the [DEFAULT] section of cinder.conf to include the DriverFilter; see [1] for the default filter list.

    [1] https://docs.openstack.org/cinder/latest/configuration/block-storage/samples/cinder.conf.html
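    A hedged sketch of the two settings together in cinder.conf (the back-end section name netapp-1 and the limit of 500 are illustrative, and the filter list shown is an abbreviated example, not the full default list from [1]):

    ```ini
    [DEFAULT]
    # DriverFilter must be in the scheduler filter list for
    # filter_function to be evaluated
    scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,DriverFilter

    [netapp-1]
    # Reject this backend once it reports 500 or more volumes
    filter_function = capabilities.total_volumes < 500
    ```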

  • NetApp driver bug #2124264: Pooling for the ASAr2 cluster was failing because the driver used the ONTAP 9 libraries to retrieve the list of volumes and namespaces and their sizes. The corresponding ASAr2 cluster libraries have been added to ensure the correct methods are called, preventing these pooling failures.

  • NetApp driver bug #2128652: Implemented logic to update the performance and deduplication statistics of backend pools at configurable intervals. Introduced two new NetApp driver options, netapp_performance_cache_expiry_duration and netapp_dedupe_cache_expiry_duration, which control the frequency of performance and deduplication data retrieval from ONTAP.

    These options allow fine-tuning of the frequency at which performance and deduplication metrics are queried. Since both the delete-volume workflow and the periodic interval task update the volume stats, running them simultaneously can degrade storage backend performance. Users can adjust these parameters so that performance and deduplication metrics are fetched at configurable intervals rather than during every periodic task or volume delete workflow.

    Performance statistics updates are further optimized by caching the performance object list information, which reduces the number of backend calls.
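    A hedged cinder.conf sketch of the two options (the back-end section name netapp-1, the values, and the assumption that the durations are expressed in seconds are all illustrative; the release note does not specify units or defaults):

    ```ini
    [netapp-1]
    # Refresh cached performance and deduplication stats at most
    # this often (illustrative values; units assumed to be seconds)
    netapp_performance_cache_expiry_duration = 600
    netapp_dedupe_cache_expiry_duration = 600
    ```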

  • Bug #2132083: When creating a volume from an image, a storage connectivity failure or similar error could cause the local block device node to disappear. In this situation, Cinder now avoids writing the image to a file in /dev/ when calling qemu-img.

  • Bug #2137642: Fixed issue where excessively large volume sizes could overflow database fields, causing subsequent volume operations to fail.

  • PowerFlex Driver Bug #2138280: Fixed an issue where detach operations failed for volumes attached before the 6.13.0 release that did not include an sdc_guid in the connector information. From os-brick version 6.15.0 or later, detach operations for these legacy volumes are handled by os-brick to ensure backward compatibility.

  • Bug #2121361: Fixed issue when performing optimized image-volume clone using a backend with clone_across_pools capability enabled.

  • NetApp driver bug #1880862: Fixed a failure when unmapping a volume that is not mapped.

  • Fixed the Glance client endpoint selection logic to properly filter endpoints by region when region_name is configured in the [glance] section. The previous implementation could incorrectly append endpoints when region_name was not configured. Now, when region_name is set, Cinder will only use endpoints that match the specified region. If no matching endpoint is found, it falls back to the first available endpoint. When region_name is not configured, Cinder uses the first endpoint (backward compatible behavior). This ensures correct behavior in multi-region OpenStack deployments.

  • Bug #2056081: When cloning a volume from a source volume, the user ID of the source volume’s owner was used instead of the current user’s. The user ID provided in the user context that initiated the operation is now used.

  • HPE 3PAR driver bug #2125037: Fixed a session-sharing issue that caused attachment operations to occasionally fail with a Conflict error.

  • Bug #2104334: Fixed optimized migration path for LVM volumes during retype.

  • HPE Nimble driver: Report the thin provisioning capability correctly according to the backend configuration.