IBM Spectrum Accelerate: Product Overview

IBM Spectrum Accelerate
Version 11.5.4
Product Overview
IBM
GC27-6700-05
Note
Before using this information and the product it supports, read the information in “Notices” on page 101.
Edition notice
Publication number: GC27-6700-05. This publication applies to version 11.5.4 of IBM Spectrum Accelerate™ and
to all subsequent releases and modifications until otherwise indicated in a newer publication.
© Copyright IBM Corporation 2016.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

Figures . . . . . . . . . . . . . . . v
Tables . . . . . . . . . . . . . . . vii
About this document . . . . . . . . . ix
  Purpose and scope . . . . . . . . . . ix
  Intended audience . . . . . . . . . . ix
  Document conventions . . . . . . . . . ix
  Related information and publications . . . . ix
  Terms and abbreviations . . . . . . . . x
  IBM Publications Center . . . . . . . . x
  Sending or posting your comments . . . . . x
  Getting information, help, and service . . . . xi
Chapter 1. Introduction . . . . . . . . 1
  Features and functionality . . . . . . . 2
  Hardware . . . . . . . . . . . . . 3
  Management options . . . . . . . . . 3
  Reliability . . . . . . . . . . . . 4
    Data mirroring . . . . . . . . . . 4
    Self-healing mechanisms . . . . . . . 4
    Protected cache . . . . . . . . . . 5
  Performance . . . . . . . . . . . . 5
  Functionality . . . . . . . . . . . 6
    Snapshot management . . . . . . . . 6
    Consistency groups for snapshots . . . . 6
    Storage pools . . . . . . . . . . . 6
    Remote monitoring and diagnostics . . . . 6
    SNMP . . . . . . . . . . . . . 6
    Multipathing . . . . . . . . . . . 7
    Automatic event notifications . . . . . . 7
    Management through GUI and CLI . . . . 7
    External replication mechanisms . . . . . 7
    Support for solid-state drive (SSD) caching . . 7
    Upgradability . . . . . . . . . . . 8
Chapter 2. Connectivity . . . . . . . . 9
  IP and Ethernet connectivity . . . . . . . 9
    Ethernet ports . . . . . . . . . . 9
    Management connectivity . . . . . . . 9
    Interconnect connectivity . . . . . . . 10
  Host system attachment . . . . . . . . 11
    Dynamic rate adaptation . . . . . . . 11
    Attaching volumes to hosts . . . . . . 11
    Excluding LUN0 . . . . . . . . . . 11
    Advanced host attachment . . . . . . 12
    CHAP authentication of iSCSI hosts . . . . 12
    Clustering hosts into LUN maps . . . . . 13
    Volume mappings exceptions . . . . . . 14
    Support for VMware extended operations . . 14
      Writing zeroes . . . . . . . . . 14
      Hardware-assisted locking . . . . . . 15
      Fast copy . . . . . . . . . . . 15
    QoS performance classes . . . . . . . 16
Chapter 3. Storage pools . . . . . . . 17
  Protecting snapshots on a storage pool level . . 18
  Thin provisioning . . . . . . . . . . 18
Chapter 4. Volumes and snapshots . . 21
  The volume life cycle . . . . . . . . . 21
  Support for Symantec Storage Foundation Thin Reclamation . . 22
  Snapshots . . . . . . . . . . . . 23
    Redirect on write . . . . . . . . . 23
    Storage utilization . . . . . . . . . 26
    The snapshot auto-delete priority . . . . 26
    Snapshot name and association . . . . . 26
    The snapshot lifecycle . . . . . . . . 26
    Snapshot and snapshot group format . . . 31
Chapter 5. Consistency groups . . . . 33
  Creating a consistency group . . . . . . 33
  Taking a snapshot of a Consistency Group . . 34
  The snapshot group life cycle . . . . . . 35
  Restoring a consistency group . . . . . . 36
Chapter 6. Synchronous remote mirroring . . . 39
  Remote mirroring basic concepts . . . . . 39
  Synchronous mirroring operation . . . . . 40
  Synchronous mirroring configuration and activation options . . 41
  Synchronous mirroring statuses . . . . . . 42
  Synchronous mirroring role switchover and role change . . 45
    Role switchover when remote mirroring is operational . . 45
    Role switchover when remote mirroring is not operational . . 46
  I/O operations in synchronous mirroring . . . 47
  Coupling synchronization process . . . . . 48
  Synchronous mirroring of consistency groups . . 50
Chapter 7. Asynchronous remote mirroring . . . 51
  Asynchronous mirroring highlights . . . . . 52
  Snapshot-based technology in asynchronous mirroring . . 53
  Disaster recovery scenarios in asynchronous mirroring . . 54
Chapter 8. Volume migration with IBM Hyper-Scale Mobility . . 57
  The IBM Hyper-Scale Mobility process . . . . 57
Chapter 9. Data-at-rest encryption . . . 61
  HIPAA compatibility . . . . . . . . . 61
Chapter 10. Data migration . . . . . . 63
  I/O handling in data migration . . . . . . 63
  Data migration stages . . . . . . . . . 64
  Handling failures . . . . . . . . . . 66
Chapter 11. Event handling . . . . . . 67
  Event information . . . . . . . . . . 67
  Viewing events . . . . . . . . . . . 68
  Event notification rules . . . . . . . . 68
  Alerting events configuration limitations . . . 69
  Defining destinations . . . . . . . . . 69
  Defining gateways . . . . . . . . . . 69
  Monitoring Spectrum Accelerate using SNMP traps . . 70
Chapter 12. Access control . . . . . . 73
  User roles and permission levels . . . . . 73
    Predefined users . . . . . . . . . 75
    Application administrator . . . . . . . 76
  Authentication methods . . . . . . . . 77
    Native authentication . . . . . . . . 78
    LDAP authentication . . . . . . . . 78
    Switching between LDAP and native authentication modes . . 83
    Access control commands . . . . . . . 84
Chapter 13. Multi-Tenancy . . . . . . 87
  Multi-tenancy principles . . . . . . . . 87
  Multi-tenancy concept diagram . . . . . . 89
  Working with multi-tenancy . . . . . . . 89
Chapter 14. Non-disruptive code load . . 93
Glossary . . . . . . . . . . . . . . 95
Notices . . . . . . . . . . . . . . 101
  Trademarks . . . . . . . . . . . . 102
Index . . . . . . . . . . . . . . . 103
Figures

1. Volume operations . . . . . . . . . . 22
2. The Redirect-on-Write process: the volume's data and pointer . . 24
3. The Redirect-on-Write process: when a snapshot is taken the header is written first . . 24
4. The Redirect-on-Write process: the new data is written . . 25
5. The Redirect-on-Write process: The snapshot points at the old data where the volume points at the new data . . 25
6. The snapshot life cycle . . . . . . . . 27
7. Restoring volumes . . . . . . . . . 29
8. Restoring snapshots . . . . . . . . . 30
9. The Consistency Group's lifecycle . . . . . 33
10. A snapshot is taken for each volume of the Consistency Group . . 34
11. Most snapshot operations can be applied to snapshot groups . . 35
12. Synchronous remote mirroring scheme . . . 40
13. Coupling states and actions . . . . . . 49
14. Synchronous remote mirroring concept . . . 51
15. Asynchronous mirroring - no extended response time lag . . 52
16. Flow of the IBM Hyper-Scale Mobility . . . 58
17. Data migration steps . . . . . . . . 65
18. XIV GUI – The Misc tab in XIV Settings . . . 72
19. The way the system validates users through issuing LDAP searches . . 82
Tables

1. Synchronous mirroring statuses . . . . . 43
2. The IBM Hyper-Scale Mobility process . . . 58
3. Available user roles . . . . . . . . . 73
4. Application administrator commands . . . . 77
About this document
IBM® Spectrum Accelerate™ is a member of the IBM Spectrum Storage™ family of
software-defined storage products that allow enterprises to use their own server
and disk infrastructure for assembling, setting up, and running one or more
storage systems that incorporate the proven IBM XIV® storage technology.
Purpose and scope
This document provides a functional feature overview of IBM Spectrum
Accelerate™, a member of the IBM Spectrum Storage family of software-defined
storage solutions. Relevant tables, charts, graphic interfaces, sample outputs, and
appropriate examples are also provided.
Intended audience
This document is intended for administrators, IT staff, and other professionals who
work with, or intend to work with, Spectrum Accelerate.
Document conventions
These notices are used in this guide to highlight key information.
Note: These notices provide important tips, guidance, or advice.
Important: These notices provide information or advice that might help you avoid
inconvenient or difficult situations.
Attention: These notices indicate possible damage to programs, devices, or data.
An attention notice appears before the instruction or situation in which damage
can occur.
Related information and publications
You can find additional information and publications related to IBM Spectrum
Accelerate on the following information sources.
v IBM Spectrum Accelerate marketing portal (ibm.com/systems/storage/
spectrum/accelerate)
v IBM Spectrum Accelerate on IBM Knowledge Center (ibm.com/support/
knowledgecenter/STZSWD) – on which you can find the following related
publications:
– IBM Spectrum Accelerate – Release Notes
– IBM Spectrum Accelerate – Planning, Deployment, and Operation Guide
– IBM Spectrum Accelerate – Command-Line Interface (CLI) Reference Guide
– IBM XIV Management Tools – Release Notes
– IBM XIV Management Tools – Operations Guide
– Platform and application integration solutions for IBM Spectrum Accelerate –
See under 'Platform and application integration'
v IBM XIV Storage System on IBM Knowledge Center (ibm.com/support/
knowledgecenter/STJTAG) – on which you can find the following related
publications:
– IBM XIV Management Tools – Release Notes
– IBM XIV Management Tools – Operations Guide
v VMware Documentation (vmware.com/support/pubs)
v VMware Knowledge Base (kb.vmware.com)
v VMware KB article on IBM Spectrum Accelerate (kb.vmware.com/kb/2111406)
Terms and abbreviations
A complete list of terms and abbreviations can be found in the “Glossary” on page
95.
IBM Publications Center
The IBM Publications Center is a worldwide central repository for IBM product
publications and marketing material.
The IBM Publications Center website (ibm.com/shop/publications/order) offers
customized search functions to help you find the publications that you need. You
can view or download publications at no charge.
Sending or posting your comments
Your feedback is important in helping to provide the most accurate and highest
quality information.
Procedure
To submit any comments about this guide:
v Go to IBM Spectrum Accelerate on IBM Knowledge Center (ibm.com®/support/
knowledgecenter/STZSWD), drill down to the relevant page, and then click the
Feedback link that is located at the bottom of the page.
The feedback form is displayed and you can use it to enter and submit your
comments privately.
v You can post a public comment on the Knowledge Center page that you are
viewing, by clicking Add Comment. For this option, you must first log in to
IBM Knowledge Center with your IBMid.
v You can send your comments by email to starpubs@us.ibm.com. Be sure to
include the following information:
– Exact publication title and product version
– Publication form number (for example: SC01-0001-01)
– Page, table, or illustration numbers that you are commenting on
– A detailed description of any information that should be changed
Note: When you send information to IBM, you grant IBM a nonexclusive right
to use or distribute the information in any way it believes appropriate without
incurring any obligation to you.
Getting information, help, and service
If you need help, service, or technical assistance, or if you want more information
about IBM products, various sources are available to assist you. You can view the
following websites to get information about IBM products and services and to find
the latest technical information and support.
v IBM website (ibm.com)
v IBM Support Portal website (www.ibm.com/storage/support)
v IBM Directory of Worldwide Contacts website (www.ibm.com/planetwide)
Chapter 1. Introduction
IBM Spectrum Accelerate™ is a key member of the IBM Spectrum Storage portfolio.
It is a highly flexible storage solution that enables rapid deployment of block
storage services for new and traditional workloads: on premises, off premises, or
in a combination of both.
Designed to help enable cloud environments, it is based on the proven technology
delivered in IBM storage systems. In addition to Spectrum Accelerate, the IBM
Spectrum Storage™ family of software-defined storage (SDS) products currently
includes the following software applications:
v Spectrum Virtualize
v Spectrum Scale
v Spectrum Control
v Spectrum Protect
v Spectrum Archive
For more information about the Spectrum Storage portfolio, go to
http://www.ibm.com/systems/storage/spectrum.
Spectrum Accelerate is provided as a software-defined storage product for VMware
ESXi hypervisors and can be installed on 3 to 15 physical ESXi hosts (servers),
which together comprise a single storage system. Spectrum
Accelerate pools server-attached storage into a consolidated hyper store. The
software leverages the same technology used by IBM storage systems, and features
similar storage system software running on qualified commodity hardware. This
solution provides the power of IBM storage systems on existing datacenter
resources, making it suitable for rapid deployment in a ‘build-your-own’ storage
infrastructure. The solution makes it possible to use any hardware for such
applications as development or test.
This software-defined storage system packages the major capabilities that make it
an outstanding solution for high-end enterprise environments. Three of its most
beneficial aspects are:
v Consistent high performance with optimization
v A simplified management experience due to an architecture that eliminates
many traditional planning, setup and maintenance chores
v Advanced features including snapshot, synchronous and asynchronous
replication, multi-tenancy, QoS, and support for open cloud standards.
Spectrum Accelerate runs as a virtual machine concurrently on several VMware
vSphere ESXi hypervisors, allowing the creation of a server-based storage area
network (SAN) from commodity hardware that includes x86-64 servers, Ethernet
switches, solid state drives (SSDs), and high-density disk drives. Running
alongside other virtual appliances on the same ESXi server, Spectrum Accelerate
works by efficiently grouping virtual nodes with the underlying physical disks and
spreading the data evenly across the nodes, creating a single, provisioning-ready
virtual array. It cost-effectively uses any standard data center network for both
inter-node and host connectivity.
Spectrum Accelerate supports any hardware configuration and components that
meet the minimal requirements, and requires no explicit hardware certification.
Scaling of nodes is linear and nondisruptive.
Each individual ESXi host, with its single Spectrum Accelerate virtual machine, acts
as a virtual storage system module that contains 6 to 12 physical disks for
Spectrum Accelerate use. Each storage node uses a 10-Gigabit Ethernet (10 GigE)
interconnection with the other Spectrum Accelerate storage nodes to create unique
data distribution capabilities and other advanced features.
The ESXi hosts can be connected to a vCenter server, although it is not a
requirement. If a vCenter server is used, the Spectrum Accelerate storage system
and disk resources can be visually monitored through vSphere Client.
After the Spectrum Accelerate storage system is up and running, it can be used for
storage provisioning over iSCSI, and can be managed with the dedicated IBM
Management Tools (CLI or GUI) or through REST API.
Features and functionality
Spectrum Accelerate is characterized by an advanced set of storage capabilities and
features.
Performance
v Cache and disks in every module
v Extremely fast rebuild time in the event of disk failure
v Constant, predictable high performance that scales linearly with added
storage enclosures with zero tuning
v The use of flash media provides a superior cache hit ratio, as well as
extended cache across all volumes. This boosts performance while
eliminating the need to manage tiers
Agility
v Deployment of scale-out storage grids in automated environments in
minutes rather than days
v Seamless operation across delivery models—on commodity servers in
private cloud, with the optimized storage system, and on public cloud
infrastructure
v Ability to re-purpose servers at any time to improve utilization
Quality of Service (QoS)
v Ability to restrict the performance associated with selected tenants (in a
multi-tenant setting), storage pools, or hosts
v Ability to establish different performance tiers without a need for
physical tiering
v Sustainable high performance without any manual or
system-background tuning
Reliability
v Resilience during hardware failures, ability to continue functioning with
minimal performance impact
v Data mirroring guarantees that the data is always protected against
possible failure
v Fault tolerance, failure analysis, and self-healing algorithms
v No single-point-of-failure
Connectivity
v iSCSI interface
v Multiple host access
Multi-tenancy
v Allocation of storage resources to several independent administrators,
assuring that one administrator cannot access resources associated with
another administrator
v Isolation of tenants; storage domain administrators are not informed of
resources outside their storage domain
Hyper-Scale Manager
v Easy-to-use Graphical User Interface (GUI) management dashboard
based on the IBM Management Tool
v Compliance with any browser-enabled device, from desktops to iOS and
Android mobile devices
IBM Hyper-Scale Consistency
v Support of cross-system consistency
v Coordinated snapshots across independent Spectrum Accelerate and
storage systems
v Full data protection across multiple Spectrum Accelerate and storage
systems
Snapshots
v Innovative snapshot functionality, including support for practically
unlimited number of snapshots, snap-of-snap and restore-from-snap
Replication
v Synchronous and asynchronous replication of a volume (as well as a
consistency group) to a remote system
Ease of management
v Standard data management across the data center
v Tune-free, scaling enables management of large, dynamic storage
capacities with minimal overhead and training
v Non-disruptive maintenance and upgrades
v Management software with graphical user interface (GUI), the IBM
Hyper-Scale Manager, and a command-line interface (CLI)
v A mobile dashboard accessible from any browser-enabled device, from
desktops to iOS and Android mobile devices
v Notifications of events delivered through e-mail, SNMP, or SMS
messages
Hardware
For information on hardware requirements, consult the IBM Spectrum Accelerate
Planning, Deployment, and Operation Guide.
Management options
Spectrum Accelerate provides several management options.
GUI, CLI, REST API, and OpenStack management applications
Like other IBM Spectrum Storage offerings, Spectrum Accelerate includes an
intuitive, easy-to-use Graphical User Interface (GUI) management
dashboard, and integrates with IBM Spectrum Control for consolidated
management. The IBM Spectrum Accelerate GUI, called the IBM
Hyper-Scale Manager, can run on any browser-enabled device, from
desktops to iOS and Android mobile devices.
An advanced management CLI fully supports scripting and automation.
Web service APIs adhere to the Representational State Transfer (REST)
architecture.
Spectrum Accelerate also supports OpenStack, open source software for
creating public and private clouds.
SNMP
Third-party SNMP-based monitoring tools are supported using Spectrum
Accelerate MIB.
E-mail notifications
Spectrum Accelerate can notify users, applications or both through e-mail
messages regarding failures, configuration changes, and other important
information.
SMS notifications
Users can be notified through SMS of any system event.
Reliability
Spectrum Accelerate reliability features include data mirroring, spare storage
capacity, self-healing mechanisms, and data virtualization.
Data mirroring
Data arriving from the host for storage is temporarily placed in two separate
caches before it is permanently written to two disk drives located in separate
modules.
This guarantees that the data is always protected against possible failure of
individual modules, and this protection is in effect even before data has been
written to the nonvolatile disk media.
Self-healing mechanisms
Spectrum Accelerate includes built-in functions for self-healing to take care of
individual component malfunctions and to automatically restore full data
redundancy in the system within minutes.
Self-healing functions in Spectrum Accelerate increase the level of reliability of
your stored data. Automatic restoration of data redundancy after hardware
failures, class-leading rebuild speed and smart ‘call home’ support help ensure
reliability and performance at all times with minimal human effort.
Self-healing mechanisms are started not only reactively, following an individual
component malfunction, but also proactively, upon detection of conditions that
indicate the potential imminent failure of a component. Often, potential problems
are identified well before they occur, with the help of advanced preventive
self-analysis algorithms that run continually in the background.
In all cases, self-healing mechanisms implemented in Spectrum Accelerate identify
all data portions in the system for which a second copy has been corrupted or is in
danger of being corrupted. Spectrum Accelerate creates a secure second copy out
of the existing copy, and stores it in the most appropriate part of the system.
Taking advantage of the full data virtualization, and based on the data distribution
schemes implemented in Spectrum Accelerate, such processes are completed with
minimal data migration.
As with all other processes in the system, the self-healing mechanisms are
completely transparent to the user, and the regular activity of responding to I/O
data requests is fully maintained with no degradation to system performance.
Performance, load balance, and reliability are never compromised by this activity.
Protected cache
Spectrum Accelerate cache writes are protected. Cache memory on a module is
protected with error correction coding (ECC).
All write requests are written to two separate cache modules before the host is
acknowledged. The data is later de-staged to disks.
Performance
Spectrum Accelerate is a high performance software-defined storage product
designed to help enterprises overcome storage challenges through an exceptional
mix of characteristics and capabilities.
Breakthrough architecture and design
The design of Spectrum Accelerate enables performance optimization
typically unattainable by traditional architectures. This optimization results
in superior utilization of system resources and automatic workload
distribution across all system hard drives. It also empowers administrators
to tap into the system’s rich set of built-in, advanced functionality such as
thin provisioning, mirroring and snapshots without adversely affecting
performance.
Consistent, predictable performance and scalability
Spectrum Accelerate can optimize load distribution across all disks for all
workloads, coupled with a powerful distributed cache implementation.
This facilitates high performance that scales linearly with added storage
enclosures. Because this high performance is consistent—without the need
for manual tuning—users can enjoy the same high performance during the
typical peaks and troughs associated with volume and snapshot usage
patterns, even after a component failure.
Resilience and self-healing
Spectrum Accelerate maintains resilience during hardware failures,
continuing to function with minimal performance impact. Additionally, the
solution’s advanced self-healing capabilities allow it to withstand
additional hardware failures once it recovers from the initial failure.
Automatic optimization and management
Unlike traditional storage solutions, Spectrum Accelerate automatically
optimizes data distribution when the hardware configuration changes, such
as after component additions, replacements, or failures. This helps eliminate
the need for manual tuning or optimization.
Functionality
Spectrum Accelerate functions include point-in-time copying, automatic
notifications, and ease of management.
Snapshot management
Spectrum Accelerate provides powerful snapshot mechanisms for creating
point-in-time copies of volumes.
The snapshot mechanisms include the following features:
v Differential snapshots, where only the data that differs between the source
volume and its snapshot consumes storage space
v Instant creation of a snapshot without any interruption of the application,
making the snapshot available immediately
v Writable snapshots, which can be used for a testing environment; storage space
is only required for actual data changes
v The ability to take a snapshot of a writable snapshot
v High performance that is independent of the number of snapshots or volume
size
v The ability to restore from snapshot to volume or snapshot
Consistency groups for snapshots
Volumes can be put in a consistency group to facilitate the creation of consistent
point-in-time snapshots of all the volumes in a single operation.
This is essential for applications that use several volumes concurrently and need a
consistent snapshot of all these volumes at the same point in time.
Storage pools
Storage pools are used to administer the storage resources of volumes and
snapshots.
The storage space can be administratively portioned into storage pools to enable
the control of storage space consumption for specific applications or departments.
Remote monitoring and diagnostics
Spectrum Accelerate can email important system events to IBM Support.
This allows IBM to immediately detect hardware failures warranting immediate
attention and react swiftly (for example, dispatch service personnel). Additionally,
IBM support personnel can conduct remote support and generate diagnostics for
both maintenance and support purposes. All remote support is subject to customer
permission and remote support sessions are protected with a challenge response
security mechanism.
SNMP
Third-party SNMP-based monitoring tools are supported for the Spectrum
Accelerate MIB.
Multipathing
The parallel design of the Host Interface modules and the full data virtualization
achieved in the system implement thorough multipathing access algorithms.
Because the host connects to the system through several independent ports, each
volume can be accessed directly through any of the Host Interface modules, and
no interaction has to be established across the various modules of the Host
Interface array.
Automatic event notifications
The system can be set to automatically transmit appropriate alarm notification
messages through SNMP traps or e-mail messages.
The user can configure various triggers for sending events and various destinations
depending on the type and severity of the event. The system can also be
configured to send notifications until a user acknowledges their receipt.
Management through GUI and CLI
Spectrum Accelerate provides the user-friendly and intuitive XIV GUI application
and CLI commands to configure and monitor the system.
These feature the same comprehensive system management functionality as XIV,
encompassing hosts, volumes, consistency groups, storage pools, snapshots,
mirroring relationships, events, and more.
External replication mechanisms
External replication and mirroring mechanisms in Spectrum Accelerate are an
extension of the internal replication mechanisms and of the overall functionality of
the system.
These features provide protection against a site disaster to ensure production
continues. The mirroring can be performed over iSCSI connections, and the
host-to-storage protocol is independent of the mirroring protocol.
Support for solid-state drive (SSD) caching
Solid-state drive (SSD) caching, available as an option, provides up to four times
faster performance for application workloads, without the need for setup,
administration, or migration policies.
The SSD extended caching option adds 500 to 800 GB of read cache capacity to
each module. For example, adding 500 GB of read cache capacity to each module
in a fully populated configuration (15 modules) creates a total of 7.5 TB.
Spectrum Accelerate manages the flash caching; there is nothing that the storage
administrator must configure. The storage administrator can enable or disable the
extended flash cache at the system level or on a per-volume basis. The software
dynamically uses the flash as an extended read cache to boost application
performance.
Flash caching with SSDs provides a significant advantage over tiering with SSDs.
Tiering with SSDs limits caching of data sets to specific applications, requires
constant analysis and frequent writing from cache to disk, and can involve
rebalancing of SSD resources to suit evolving workloads.
SSD caching, on the other hand, brings improved performance to all applications
served by the storage system without the planning complexities and resources
required by SSD tiering.
Finally, the Spectrum Accelerate SSD caching design provides administrators with
the flexibility to define the applications they would like to accelerate should they
wish to single out particular workloads. Although by default the cache is made
available to all applications, it may be easily restricted to select volumes if desired;
volumes containing logs, history data, large images or inactive data can be
excluded. Ultimately, this means that the SSD cache can store more dynamic data.
Upgradability
Spectrum Accelerate is available as a partial rack system composed of as few as
three (3) modules, or as many as fifteen (15) modules per rack.
Partial rack systems may be upgraded by adding data and interface modules, up
to the maximum of fifteen (15) modules per rack.
The system supports non-disruptive upgrades, as well as hotfix updates.
Chapter 2. Connectivity
This chapter describes the way the storage system connects internally and
externally.
IP and Ethernet connectivity
Introduces various configuration options of the storage system.
Host system attachment
Introduces various topics regarding the way the storage system connects to
its hosts.
IP and Ethernet connectivity
The following topics provide a basic explanation of the various Ethernet ports and
IP interfaces that can be defined, and the various configurations that are possible
within Spectrum Accelerate.
The Spectrum Accelerate IP connectivity provides:
v iSCSI services over IP or Ethernet networks
v Management communication
Ethernet ports
The following types of Ethernet ports are supported.
iSCSI service ports
These ports are used for iSCSI over IP or Ethernet services. A fully
equipped rack is configured with six Ethernet ports for iSCSI service.
These ports should connect to the user's IP network and provide
connectivity to the iSCSI hosts. The iSCSI ports can also accept
management connections.
Management ports
These ports are dedicated for CLI and GUI communications, as well as
being used for outgoing SNMP and SMTP connections. A fully equipped
rack contains three management ports.
Interconnect ports
These ports are used for intra-cluster communication. They are configured
when the system is first deployed. This connectivity is critical for the
functionality of the system.
Management connectivity
Management connectivity is used for the following functions.
v Spectrum Accelerate uses the XIV Management Tools with IBM Hyper-Scale
Manager - an advanced web-based graphical user interface (GUI) from which
one or more IBM Spectrum Accelerate™ family systems can be managed and
monitored in real time from a web browser. The management dashboard can be
run on any browser enabled device, from desktops to iOS and Android mobile
devices.
v Executing XIV CLI commands through the IBM XIV command-line interface
(XCLI)
v Sending e-mail notification messages and SNMP traps about event alerts
To ensure management redundancy in case of module failure, in addition to the
IBM Hyper-Scale Manager dashboard, Spectrum Accelerate supports management
functions that are accessible from three different IP addresses. Each of the three IP
addresses is handled by a different hardware module. The various IP addresses are
transparent to the user and management functions can be performed through any
of the IP addresses. These addresses can be accessed simultaneously by multiple
clients. Users only need to configure the IBM Hyper-Scale Manager or XCLI for the
set of IP addresses that are defined for the specific system. Spectrum Accelerate
also features on-the-go management through a special Mobile Dashboard that
works with Apple iOS and Android devices.
Note: All management IP interfaces must be connected to the same subnet and use
the same network mask, gateway, and MTU.
IBM Hyper-Scale Manager dashboard
Like other IBM Spectrum Storage offerings, IBM Spectrum Accelerate™ includes the
IBM Hyper-Scale Manager, which is based on the XIV Management Tool (GUI) and
can integrate with IBM Spectrum Control Base Edition (SCBE) for consolidated
management. The IBM Hyper-Scale Manager can run on any browser-enabled
device, from desktops to iOS and Android mobile devices, letting clients manage
technical and administrative operations through a mobile dashboard at the tap of a
screen. In the era of real-time data management, mobile management of storage
can help reduce storage downtime, data overload, over-provisioning, and
application disruption.
XCLI and IBM Hyper-Scale Manager system management
The Spectrum Accelerate management connectivity system allows users to manage
the system from both the XCLI and IBM Hyper-Scale Manager. Accordingly, the
XCLI and IBM Hyper-Scale Manager can be configured to manage the system
through iSCSI IP interfaces. Both XCLI and IBM Hyper-Scale Manager
management runs over TCP port 7778, with all traffic encrypted through the
Secure Sockets Layer (SSL) protocol.
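As an illustration only (the management IP address below is hypothetical, and
certificate verification is relaxed for the sketch), the following Python snippet
opens a TLS-wrapped connection to the management port. It shows only that the
channel on TCP port 7778 is SSL-encrypted; the XCLI message protocol itself is
not shown.

    import socket
    import ssl

    MGMT_IP = "192.0.2.10"   # hypothetical management IP address
    MGMT_PORT = 7778         # XCLI / IBM Hyper-Scale Manager management port

    # The system may present a self-signed certificate, so verification
    # is relaxed here purely for demonstration purposes.
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE

    with socket.create_connection((MGMT_IP, MGMT_PORT), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=MGMT_IP) as tls:
            print("Encrypted management session established:", tls.version())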
System-initiated IP communication
IBM storage systems can also initiate IP communications to send event alerts as
necessary. Two types of system-initiated IP communications exist:
Sending e-mail notifications through the SMTP protocol
E-mails are used for both e-mail notifications and for SMS notifications
through the SMTP to SMS gateways.
Sending SNMP traps
Note: SMTP and SNMP communications can be initiated from any of the
three IP addresses. This differs from XCLI and IBM Hyper-Scale Manager
communications, which are user initiated. Accordingly, it is important to configure
all three IP interfaces and to verify that they have network connectivity.
Interconnect connectivity
Interconnect connectivity is used for all communication between system modules.
This includes:
v Data traffic
v Cluster monitoring
v Housekeeping operations
Host system attachment
Spectrum Accelerate attaches to hosts of various operating systems.
The Spectrum Accelerate system can be attached to hosts through complementary
Host Attachment Kit (HAK) utilities. For more information, see 'Platform and
application integration'.
Note: The term host system attachment was previously known as host connectivity or
mapping.
Dynamic rate adaptation
Spectrum Accelerate provides a mechanism for handling insufficient bandwidth
on external connections for the mirroring process.
The mirroring process replicates a local site on a remote site (see the Chapter 6,
“Synchronous remote mirroring,” on page 39 and Chapter 7, “Asynchronous
remote mirroring,” on page 51 chapters later in this document). To accomplish this,
the process depends on the availability of bandwidth between the local and remote
storage systems.
The mirroring process sync rate attribute determines the bandwidth that is
required for successful mirroring. When manually configuring this attribute, the
user takes into account the bandwidth that is available for the mirroring process,
and Spectrum Accelerate adjusts itself to that bandwidth. In some cases, however,
the bandwidth is sufficient but external I/O latency causes the mirroring process
to fall behind incoming I/Os, to repeat replication jobs that were already carried
out, and eventually to under-utilize the available bandwidth even when it was
adequately allocated.
Spectrum Accelerate prevents I/O timeouts by continuously measuring I/O
latency. Excess incoming I/Os are pre-queued until they can be submitted, and
the mirroring rate dynamically adapts to the number of pre-queued incoming
I/Os, allowing for smooth operation of the mirroring process.
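The following toy Python model is not taken from this document; it only
illustrates the general idea described above: excess incoming I/Os are pre-queued,
and the replication rate backs off as the backlog grows.

    from collections import deque

    class MirrorRateAdapter:
        """Toy model: adapt the mirroring rate to the pre-queue depth."""

        def __init__(self, max_rate_mbps: float):
            self.max_rate_mbps = max_rate_mbps
            self.pre_queue = deque()   # excess incoming I/Os wait here

        def enqueue(self, io) -> None:
            self.pre_queue.append(io)  # pre-queue instead of timing out

        def current_rate(self) -> float:
            # The deeper the backlog, the more the mirroring rate backs off,
            # letting host I/O proceed smoothly (illustrative formula only).
            backlog = len(self.pre_queue)
            return self.max_rate_mbps / (1 + backlog / 100)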
Attaching volumes to hosts
While Spectrum Accelerate identifies volumes and snapshots by name, hosts
identify volumes and snapshots according to their logical unit number (LUN).
A LUN (logical unit number) is an integer that is used when attaching a system's
volume to a registered host. Each host can access some or all of the volumes and
snapshots on the storage system, up to a set maximum. Each accessed volume or
snapshot is identified by the host through a LUN.
For each host, a LUN identifies a single volume or snapshot. However, different
hosts can use the same LUN to access different volumes or snapshots.
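A simple Python sketch with hypothetical host and volume names illustrates this
per-host numbering: each host has its own LUN-to-volume table, so the same
LUN can resolve to different volumes on different hosts.

    # Hypothetical per-host LUN maps; volume names are internal to the system.
    lun_maps = {
        "host_a": {1: "vol_db_data", 2: "vol_db_logs"},
        "host_b": {1: "vol_web_content"},  # LUN 1 means something else here
    }

    def resolve(host: str, lun: int) -> str:
        """Return the volume that a given host reaches through a given LUN."""
        return lun_maps[host][lun]

    # The same LUN resolves to different volumes for different hosts.
    assert resolve("host_a", 1) != resolve("host_b", 1)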
Excluding LUN0
LUN0 cannot be used as a normal LUN.
LUN0 can be mapped to a volume just like other LUNs. However, when no
volume is mapped to LUN0, the IBM XIV Host Attachment Kit (HAK) uses it
to discover the LUN array. Hence, it is recommended not to use LUN0 as a
normal LUN.
Advanced host attachment
Spectrum Accelerate provides flexible host attachment options.
The following host attachment options are available:
v Definition of different volume mappings for different ports on the same host
v Support for hosts that have iSCSI ports.
CHAP authentication of iSCSI hosts
The MS-CHAP extension enables authentication of initiators (hosts) to Spectrum
Accelerate, and vice versa, in unsecured environments.
When CHAP support is enabled, hosts are securely authenticated by Spectrum
Accelerate. This increases overall system security by verifying that only
authenticated parties are involved in host-storage interactions.
Definitions
The following definitions apply to authentication procedures:
CHAP Challenge Handshake Authentication Protocol
CHAP authentication
An authentication process of an iSCSI initiator by a target through
comparing a secret hash that the initiator submits with a computed hash of
that initiator's secret which is stored on the target.
Initiator
The host.
Oneway (unidirectional CHAP)
CHAP authentication where initiators are authenticated by the target, but
not vice versa.
Supported configurations
CHAP authentication type
Oneway (unidirectional) authentication mode, meaning that the Initiator
(host) has to be authenticated by the Spectrum Accelerate.
MD5
CHAP authentication utilizes the MD5 hashing algorithm.
Access scope
CHAP-authenticated Initiators are granted access to the Spectrum
Accelerate via mapping that may restrict access to some volumes.
Authentication modes
Spectrum Accelerate supports the following authentication modes:
None (default)
In this mode, an initiator is not authenticated by the Spectrum Accelerate.
CHAP (oneway)
In this mode, an initiator is authenticated by the Spectrum Accelerate
system, based on the hash that the initiator submits, which is compared to
the hash computed from the initiator's secret that is stored on the storage
system.
Changing the authentication mode from None to CHAP requires authentication of
the host. Changing the mode from CHAP to None does not require
authentication.
Complying with RFC 3720
Spectrum Accelerate CHAP authentication complies with the CHAP requirements
as stated in RFC 3720 (http://tools.ietf.org/html/rfc3720).
Secret length
The secret has to be between 96 bits and 128 bits; otherwise, the system
fails the command, responding that the requirements are not fulfilled.
Initiator secret uniqueness
Upon defining or updating an initiator (host) secret, the system compares
the entered secret's hash with existing secrets stored by the system and
determines whether the secret is unique. If it is not unique, the system
presents a warning to the user, but does not prevent the command from
completing successfully.
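A minimal Python sketch of the secret-length rule only (hashing and comparison
are not shown):

    def validate_chap_secret(secret: bytes) -> bytes:
        """Reject a CHAP secret whose length is outside 96-128 bits."""
        bits = len(secret) * 8
        if not 96 <= bits <= 128:
            raise ValueError(
                f"CHAP secret must be 96-128 bits (12-16 bytes); got {bits} bits")
        return secret

    validate_chap_secret(b"0123456789ab")   # 96 bits: accepted
    # validate_chap_secret(b"too-short")    # 72 bits: would raise ValueError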
Clustering hosts into LUN maps
To enhance the management of hosts, Spectrum Accelerate allows hosts to be
clustered together, where the clustered hosts are provided with identical mappings.
The mapping of volumes to LUN identifiers is defined per cluster and applies to
all of the hosts in the cluster.
Adding a host to a cluster
Adding a host to a cluster is a straightforward action in which a host is
added to a cluster and is connected to a LUN, by either:
v Changing the host's mapping to be identical to the cluster's mapping.
v Changing the cluster's mapping to be identical to the mapping of the
newly added host.
Removing a host from a cluster
The host is disbanded from the cluster, maintaining its connection to the
LUN:
v The host's mapping remains identical to the mapping of the cluster.
v The mapping definitions do not revert to the host's original mapping
(the mapping that was in effect before the host was added to the
cluster).
v The host's mapping can be changed.
Note:
v Spectrum Accelerate defines the same mapping for all of the hosts of the same
cluster. No hierarchy of clusters is maintained.
v A volume cannot be mapped to a LUN that is already mapped to a volume.
v An already mapped volume cannot be mapped to another LUN.
Volume mappings exceptions
Spectrum Accelerate facilitates association of cluster mappings to a host that is
added to a cluster.
The system also facilitates easy specification of mapping exceptions for such host;
such exceptions are warranted to accommodate cases where a host must have a
mapping that is not defined for the cluster (e.g., Boot From SAN).
Mapping a volume to a host within a cluster
It is impossible to map a volume or a LUN that is already mapped.
For example, the host host1 belongs to the cluster cluster1, which has a
mapping of the volume vol1 to lun1:
1. Mapping host1 to vol1 and lun1 fails, as both the volume and the LUN
are already mapped.
2. Mapping host1 to vol2 and lun1 fails, as the LUN is already mapped.
3. Mapping host1 to vol1 and lun2 fails, as the volume is already mapped.
4. Mapping host1 to vol2 and lun2 succeeds, with a warning that the
mapping is host-specific (see the sketch after this list).
Listing volumes that are mapped to a host/cluster
Mapped Hosts that are part of a Cluster are listed (that is, the list is at a
Host-level rather than Cluster-level).
Listing mappings
For each Host, the list indicates whether it belongs to a Cluster.
Adding a host to a cluster
Previous mappings of the Host are removed, reflecting the fact that the
only relevant mapping to the Host is the Cluster's.
Removing a host from a cluster
The Host regains its previous mappings.
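The conflict rules from the mapping example above can be summarized in a short
Python sketch; the function and data structures are hypothetical stand-ins for the
system's internal bookkeeping.

    def map_volume(cluster_map: dict, host_map: dict, vol: str, lun: int) -> None:
        """Apply the conflict rules for mapping vol to lun for a clustered host.

        cluster_map and host_map are LUN -> volume tables; the effective
        mapping seen by the host is the cluster mapping plus its exceptions.
        """
        effective = {**cluster_map, **host_map}
        if lun in effective:
            raise ValueError(f"LUN {lun} is already mapped to {effective[lun]}")
        if vol in effective.values():
            raise ValueError(f"volume {vol} is already mapped")
        host_map[lun] = vol   # a host-specific mapping exception
        print(f"warning: mapping of {vol} to LUN {lun} is host-specific")

    cluster1 = {1: "vol1"}   # the cluster maps vol1 to lun1
    host1 = {}               # host1's mapping exceptions
    map_volume(cluster1, host1, "vol2", 2)   # succeeds, with a warning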
Support for VMware extended operations
Spectrum Accelerate supports VMware extended operations (VMware vStorage
APIs).
The purpose of the VMware extended operations is to offload operations from the
VMware Server onto the storage system. Spectrum Accelerate supports the
following operations:
Full copy
The ability to copy data from one storage array to another without writing
to the ESXi server.
Block zeroing
Zeroing out a block as a means of freeing it and making it available for
provisioning.
Hardware-assisted locking
Allowing for locking volumes within an atomic command.
Writing zeroes
The Write Zeroes command allows for zeroing large storage areas without sending
the zeroes themselves.
Whenever a new VM is created, the ESXi server creates a huge file full of zeroes
and sends it to the storage system. The Write Zeroes command is a way to tell a
storage controller to zero large storage areas without sending the zeroes. To meet
this goal, both VMware's generic driver and the IBM plug-in utilize the WRITE
SAME (16) command.
This method differs from the former method where the host used to write and
send a huge file full of zeroes.
Note: The write zeroes operation is not a thin provisioning operation, as its
purpose is not to allocate storage space.
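To illustrate why this saves bandwidth, the following Python sketch builds a
WRITE SAME (16) command descriptor block (opcode 0x93; field layout per SBC-3
as understood here, not taken from this document). The host sends this 16-byte
CDB plus a single block of zeroes instead of streaming zeroes for every block in
the range.

    import struct

    def write_same_16_cdb(lba: int, num_blocks: int) -> bytes:
        """Build a WRITE SAME (16) CDB covering num_blocks starting at lba."""
        return struct.pack(
            ">BBQIBB",
            0x93,        # WRITE SAME (16) operation code
            0x00,        # flags byte (UNMAP and other bits left clear)
            lba,         # starting logical block address, bytes 2-9
            num_blocks,  # number of logical blocks, bytes 10-13
            0x00,        # group number
            0x00,        # control byte
        )

    cdb = write_same_16_cdb(lba=0, num_blocks=1 << 20)  # ~512 MB of zeroes
    assert len(cdb) == 16   # one small CDB instead of a huge zero-filled stream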
Hardware-assisted locking
The hardware-assisted locking feature utilizes VMware's new Compare and Write
command for reading and writing the volume's metadata within a single operation.
With VMware's replacement of the SCSI-2 reservations mechanism by Compare
and Write, Spectrum Accelerate provides a faster way to change a specific
metadata file, while eliminating the need to lock all of the files during the
metadata change.
The legacy VMware SCSI-2 reservations mechanism is utilized whenever the VM
server performs a management operation, that is, handles the volume's metadata.
This method has several disadvantages, among them the mandatory overall lock of
access to all volumes, which means that all other servers are prevented from
accessing their own files. In addition, the SCSI-2 reservations mechanism entails
performing at least four SCSI operations (reserve, read, write, release) in order to
obtain the lock.
The introduction of the new SCSI command, called Compare and Write (SBC-3,
revision 22), results in a faster mechanism that appears to the volume as an
atomic action and does not require locking any other volume.
Note: Spectrum Accelerate supports single-block Compare and Write
commands only. This restriction is in accordance with VMware.
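For comparison with the WRITE SAME sketch above, a COMPARE AND WRITE
CDB can be sketched the same way (opcode 0x89; layout per SBC-3 as understood
here, not taken from this document). The number-of-blocks field is fixed at 1,
reflecting the single-block restriction noted above.

    import struct

    def compare_and_write_cdb(lba: int) -> bytes:
        """Build a single-block COMPARE AND WRITE CDB (SBC-3 opcode 0x89)."""
        cdb = bytearray(16)
        cdb[0] = 0x89                        # COMPARE AND WRITE operation code
        struct.pack_into(">Q", cdb, 2, lba)  # logical block address, bytes 2-9
        cdb[13] = 1                          # number of blocks: single block only
        return bytes(cdb)

    assert len(compare_and_write_cdb(1234)) == 16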
Backwards compatibility
Spectrum Accelerate maintains its compatibility with older ESX versions as
follows:
v Each volume is capable of serving legacy hosts, as it still supports SCSI
reservations.
v Whenever a volume is blocked by the legacy SCSI reservations mechanism, it is
not available to an arriving COMPARE AND WRITE command.
v The administrator is expected to phase out legacy VM servers to fully benefit
from the performance improvement rendered by the hardware-assisted locking
feature.
Fast copy
The Fast Copy functionality allows for VM cloning on the storage system without
going through the ESXi server.
The Fast copy functionality speeds up the VM cloning operation by copying data
inside the storage system, rather than issuing READ and WRITE requests from the
host. This implementation provides a great improvement in performance, because
it saves host-to-storage-system communication. Instead, the functionality utilizes
the huge bandwidth within the storage system.
QoS performance classes
Spectrum Accelerate allows the user to allocate higher I/O rates to important
applications.
The QoS Performance Classes feature allows the user to restrict I/O for specified
hosts, pools, or tenants, thereby maximizing performance for other applications
that are considered more important, by prioritizing their hosts and without
incurring data movement. Each of the hosts that are connected to the storage
system is associated with a group, and this group is attributed with a rate
limitation. The limitation attribute and the association of hosts with the group
limit the I/O rates of a specified host in the following ways:
v Host rate limitation groups are independent of other forms of host grouping
(for example, clusters)
v A group can be associated with an unlimited number of hosts
v By default, a host is not associated with any host rate limiting group
Max bandwidth limit attribute
The host rate limitation group has a max bandwidth limit attribute, which is the
number of blocks per second. This number can be either:
v A value between min_rate_limit_bandwidth_blocks_per_sec and
max_rate_limit_bandwidth_blocks_per_sec (both are available from the storage
system's configuration).
v Zero (0) for unlimited bandwidth.
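A minimal Python sketch of this validation rule follows; the parameter names echo
the configuration names quoted above, but the function itself is illustrative only.

    def validate_bandwidth_limit(limit_blocks_per_sec: int,
                                 min_limit: int,
                                 max_limit: int) -> int:
        """Return a valid max-bandwidth value for a rate limitation group.

        min_limit and max_limit stand in for the system's configured
        min_rate_limit_bandwidth_blocks_per_sec and
        max_rate_limit_bandwidth_blocks_per_sec values.
        """
        if limit_blocks_per_sec == 0:
            return 0   # zero means unlimited bandwidth
        if not min_limit <= limit_blocks_per_sec <= max_limit:
            raise ValueError("limit must be 0 or within the configured range")
        return limit_blocks_per_sec

    validate_bandwidth_limit(0, 1, 100000)   # returns 0: unlimited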
Chapter 3. Storage pools
Spectrum Accelerate partitions the storage space into storage pools, where each
volume belongs to a specific storage pool.
Storage pools provide the following benefits:
Improved management of storage space
Specific volumes can be grouped together in a storage pool. This enables
you to control the allocation of a specific storage space to a specific group
of volumes. This storage pool can serve a specific group of applications, or
the needs of a specific department.
Improved regulation of storage space
Snapshots can be automatically deleted when the storage capacity that is
allocated for snapshots is fully consumed. This automatic deletion is
performed independently on each storage pool. Therefore, when the size
limit of the storage pool is reached, only the snapshots that reside in the
affected storage pool are deleted. For more information, see “The snapshot
auto-delete priority” on page 26.
Facilitating thin provisioning
Thin provisioning is enabled by Storage Pools.
Storage pools as logical entities
A storage pool is a logical entity and is not associated with a specific disk or
module. All storage pools are equally spread over all disks and all modules in the
system.
As a result, there are no limitations on the size of storage pools or on the
associations between volumes and storage pools. For example:
v The size of a storage pool can be decreased, limited only by the space consumed
by the volumes and snapshots in that storage pool.
v Volumes can be moved between storage pools without any limitations, as long
as there is enough free space in the target storage pool.
Note: For the size of the storage pool, please refer to the Spectrum Accelerate data
sheet.
All of the above transactions are accounting transactions, and do not impose any
data copying from one disk drive to another. These transactions are completed
instantly.
For information on volumes and snapshots, go to Chapter 4, “Volumes and
snapshots,” on page 21.
Moving volumes between storage pools
For a volume to be moved to a specific storage pool, there must be enough room
for it to reside there. If a storage pool is not large enough, the storage pool must be
resized, or other volumes must be moved out to make room for the new volume.
A volume and all its snapshots always belong to the same storage pool. Moving a
volume between storage pools automatically moves all its snapshots together with
the volume.
Protecting snapshots on a storage pool level
Snapshots that participate in the mirroring process can be protected in case of
storage pool space depletion.
This is done by attributing both snapshots (or snapshot groups) and the storage
pool with a deletion priority. The snapshots are attributed with a deletion priority
between 0 and 4, and the storage pool is configured to disregard snapshots whose
priority is above a specific value. Snapshots with a weaker delete priority (that is,
a higher number) than the configured value might be deleted by the system
whenever the pool space depletion mechanism requires it, thus protecting
snapshots with a priority equal to or stronger than this value.
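A small Python sketch of the protection rule, under the assumed convention that a
numerically higher delete priority means the snapshot is deleted sooner:

    from dataclasses import dataclass

    @dataclass
    class Snapshot:
        name: str
        delete_priority: int   # 0-4; higher number = deleted sooner (assumed)

    def deletion_candidates(snapshots, pool_protected_priority: int):
        """Snapshots above the pool's configured value may be auto-deleted;
        the rest are protected when pool space runs low."""
        return [s for s in snapshots
                if s.delete_priority > pool_protected_priority]

    snaps = [Snapshot("daily", 1), Snapshot("scratch", 4)]
    assert [s.name for s in deletion_candidates(snaps, 2)] == ["scratch"]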
Thin provisioning
Spectrum Accelerate supports thin provisioning, which provides the ability to
define logical volume sizes that are much larger than the physical capacity
installed on the system.
Physical capacity needs only to accommodate written data, while parts of the
volume that have never been written to do not consume physical space.
This chapter discusses:
v Volume hard and soft sizes
v System hard and soft sizes
v Pool hard and soft sizes
v Depletion of hard capacity
Volume hard and soft sizes
Without thin provisioning, the size of each volume is both seen by the hosts and
reserved on physical disks. Using thin provisioning, each volume is associated
with the following two sizes:
Hard volume size
This reflects the total size of volume areas that were written by hosts. The
hard volume size is not controlled directly by the user and depends only
on application behavior. It starts from zero at volume creation or
formatting and can reach the volume soft size when the entire volume has
been written. Resizing of the volume does not affect the hard volume size.
Soft volume size
This is the logical volume size that is defined during volume creation or
resizing operations. This is the size recognized by the hosts and is fully
configurable by the user. The soft volume size is the traditional volume
size used without thin provisioning.
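A minimal sketch of how the two sizes relate (illustrative only; the real accounting
is internal to the system): the hard size grows only as hosts write, is capped by the
soft size, and is unaffected by resizing.

    from dataclasses import dataclass

    @dataclass
    class ThinVolume:
        soft_size: int      # logical size, in blocks, as seen by hosts
        hard_size: int = 0  # physical space actually consumed by writes

        def write(self, blocks: int) -> None:
            # Writing to previously untouched areas consumes physical space,
            # up to (at most) the soft size.
            self.hard_size = min(self.soft_size, self.hard_size + blocks)

        def resize(self, new_soft_size: int) -> None:
            # Resizing changes only what hosts see, not the hard size.
            self.soft_size = new_soft_size

    vol = ThinVolume(soft_size=1000)
    vol.write(100)
    vol.resize(2000)
    assert vol.hard_size == 100   # resize did not affect the hard size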
System hard and soft size
Using thin provisioning, each Spectrum Accelerate is associated with a hard system
size and soft system size. Without thin provisioning, these two are equal to the
system's capacity. With thin provisioning, these concepts have the following
meaning:
Hard system size
This is the physical disk capacity that was installed. Obviously, the
system's hard capacity is an upper limit on the total hard capacity of all
the volumes. The system's hard capacity can only change by installing new
hardware components (disks and modules).
Soft system size
This is the total limit on the soft size of all volumes in the system. It can be
set to be larger than the hard system size, up to 79 TB. The soft system size
is a purely logical limit, but should not be set to an arbitrary value. It must
be possible to upgrade the system's hard size to be equal to the soft size,
otherwise applications can run out of space. This requirement means that
enough floor space should be reserved for future system hardware
upgrades, and that the cooling and power infrastructure should be able to
support these upgrades. Because of the complexity of these issues, the
setting of the system's soft size can only be performed by Spectrum
Accelerate support.
Pool hard and soft sizes
The concept of storage pool is also extended to thin provisioning. When thin
provisioning is not used, storage pools are used to define capacity allocation for
volumes. The storage pools control whether and which snapshots are deleted
when there is not enough space.
When thin provisioning is used, each storage pool has a soft pool size and a hard
pool size, which are defined and used as follows:
Hard pool size
This is the physical storage capacity allocated to volumes and snapshots in
the storage pool. The hard size of the storage pool limits the total of the
hard volume sizes of all volumes in the storage pool and the total of all
storage consumed by snapshots. Unlike volumes, the hard pool size is fully
configured by the user.
Soft pool size
This is the limit on the total soft sizes of all the volumes in the storage
pool. The soft pool size has no effect on snapshots.
Thin provisioning is managed for each storage pool independently. Each storage
pool has its own soft size and hard size. Resources are allocated to volumes within
this storage pool without any limitations imposed by other storage pools. This is a
natural extension of the snapshot deletion mechanism, which is applied even
without thin provisioning. Each storage pool has its own space, and snapshots
within each storage pool are deleted when the storage pool runs out of space
regardless of the situation in other storage pools.
The sum of all the soft sizes of all the storage pools is always the same as the
system's soft size and the same applies to the hard size.
Storage pools provide a logical way to allocate storage resources per application or
per groups of applications. With thin provisioning, this feature can be used to
manage both the soft capacity and the hard capacity.
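The accounting invariant stated above can be expressed as a short check, again as
a sketch over a hypothetical pool model:

    def check_pool_accounting(pools, system_soft: int, system_hard: int) -> None:
        """The pools' soft sizes must sum to the system soft size,
        and likewise for the hard sizes. Each pool is a (soft, hard) pair."""
        assert sum(soft for soft, hard in pools) == system_soft
        assert sum(hard for soft, hard in pools) == system_hard

    # Example: two pools partitioning a system of 100 soft / 60 hard units.
    check_pool_accounting([(70, 40), (30, 20)], system_soft=100, system_hard=60)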
Depletion of hard capacity
Thin provisioning creates the potential risk of depleting the physical capacity. If a
specific system has a hard size that is smaller than the soft size, the system will
run out of capacity when applications write to all the storage space that is mapped
to hosts. In such situations, the system behaves as follows:
Snapshot deletion
Snapshots are deleted to provide more physical space for volumes. The
snapshot deletion is based on the deletion priority and creation time.
Volume locking
If all snapshots have been deleted and more physical capacity is still
required, all the volumes in the storage pool are locked and no write
commands are allowed. This halts any additional consumption of hard
capacity.
Note: Space that is allocated to volumes but is unused (that is, the difference between the volume's soft and hard size) can be used by snapshots in the same storage pool.
The thin provisioning implementation with Spectrum Accelerate manages space
allocation per storage pool. Therefore, one storage pool cannot affect another
storage pool. This scheme has the following advantages and disadvantages:
Storage pools are independent
Storage pools are independent with respect to thin provisioning.
Thin provisioning volume locking on one storage pool does not create a
problem in another storage pool.
Space cannot be reused across storage pools
Even if a storage pool has free space, this free space is never reused for another storage pool. This can create a situation where volumes are locked due to the depletion of hard capacity in one storage pool, while capacity is still available in another storage pool.
Important: If a storage pool runs out of hard capacity, all of its volumes are locked
to all write commands. Although write commands that overwrite existing data can
be technically serviced, they are blocked to ensure consistency.
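The depletion behavior described above reduces to a two-stage procedure: free space by deleting snapshots, ordered by auto-delete priority and creation time, and lock the pool's volumes if that is not enough. The following sketch is a conceptual model only; all names are invented, and the assumption that a higher priority value means "delete sooner" is an illustration, not a statement about the product's encoding.

# Conceptual model of per-pool hard-capacity depletion handling.
from dataclasses import dataclass, field

@dataclass
class Snap:
    name: str
    delete_priority: int   # assumption: higher value = deleted sooner
    created_at: float      # creation time; older snapshots go first
    hard_space: int        # hard capacity consumed by this snapshot

@dataclass
class Pool:
    hard_size: int
    used: int
    snapshots: list = field(default_factory=list)
    volumes_locked: bool = False

    def free_hard_space(self):
        return self.hard_size - self.used

def reclaim_hard_capacity(pool, required):
    # Stage 1: delete snapshots, highest deletion priority first,
    # oldest first among snapshots of equal priority.
    for snap in sorted(pool.snapshots,
                       key=lambda s: (-s.delete_priority, s.created_at)):
        if pool.free_hard_space() >= required:
            return
        pool.snapshots.remove(snap)
        pool.used -= snap.hard_space
    # Stage 2: if snapshots alone cannot free enough space, lock all
    # volumes in the pool against writes, halting hard-space consumption.
    if pool.free_hard_space() < required:
        pool.volumes_locked = True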
Chapter 4. Volumes and snapshots
Volumes are the basic storage data units in Spectrum Accelerate.
Snapshots of volumes can be created, where a snapshot of a volume represents the
data on that volume at a specific point in time.
Volumes can also be grouped into larger sets called Consistency Groups and
Storage Pools.
The basic hierarchy may be described as follows:
v A volume can have multiple snapshots.
v A volume can be part of one and only one Consistency Group.
v A volume is always a part of one and only one Storage Pool.
v All volumes in a Consistency Group must belong to the same Storage Pool.
The following subsections deal with volumes and snapshots specifically.
The volume life cycle
The volume is the basic data container that is presented to the hosts as a logical
disk.
The term volume is sometimes used for an entity that is either a volume or a
snapshot. Hosts view volumes and snapshots through the same protocol.
Whenever required, the term master volume is used for a volume to clearly
distinguish volumes from snapshots.
Each volume has two configuration attributes: a name and a size. The volume
name is an alphanumeric string that is internal to Spectrum Accelerate and is used
to identify the volume to both the GUI and CLI commands. The volume name is
not related to the SCSI protocol. The volume size represents the number of blocks
in the volume that the host sees.
The volume can be managed by the following commands:
Create Defines the volume using the attributes you specify
Resize Changes the virtual capacity of the volume. For more information, see
“Thin provisioning” on page 18.
Copy
Copies the volume to an existing volume or to a new volume
Format
Clears the volume
Lock
Prevents hosts from writing to the volume
Unlock
Allows hosts to write to the volume
Rename
Changes the name of the volume, while maintaining all of the volume's previously defined attributes
Delete Deletes the volume. See Instant Space Reclamation.
The following query commands list volumes:
Listing Volumes
This command lists the details of all volumes, of a specific volume, or of the volumes in a specific storage pool.
Finding a Volume Based on a SCSI Serial Number
This command prints the volume name according to its SCSI serial
number.
These commands are available in both the IBM XIV Storage Management GUI and the IBM XIV command-line interface (XCLI). See the IBM
XIV Storage System XCLI User Manual for the commands that you can issue in the
XCLI.
Figure 1 shows the commands you can issue for volumes.
Figure 1. Volume operations
Support for Symantec Storage Foundation Thin Reclamation
Spectrum Accelerate supports Symantec's Storage Foundation Thin Reclamation
API.
Spectrum Accelerate features instant space reclamation functionality, enhancing the existing thin provisioning capability. The instant space reclamation function allows users to optimize capacity utilization, and thus save costs, by allowing supporting applications to instantly regain unused file system space in thin-provisioned volumes.
Spectrum Accelerate is one of the first high-end storage systems to offer instant space reclamation. This capability enables third-party vendor products, such as Symantec Thin Reclamation, to interlock with Spectrum Accelerate so that any unused space is detected instantly and automatically, and is immediately reassigned to the general storage pool for reuse.
This enables integration with the thin-provisioning-aware Veritas File System (VxFS) by Symantec, which lets you leverage the Spectrum Accelerate thin-provisioning awareness to attain higher savings in storage utilization.
For example, when data is deleted by the user, the system administrator can initiate a reclamation process in which Spectrum Accelerate frees the unutilized blocks, returning them to the pool of available storage.
Instant space reclamation doesn't support space reclamation for the following
objects:
v Mirrored volumes
v Volumes that have snapshots
v Snapshots
Snapshots
A snapshot is a logical volume reflecting the contents of a given source volume at a
specific point-in-time.
Spectrum Accelerate uses advanced snapshot mechanisms to create a virtually
unlimited number of volume copies without impacting performance. Snapshot
taking and management are based on a mechanism of internal pointers that allow
the master volume and its snapshots to use a single copy of data for all portions
that have not been modified.
This approach, also known as Redirect-on-Write (ROW), is an improvement over the more common Copy-on-Write (COW), which translates into a reduction of I/O actions and, therefore, of storage usage.
With Spectrum Accelerate snapshots, no storage capacity is consumed by the
snapshot until the source volume (or the snapshot) is changed.
Redirect on write
Spectrum Accelerate uses the Redirect-on-Write (ROW) mechanism.
The following items are characteristics of using ROW when a write request is
directed to the master volume:
1. The data originally associated with the master volume remains in place.
2. The new data is written to a different location on the disk.
3. After the write request is completed and acknowledged, the original data is
associated with the snapshot and the newly written data is associated with the
master volume.
In contrast with the traditional copy-on-write method, with redirect-on-write the
actual data activity involved in taking the snapshot is drastically reduced.
Moreover, if the size of the data involved in the write request is equal to the
system's slot size, there is no need to copy any data at all. If the write request is
smaller than the system's slot size, there is still much less copying than with the
standard approach of Copy-on-Write.
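The pointer mechanics can be made concrete with a small model. In the sketch below (illustrative only; the slot granularity and all names are assumptions, not the system's internal data structures), a volume and its snapshot are both tables that map a logical slot number to a physical location. Taking a snapshot copies only the pointer table, and a write simply points the volume's entry at a freshly allocated location.

# Minimal redirect-on-write model. Illustrative only.
physical_store = {}        # physical location -> data
next_location = 0

def allocate(data):
    global next_location
    physical_store[next_location] = data
    next_location += 1
    return next_location - 1

volume = {0: allocate("A"), 1: allocate("B")}   # two slots of data

# Taking a snapshot copies pointers, not data.
snapshot = dict(volume)

# A write to slot 0 is redirected to a new location; the original data
# stays in place and remains referenced by the snapshot.
volume[0] = allocate("A'")

assert physical_store[snapshot[0]] == "A"   # snapshot still sees old data
assert physical_store[volume[0]] == "A'"    # volume sees the new data
assert snapshot[1] == volume[1]             # unmodified slots stay shared

Because only a pointer table is copied, the cost of taking the snapshot does not depend on how much data the volume holds.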
In the following example of the Redirect-on-Write process, the volume is displayed with its data and the pointer to this data.
Figure 2. The Redirect-on-Write process: the volume's data and pointer
When a snapshot is taken, a new header is written first.
Figure 3. The Redirect-on-Write process: when a snapshot is taken the header is written first
The new data is written anywhere else on the disk, without the need to copy the
existing data.
Figure 4. The Redirect-on-Write process: the new data is written
The snapshot points at the old data, while the volume points at the new data (the data is regarded as new because it keeps being updated by I/Os).
Figure 5. The Redirect-on-Write process: the snapshot points at the old data while the volume points at the new data
The metadata established at the beginning of the snapshot mechanism is
independent of the size of the volume to be copied. This approach allows the user
to achieve the following important goals:
Continuous backup
As snapshots are taken, backup copies of volumes are produced at
frequencies that resemble those of Continuous Data Protection (CDP). Instant
restoration of volumes to virtually any point in time is easily achieved in
case of logical data corruption at both the volume level and the file level.
Productivity
The snapshot mechanism offers an instant and simple method for creating
short or long-term copies of a volume for data mining, testing, and
external backups.
Storage utilization
Spectrum Accelerate allocates space for volumes and their snapshots in such a way that when a snapshot is taken, additional space is actually needed only when the volume is written to.
As long as there is no actual writing to the volume, the snapshot does not need actual space. However, some applications write to the volume whenever a snapshot is taken. This writing mandates immediate space allocation for the new snapshot; hence, these applications use space less efficiently than other applications.
The snapshot auto-delete priority
Snapshots are associated with an auto-delete priority to control the order in which
snapshots are automatically deleted.
Taking volume snapshots gradually fills up storage space according to the amount
of data that is modified in either the volume or its snapshots. To free up space
when the maximum storage capacity is reached, the system can refer to the
auto-delete priority to determine the order in which snapshots are deleted. If
snapshots have the same priority, the snapshot that was created first is deleted
first.
Snapshot name and association
A snapshot can either be taken of a source volume, or from a source snapshot.
The name of a snapshot is either automatically assigned by the system at creation
time or given as a parameter of the XCLI command that creates it. The snapshot's
auto-generated name is derived from its volume's name and a serial number. The
following are examples of snapshot names:
MASTERVOL.snapshot_XXXXX
NewDB-server2.snapshot_00597
Parameter    Description                                   Example
MASTERVOL    The name of the volume.                       NewDB-server2
XXXXX        A five-digit, zero-filled snapshot number.    00597
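The auto-generated name can be reproduced with a one-line format (a sketch of the naming convention only; the actual serial-number assignment is internal to the system):

def snapshot_name(volume_name, serial):
    # <volume name>.snapshot_<five-digit, zero-filled serial number>
    return f"{volume_name}.snapshot_{serial:05d}"

print(snapshot_name("NewDB-server2", 597))   # NewDB-server2.snapshot_00597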
The snapshot lifecycle
The roles of the snapshot determine its life cycle.
Figure 6 on page 27 shows the life cycle of a snapshot.
Figure 6. The snapshot life cycle
The following operations are applicable for the snapshot:
Create Creates the snapshot (also referred to as taking a snapshot)
Restore
Copies the snapshot back onto the volume. The main snapshot
functionality is the capability to restore the volume.
Unlocking
Unlocks the snapshot to make it writable and sets the status to Modified.
Re-locking the unlocked snapshot disables further writing, but does not
change the status from Modified.
Duplicate
Duplicates the snapshot. Similar to the volume, which can be snapshotted
infinitely, the snapshot itself can be duplicated.
A snapshot of a snapshot
Creates a backup of a snapshot that was written into. Taking a snapshot of
a writable snapshot is similar to taking a snapshot of a volume.
Overwriting a snapshot
Overwrites a specific snapshot with the content of the volume.
Delete Deletes the snapshot.
Creating a snapshot
First, a snapshot of the volume is taken. The system creates a pointer to the
volume, hence the snapshot is considered to have been immediately created. This
is an atomic procedure that is completed in a negligible amount of time. At this
point, all data portions that are associated with the volume are also associated with
the snapshot.
Later, when a request arrives to read a certain data portion from either the volume
or the snapshot, it reads from the same single, physical copy of that data.
Throughout the volume life cycle, the data associated with the volume is
continuously modified as part of the ongoing operation of the system. Whenever a
request to modify a data portion on the master volume arrives, a copy of the
original data is created and associated with the snapshot. Only then is the volume modified. This way, the data originally associated with the volume at the time the snapshot was taken is associated with the snapshot, effectively reflecting the way the data was before the modification.
Locking and unlocking snapshots
Initially, a snapshot is created in a locked state, which prevents it from being
changed in any way related to data or size, and only enables the reading of its
contents. This is called an image snapshot, and it represents an exact replica of the master volume at the time the snapshot was created.
A snapshot can be unlocked after it is created. The first time a snapshot is
unlocked, the system initiates an irreversible procedure that puts the snapshot in a
state where it acts like a regular volume with respect to all changing operations.
Specifically, it allows write requests to the snapshot. This state is immediately set
by the system and brands the snapshot with a permanent modified status, even if
no modifications were performed. A modified snapshot is no longer an image
snapshot.
An unlocked snapshot is recognized by the hosts as any other writable volume. It
is possible to change the content of unlocked snapshots, however, physical storage
space is consumed only for the changes. It is also possible to resize an unlocked
snapshot.
Master volumes can also be locked and unlocked. A locked master volume cannot
accept write commands from hosts. The size of locked volumes cannot be
modified.
Duplicating image snapshots
A user can create a new snapshot by duplicating an existing snapshot. The
duplicate is identical to the source snapshot. The new snapshot is associated with
the master volume of the existing snapshot, and appears as if it were taken at the
exact moment the source snapshot was taken. For image snapshots that have never
been unlocked, the duplicate is given the exact same creation date as the original
snapshot, rather than the duplication creation date.
With this feature, a user can create two or more identical copies of a snapshot for
backup purposes, and perform modification operations on one of them without
sacrificing the usage of the snapshot as an untouched backup of the master
volume, or the ability to restore from the snapshot.
A snapshot of a snapshot
When duplicating a snapshot that has been changed using the unlock feature, the
generated snapshot is actually a snapshot of a snapshot. The creation time of the
newly created snapshot is when the command was issued, and its content reflects the contents of the source snapshot at the moment of creation.
After it is created, the new snapshot is viewed as another snapshot of the master
volume.
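The creation-time rule for duplicates can be summarized in a few lines. This sketch is illustrative, with invented names: a duplicate of a never-unlocked image snapshot inherits the source's creation time, while a duplicate of a modified snapshot is stamped with the time at which the duplicate command is issued.

import time
from dataclasses import dataclass

@dataclass
class Snapshot:
    master: str        # name of the associated master volume
    created_at: float  # creation timestamp
    modified: bool     # True once the snapshot has ever been unlocked

def duplicate(source):
    # An untouched image snapshot yields a duplicate that appears to have
    # been taken at the exact moment of the original; a modified snapshot
    # yields a snapshot-of-a-snapshot stamped with the current time.
    created = source.created_at if not source.modified else time.time()
    # Either way, the duplicate is associated with the same master volume.
    return Snapshot(master=source.master, created_at=created, modified=False)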
Restoring volumes and snapshots
The restoration operation provides the user with the ability to instantly recover the
data of a master volume from any of its locked snapshots.
Restoring volumes
A volume can be restored from any of its snapshots, whether locked or unlocked.
Performing the restoration replicates the selected snapshot onto the volume. As a
result of this operation, the master volume is an exact replica of the snapshot that
restored it. All other snapshots, old and new, are left unchanged and can be used
for further restore operations. A volume can even be restored from a snapshot that
has been written to. Figure 7 shows a volume being restored from three different
snapshots.
Figure 7. Restoring volumes
Restoring snapshots
The snapshot itself can also be restored from another snapshot. The restored
snapshot retains its name and other attributes. From the host perspective, this
restored snapshot is considered an instant replacement of all the snapshot content
with other content. Figure 8 on page 30 shows a snapshot being restored from two
different snapshots.
Figure 8. Restoring snapshots
Full Volume Copy
Full Volume Copy overwrites an existing volume, and at the time of its creation it is
logically equivalent to the source volume.
After the copy is made, both volumes are independent of each other. Hosts can
write to either one of them without affecting the other. This is somewhat similar to
creating a writable (unlocked) snapshot, with the following differences and
similarities:
Creation time and availability
Both Full Volume Copy and creating a snapshot happen almost instantly.
Both the new snapshot and volume are immediately available to the host.
This is because at the time of creation, both the source and the destination
of the copy operation contain the exact same data and share the same
physical storage.
Singularity of the copy operation
Full Volume Copy is implemented as a single copy operation into an
existing volume, overriding its content and potentially its size. The existing
target of a volume copy can be mapped to a host. From the host
perspective, the content of the volume is changed within a single
transaction. In contrast, creating a new writable snapshot creates a new
object that has to be mapped to the host.
Space allocation
With Full Volume Copy, all the required space for the target volume is reserved at the time of the copy. If the storage pool that contains the target volume cannot allocate the required capacity, the operation fails and has no effect. This is unlike writable snapshots, which consume physical space only as data is changed (see the sketch after this list).
Taking snapshots and mirroring the copied volume
The target of the Full Volume Copy is a master volume. This master
volume can later be used as a source for taking a snapshot or creating a
mirror. However, at the time of the copy, neither snapshots nor remote
mirrors of the target volume are allowed.
Redirect-on-write implementation
With both Full Volume Copy and writable snapshots, while one volume is
being changed, a redirect-on-write operation will ensure a split so that the
other volume maintains the original data.
Performance
Unlike writable snapshots, with Full Volume Copy, the copying process is
performed in the background even if no I/O operations are performed.
Within a certain amount of time, the two volumes will use different copies
of the data, even though they contain the same logical content. This means
that the redirect-on-write overhead of writes occurs only before the initial copy is complete. After this initial copy, there is no additional overhead.
Availability
Full Volume Copy can be performed with source and target volumes in
different storage pools.
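The space-allocation difference noted above can be expressed as a simple admission check. In this sketch (invented names; not product code), a Full Volume Copy must reserve the target's entire capacity up front and fails atomically if the pool cannot provide it, whereas a writable snapshot is admitted immediately and consumes hard capacity only as data changes.

# Illustrative admission checks; not product code.
class Pool:
    def __init__(self, hard_size):
        self.hard_size = hard_size
        self.reserved = 0

    def free_hard_space(self):
        return self.hard_size - self.reserved

def full_volume_copy(target_pool, target_volume_size):
    # All required space is reserved at the time of the copy; the data is
    # then copied in the background, even if no host I/O is performed.
    if target_pool.free_hard_space() < target_volume_size:
        raise RuntimeError("fails with no effect: pool cannot allocate")
    target_pool.reserved += target_volume_size

def writable_snapshot(target_pool):
    # No up-front reservation: space is consumed only for later changes.
    return 0

pool = Pool(hard_size=1000)
full_volume_copy(pool, target_volume_size=600)   # succeeds; 600 reserved
# A second full_volume_copy(pool, 600) would now fail with no effect.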
Snapshot and snapshot group format
This operation deletes the content of a snapshot (or a snapshot group) while maintaining its mapping to the host.
The purpose of the formatting is to allow customers to back up their volumes via snapshots, while maintaining the snapshot ID and the LUN ID. More than a single snapshot can be formatted per volume.
Required reading
Some of the concepts that this topic refers to are introduced in this chapter, as well as in a later chapter of this document. Consult the following topics for background:
Snapshots
“The snapshot lifecycle” on page 26
Snapshot groups
“The snapshot group life cycle” on page 35
Attaching a host
“Host system attachment” on page 11
The format operation has the following results:
v The formatted snapshot is read-only
v The format operation has no impact on performance
v The formatted snapshot does not consume space
v Reading from the formatted snapshot always returns zeroes
v It can be overwritten
v It can be deleted
v Its deletion priority can be changed
Restrictions
No unlock
The formatted snapshot is read-only and can't be unlocked.
No volume restore
The volume that the formatted snapshot belongs to can't be restored from
it.
No restore from another snapshot
The formatted snapshot can't be restored from another snapshot.
No duplicating
The formatted snapshot can't be duplicated.
No re-format
The formatted snapshot can't be formatted again.
No volume copy
The formatted snapshot can't serve as a basis for volume copy.
No resize
The formatted snapshot can't be resized.
Use case
1. Create a snapshot for each LUN that you would like to back up, and mount it to the host.
2. Configure the host to back up this LUN.
3. Format the snapshot.
4. Re-snap. The LUN ID, snapshot ID, and mapping are maintained (see the sketch below).
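A rough model of this cycle, with invented names (the real operations are XCLI or GUI commands, not Python methods), shows the point of the format operation: the host-visible object keeps its snapshot ID, LUN ID, and mapping while its contents are cleared and re-taken.

# Rough model of the backup cycle enabled by snapshot formatting.
class FormattableSnapshot:
    def __init__(self, snap_id, lun_id, volume_data):
        self.snap_id = snap_id    # preserved across format and re-snap
        self.lun_id = lun_id      # preserved across format and re-snap
        self.data = dict(volume_data)

    def format(self):
        self.data = None          # reads now return zeroes; no space used

    def overwrite_from(self, volume_data):
        self.data = dict(volume_data)   # re-snap under the same IDs

volume_data = {"slot0": "A", "slot1": "B"}
snap = FormattableSnapshot(snap_id=42, lun_id=7, volume_data=volume_data)
# ... the host backs up the mounted snapshot here ...
snap.format()
volume_data["slot0"] = "A2"               # the volume keeps changing
snap.overwrite_from(volume_data)          # IDs and mapping are unchanged
assert (snap.snap_id, snap.lun_id) == (42, 7)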
Restrictions in relation to other Spectrum Accelerate operations
Snapshots of the following types can't be formatted:
Internal snapshot
Formatting an internal snapshot hampers the process that it is part of, and is therefore forbidden.
Part of a sync job
Formatting a snapshot that is part of a sync job renders the sync job meaningless, and is therefore forbidden.
Part of a snapshot group
A snapshot that is part of a snapshot group can't be treated as an
individual snapshot.
Snapshot group restrictions
All snapshot format restrictions apply to the snapshot group format
operation.
Chapter 5. Consistency groups
Consistency groups can be used to take simultaneous snapshots of multiple
volumes, thus ensuring consistent copies of a group of volumes.
Creating a synchronized snapshot set is especially important for applications that
use multiple volumes concurrently. A typical example is a database application,
where the database and the transaction logs reside on different storage volumes,
but all of their snapshots must be taken at the same point in time.
Creating a consistency group
Consistency groups are created empty and volumes are added to them later on.
The consistency group is an administrative unit of multiple volumes that facilitates simultaneous snapshots of multiple volumes, mirroring of volume groups, and administration of volume sets. Hyper-Scale Consistency, that is, cross-system consistency (or snapshot) groups, enables coordinated creation of snapshots for inter-dependent consistency groups on multiple systems. This feature is available only through the IBM Hyper-Scale Manager.
Figure 9. The Consistency Group's lifecycle
Taking a snapshot of a Consistency Group
Taking a snapshot for the entire Consistency Group means that a snapshot is taken
for each volume of the Consistency Group at the same point-in-time.
These snapshots are grouped together to represent the volumes of the Consistency
Group at a specific point in time.
Figure 10. A snapshot is taken for each volume of the Consistency Group
In Figure 10, a snapshot is taken for each of the Consistency Group's volumes in
the following order:
Time = t0
Prior to taking the snapshots, all volumes in the consistency group are
active and being read from and written to.
Time = t1
When the command to snapshot the consistency group is issued, I/O is
suspended.
Time = t2
Snapshots are taken at the same point in time.
Time = t3
I/O is resumed and the volumes continue their normal work.
Time = t4
After the snapshots are taken, the volumes resume active state and
continue to be read from and written to.
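The t0 through t4 sequence amounts to a brief suspend-snapshot-resume protocol. The sketch below (invented names; illustrative only) shows why every snapshot in the resulting set carries the same point in time:

import time

def snapshot_consistency_group(volumes):
    # t1: suspend host I/O to every volume in the consistency group.
    for vol in volumes:
        vol["suspended"] = True
    # t2: take all snapshots at the same point in time.
    stamp = time.time()
    snapshot_set = [{"volume": vol["name"],
                     "taken_at": stamp,
                     "data": dict(vol["data"])} for vol in volumes]
    # t3/t4: resume I/O; the volumes return to their active state.
    for vol in volumes:
        vol["suspended"] = False
    return snapshot_set

cg = [{"name": "db", "data": {}, "suspended": False},
      {"name": "log", "data": {}, "suspended": False}]
snap_set = snapshot_consistency_group(cg)
assert len({s["taken_at"] for s in snap_set}) == 1   # one shared timestamp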
Most snapshot operations can be applied to each snapshot in a grouping, known as
a snapshot set. The following items are characteristics of a snapshot set:
v A snapshot set can be locked or unlocked. When you lock or unlock a snapshot
set, all snapshots in the set are locked or unlocked.
v A snapshot set can be duplicated.
v A snapshot set can be deleted. When a snapshot set is deleted, all snapshots in
the set are also deleted.
A snapshot set can be disbanded, which makes all the snapshots in the set
independent snapshots that can be handled individually. The snapshot set itself is
deleted, but the individual snapshots are not.
The snapshot group life cycle
Most snapshot operations can be applied to snapshot groups, where the operation
affects every snapshot in the group.
Figure 11. Most snapshot operations can be applied to snapshot groups
Taking a snapshot group
Creates a snapshot group.
Restoring consistency group from a snapshot group
The main purpose of the snapshot group is the ability to restore the entire
consistency group at once, ensuring that all volumes are synchronized to
the same point in time.
Listing a snapshot group
This command lists snapshot groups with their consistency groups and the
time the snapshots were taken.
Note: All snapshots within a snapshot group are taken at the same time.
Lock and unlock
Similar to unlocking and locking an individual snapshot, the snapshot
group can be rendered writable, and then be written to. A snapshot group
that is unlocked cannot be further used for restoring the consistency group,
even if it is locked again.
The snapshot group can be locked again. At this stage, it cannot be used to
restore the master consistency group. In this situation, the snapshot group
functions like a consistency group of its own.
Overwrite
The snapshot group can be overwritten by another snapshot group.
Rename
The snapshot group can be renamed.
Restricted names
Do not prefix the snapshot group's name with any of the
following:
1. most_recent
2. last_replicated
Duplicate
The snapshot group can be duplicated, thus creating another snapshot
group for the same consistency group with the time stamp of the first
snapshot group.
Disbanding a snapshot group
The snapshots that comprise the snapshot group are each related to its
volume. Although the snapshot group can be rendered inappropriate for
restoring the consistency group, the snapshots that comprise it are still
attached to their volumes. Disbanding the snapshot group detaches all
snapshots from this snapshot group but maintains their individual
connections to their volumes. These individual snapshots cannot restore
the consistency group, but they can restore its volumes individually.
Changing the snapshot group deletion priority
Manually sets the deletion priority of the snapshot group.
Deleting the snapshot group
Deletes the snapshot group along with its snapshots.
Restoring a consistency group
Restoring a consistency group is a single action in which every volume that
belongs to the consistency group is restored from a corresponding snapshot that
belongs to an associated snapshot group.
Not only does the snapshot group have a matching snapshot for each of the volumes, but all of the snapshots also have the same time stamp. This implies that the
restored consistency group contains a consistent picture of its volumes as they
were at a specific point in time.
Note: A consistency group can only be restored from a snapshot group that has a
snapshot for each of the volumes. If either the consistency group or the snapshot
group has changed after the snapshot group is taken, the restore action does not
work.
Chapter 6. Synchronous remote mirroring
Remote mirroring allows replication of data between two geographically remote
sites, allowing full data recovery from the remote site in different disaster
scenarios.
Remote mirroring can be used to replicate the data between two geographically
remote sites. The replication ensures uninterrupted business operation if there is a
total site failure.
The process of ensuring that both storage systems contain identical data at all
times is called remote mirroring. Remote mirroring can be established between two
remote storage systems to provide data protection for the following types of site
disasters:
Local site failure
When a disaster occurs at a certain site, the remote site takes over and
maintains full service to the hosts connected to the original site. The
mirroring is resumed after the failing site recovers.
Split-brain scenario
After a communication loss between the two sites, each site maintains full
service to the hosts. After the connection is resumed and the link (mirror)
is established, the sites complement each other's data to regain full
synchronization.
Synchronous and asynchronous remote mirroring
The two distinct methods of remote mirroring – synchronous and asynchronous –
are described in this chapter and in the following chapter. Throughout this chapter,
the term remote mirroring refers to synchronous remote mirroring, unless clearly
stated otherwise.
Remote mirroring basic concepts
Synchronous remote mirroring provides continuous availability of critical
information in the case of a disaster scenario.
A typical remote mirroring configuration involves the following two sites:
Primary site
The location of the primary storage system.
A local site that contains both the data and the active servers.
Servers may simultaneously perform primary or secondary roles with respect to
their hosts. As a result, a server at one site can be the primary storage system for a
specific application, while simultaneously being the secondary storage system for
another application.
Secondary site
The location of the secondary backup storage system.
A remote site that contains a copy of the data and standby servers.
Following a disaster at the primary site, the servers at the secondary site
become active and start using the copy of the data.
Master volume
The volume which is mirrored. The master volume is usually located at the
primary site.
Slave volume
The volume to which the master volume is mirrored. The slave volume is
usually located at the secondary site.
Synchronous remote mirroring is performed during each write operation. The
write operation issued by a host is applied to both the primary and the secondary
storage systems.
Figure 12. Synchronous remote mirroring scheme
Note: When using remote mirroring with Spectrum Accelerate, data is transferred
over the mirror connectivity in uncompressed format. The data is deduplicated and
compressed again after it reaches the remote system.
When a volume is mirrored, reading is performed from the master volume, while
writing is performed on both the master and the slave volumes, as previously
described.
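The essence of the synchronous write path is that the host is acknowledged only after both peers hold the data, while reads are served locally. A minimal sketch, with invented names (the actual replication runs inside the storage systems, not in host code):

# Minimal model of the synchronous mirroring data path.
class MirroredVolume:
    def __init__(self):
        self.master = {}   # primary copy
        self.slave = {}    # secondary copy

    def write(self, block, data):
        self.master[block] = data   # applied to the primary system...
        self.slave[block] = data    # ...and replicated to the secondary
        return "ack"                # host acknowledged only after both

    def read(self, block):
        return self.master.get(block)   # reads come from the master

vol = MirroredVolume()
vol.write(0, "payload")
assert vol.master == vol.slave   # peers are identical after each ack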
Synchronous mirroring operation
Remote mirroring operations involve configuration, initialization, ongoing operation,
handling of communication failures, and role switching.
The following list describes the remote mirroring operations:
Configuration
Configuration is the act of defining master and slave volumes for a mirror
relation.
Initialization
Remote mirroring operations begin with a master volume that contains
data and a new slave volume. Next, data is copied from the master volume
to the slave volume. This process is called initialization. Initialization is
performed once in the lifetime of a remote mirroring coupling. After it is
successfully completed, both volumes are synchronized.
Ongoing operation
After the initialization process is complete, remote mirroring is activated.
During this activity, all data is written to the master volume and to the
slave volume. The write operation is complete after an acknowledgment is
received from the slave volume. At any point, the master and slave
volumes contain identical data except for any unacknowledged (pending)
writes.
Handling of communication failures
Communication between sites may break. In this case, the primary site
continues its function and updates the secondary site after communication
resumes. This process is called synchronization.
Role switching
When needed, a volume can change its role from master to slave or vice
versa, either as a result of a disaster at the primary site, maintenance
operations, or intentionally, to test the disaster recovery procedures.
Using snapshots in synchronous mirroring
The storage system uses snapshots to identify inconsistencies that may arise
between updates.
If the link between volumes is disrupted or if the mirroring is deactivated, the
master continues accepting host writes, but does not replicate the writes onto the
slave. After the mirroring is restored and activated, the system takes a snapshot of
the slave, which represents the data that is known to be mirrored. This snapshot is
called the last-consistent snapshot. Only then are more recent writes to the master replicated to the slave.
The last-consistent snapshot is automatically deleted after the resynchronization is
complete for all mirrors on the same target. However, if the slave volume role is
changed to master during resynchronization, the last-consistent snapshot will not
be deleted.
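The role of the last-consistent snapshot during resynchronization can be sketched as follows (an illustrative model with invented names). Before the writes accumulated while the link was down are replayed, the slave's known-mirrored state is preserved; once resynchronization completes for all mirrors on the target, the snapshot is dropped, unless the slave was promoted to master in the meantime.

def resynchronize(pending_master_writes, slave, slave_role="slave"):
    # Preserve the slave's known-mirrored state before replaying changes.
    last_consistent = dict(slave)
    for block, data in pending_master_writes.items():
        slave[block] = data
    # Drop the snapshot after resynchronization completes, unless the
    # slave volume was promoted to master during the process.
    if slave_role == "slave":
        last_consistent = None
    return slave, last_consistent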
Synchronous mirroring configuration and activation options
The remote mirroring configuration process involves configuring volumes and
volume pairs.
Volume configuration
The following concepts are configured for volumes and the relations between them. The volume role is the current function of the volume. The following volume roles are available:
None
The volume is created using normal volume creation procedures and is not
mirrored.
Master
The volume is directly written to by the host.
Slave
A backup to the master volume.
Data can be read from the slave volume by a host. Data cannot be written
to the slave volume by any host.
Mixed configuration
In some cases, the volumes on a single storage system can be defined in a
mixed configuration. For example, a storage system can contain volumes
whose role is defined as master, as well as volumes whose role is defined
as slave. In addition, some volumes might not be involved in a remote
mirroring coupling at all.
Configuration error
In some cases, the configuration on both sides might be changed in a non-compatible way. This is defined as a configuration error. For example, switching the role of only one side while communication is down causes a configuration error when the connection resumes, because both sides are then configured with the same role.
Coupling activation
When a pair of volumes point to each other, it is referred to as a coupling. In a
coupling relationship, two volumes, referred to as peers, participate in a remote
mirroring system with the slave peer serving as the backup for the master peer.
The coupling configuration is identical for both master volumes and slave
volumes.
Remote mirroring can be manually activated and deactivated per coupling. When
activated, the coupling is in Active mode. When deactivated, the coupling is in
Standby mode.
These modes have the following functions:
Active Remote mirroring is functioning and the data is replicated.
Standby
Remote mirroring is deactivated. The data is not replicated to the slave
volume.
Standby mode is used mainly when maintenance is performed on the
secondary site or during communication failures between the sites. In this
mode, the master volumes will not generate mirroring-failure alerts.
The coupling lifecycle has the following characteristics:
v When a coupling is created, it is always initially in Standby mode.
v Only a coupling in Standby mode can be deleted.
Supported network configurations
Synchronous mirroring supports the following network configurations:
v Either Fibre Channel (FC) or iSCSI connectivity can be used for replication,
regardless of the connectivity that is used by the host to access the master.
v The remote system must be defined in the remote target connectivity definitions.
v All the volumes that belong to the same consistency group must reside on the
same remote system.
v Master and slave volumes must have exactly the same size.
Synchronous mirroring statuses
The status of a synchronous remote mirroring volume depends on the
communication link and on the coupling between the master volume and the slave
volume.
The following table lists the different statuses of a synchronous remote mirroring
volume during remote mirroring operations.
Table 1. Synchronous mirroring statuses

Link
Operational status (possible values: Up, Down)
Specifies if the communications link is up or down. The link status of the master volume is also the link status of the slave volume.

Coupling
Operational status (possible values: Operational, Non-operational)
Specifies if remote mirroring is working. To be operational, the link status must be up and the coupling must be activated. If the link is down or if the remote mirroring feature is in Standby mode, the status is Non-operational.
Synchronization status (possible values: Initialization, Synchronized, Unsynchronized, Consistent, Inconsistent)
For a detailed description of each status, see "Synchronization status" below.
Last-secondary timestamp (possible values: point-in-time date)
Timestamp for when the secondary volume was last synchronized.
Synchronization progress
The relative portion of data remaining to be synchronized between the master and slave volumes due to non-operational coupling.
Secondary-locked (possible values: Boolean)
If the slave volume is locked for writing due to lack of space, the Secondary-locked status is true. This may occur during the synchronization process, when there is not enough space for the last-consistent snapshot. Otherwise, the Secondary-locked status is false.
Configuration error (possible values: Boolean)
If the configuration of the master and slave volumes is inconsistent, the Configuration error status is true.
Synchronization status
The synchronization status reflects the consistency of the data between the master
and slave volumes.
Because remote mirroring is for ensuring that the slave volume is an identical copy
of the master volume, this status indicates whether this objective is currently
attained.
The possible synchronization statuses for the master volume are:
Initialization
The first step in remote mirroring is to create a copy of the data from the
master volume to the slave volume. During this step, the coupling status
remains Initialization.
Synchronized (master volume only)
This status indicates that all data that was written to the master volume
and acknowledged has also been written to the slave volume. Ideally, the
master and slave volumes should always be synchronized. This does not
imply that the two volumes are identical because at any time there might
be a limited amount of data that was written to one volume, but was not
yet acknowledged by the slave volume. These are also known as pending
writes.
Unsynchronized (master volume only)
After a volume has completed the Initialization stage and achieved the
Synchronized status, a volume can become unsynchronized. This occurs
when it is not known whether all the data that was written to the master
volume was also written to the slave volume. This status occurs in the
following cases:
v Communications link is down – As a result of the communication link
going down, some data might have been written to the master volume,
but was not yet replicated to the slave volume.
v Secondary system is down – This is similar to communication link
errors because in this state, the primary system is updated while the
secondary system is not.
v Remote mirroring is deactivated – As a result of the remote mirroring
deactivation, some data might have been written to the master volume
and not to the slave volume.
Consistent
The slave volume is an identical copy of the master volume.
Inconsistent
There is a discrepancy between the data on the master and slave volumes.
It is always possible to reestablish the Synchronized status when the link is reestablished or the remote mirroring feature is reactivated, regardless of the reason for the Unsynchronized status.
Because all updates to the master volume that are not written to the slave volume
are recorded, these updates are written to the slave volume. The synchronization
status remains Unsynchronized from the time that the coupling is not operational
until the synchronization process is completed successfully.
Last-secondary timestamp
A timestamp is taken when the coupling between the master and slave volumes
becomes non-operational.
This time stamp specifies the last time that the slave volume was consistent with
the master volume. This status has no meaning if the coupling's synchronization
state is still Initialization.
For synchronized coupling, this timestamp specifies the current time. Most
importantly, for an unsynchronized coupling, this timestamp denotes the time
when the coupling became non-operational.
The timestamp is returned to current only after the coupling is operational and the
master and slave volumes are synchronized.
Synchronization progress
During the synchronization process, when the slave volumes are being updated
with previously written data, the volumes are given a dynamic synchronization
process status.
This status comprises the following sub-statuses:
Size to complete
The size of data that requires synchronization.
Part to synchronize
The size to synchronize divided by the maximum size-to-synchronize since
the last time the synchronization process started. For coupling
initialization, the size-to-synchronize is divided by the volume size.
Time to synchronize
Time estimation that is required to complete the synchronization process
and achieve synchronization, based on past rate.
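These sub-statuses reduce to two simple ratios, sketched below with invented variable names:

def synchronization_progress(size_to_complete, max_size_since_start,
                             past_rate):
    # Fraction of the backlog still outstanding. During coupling
    # initialization, the divisor is the volume size instead.
    part_to_synchronize = size_to_complete / max_size_since_start
    # Time estimate based on the synchronization rate observed so far.
    time_to_synchronize = size_to_complete / past_rate
    return part_to_synchronize, time_to_synchronize

# For example, 120 GB left of a 300 GB backlog at 2 GB per minute:
print(synchronization_progress(120, 300, 2))   # (0.4, 60.0)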
Secondary-locked error status
When synchronization is in progress, there is a period in which the slave volume is
not consistent with the master volume. While in this state, the slave volume
maintains a last-consistent snapshot. Because every I/O operation may require a copy-on-write partition, this can result in insufficient space and, consequently, in the failure of I/O operations to the slave volume.
Whenever I/O operations to the slave volume fail due to insufficient space, all
couplings in the system are set to the Secondary-locked status and become
non-operational. The administrator is notified of a critical event, and can free space
on the system containing the slave volume.
Synchronous mirroring role switchover and role change
When role switchover occurs, the master volume becomes the slave volume, and
the slave volume becomes the master volume.
Role switching can occur when the synchronous remote mirroring function is
either operational or not operational, as described in the following sections.
Role switchover when remote mirroring is operational
When the remote mirroring function is operational, role switching between master
and slave volumes can be initiated from the management GUI or CLI.
There are two typical reasons for performing a switchover when communication
between the volumes exists:
Drills Drills can be performed on a regular basis to test the functioning of the
secondary site. In a drill, an administrator simulates a disaster and tests
that all procedures are operating smoothly.
Scheduled maintenance
To perform maintenance at the primary site, switch operations to the
secondary site on the day before the maintenance. This can be done as a
preemptive measure when a primary site problem is known to occur.
The CLI command that performs the role switchover must be run on the master
volume. The switchover cannot be performed if the master and slave volumes are
not synchronized.
Role switchover when remote mirroring is not operational
A more complex situation for role switching is when there is no communication
between the two sites, either because of a network malfunction, or because the
primary site is no longer operational.
The CLI command for this scenario is mirror_change_role. Because there is no
communication between the two sites, the command should be issued on both sites
concurrently, or at least before communication resumes. Otherwise, the sites will
not be able to establish communication.
Switchover procedures differ depending on whether the master and slave volumes
are connected or not. As a general rule:
v When the coupling is deactivated, it is acceptable to change the role on one side
only, assuming that the other side will be changed as well before communication
resumes.
v If the coupling is activated, but is either unsynchronized or nonoperational due
to a link error, an administrator must either wait for the coupling to be
synchronized, or deactivate the coupling.
v On the slave volume, an administrator can change the role even if coupling is
active. It is assumed that the coupling will be deactivated on the master volume
and the role switch will be performed there as well in parallel. If not, a
configuration error occurs on the original master volume.
Switching secondary to primary
The role of the slave volume can be switched to master using the management
GUI or CLI. After this switchover, the following takes effect:
v The slave volume is now the master volume.
v The coupling has the status of unsynchronized.
v The coupling remains in Standby mode, meaning that the remote mirroring is
deactivated. This ensures an orderly activation when the role of the other site is
switched.
The new master volume starts to accept write commands from local hosts. Because the coupling is not active, it maintains, like any master volume, a log of which write operations must be sent to the slave when communication resumes.
Typically, after switching the slave to the master volume, an administrator also
switches the master to the slave volume, at least before communication resumes. If
both volumes are left with the same role, a configuration error occurs.
Switching primary to secondary
When coupling is inactive, the primary machine can switch roles. After such a
switch, the master volume becomes the slave.
Before switching roles, the master volume is inactive. Hence, it is in the
unsynchronized state, and it might contain data that has not been replicated. Such
data will be lost. When the master volume becomes slave, this data must be
discarded to match the data on the peer volume, which is now the new master
volume. In this case, an event is created, summarizing the size of the lost data.
Upon reestablishing the connection, the recovery volume (the current slave, which was the master) updates the remote volume (the new master) with its list of uncommitted data. It is then the responsibility of the new master volume to synchronize these changes to the local volume (the new slave).
I/O operations in synchronous mirroring
I/O operations are performed on the master and slave volumes across various
configuration options.
I/O on the master volume
Read
All data is read from the primary (local) site regardless of whether the
system is synchronized.
Write
v If the coupling is operational, data is written to both the master and
slave volumes.
v If the coupling is non-operational, data is written to the master volume
only, and the master is aware that the slave is currently not
synchronized.
I/O on the slave volume
The LUN of a slave volume can be mapped to remote hosts. In this case, the slave
volume will be accessible to those remote hosts as Read-only.
These mappings are then used by remote hosts for master-slave role switchover.
When the slave volume becomes the master, hosts can write to it on the remote
site. When the master volume becomes a slave volume, it becomes Read-only and
can be updated only by data replicated from the new master volume.
Read
Data can be read from the slave volume as from any other volume.
Write
If a host attempts to write to the slave volume, it receives a volume read-only SCSI error.
Synchronization speed optimization
The storage system has two global parameters that limit the maximum rate used
for initial synchronization and for synchronization after non-operational coupling.
These limits are used to prevent a situation where synchronization uses too much
of the system or communication line resources, and hampers the host's I/O
performance.
The values of these global parameters can be viewed by the user, but setting or
changing them should be performed by an IBM technical support representative.
Dynamic rate adaptation
The storage system provides a mechanism for handling insufficient bandwidth and
external connections whenever remote mirroring is used.
The mirroring process replicates data from one site to the other. To accomplish this,
the process depends on the availability of bandwidth between the local and remote
storage systems. The mirroring synchronization rate parameter determines the
bandwidth that is required for a successful mirroring.
You can request that an IBM technical support representative manually modify this parameter. To define its value, the IBM technical support representative takes into account the availability of bandwidth for the mirroring process; the storage system then adjusts itself to the available bandwidth.
The storage system prevents I/O timeouts through continuously measuring the
I/O latency. Excessive incoming I/Os are queued until they can be submitted. The
mirroring rate dynamically adapts to the number of queued incoming I/Os,
allowing for a smooth operation of the mirroring process.
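Conceptually, this is feedback control over the queue of incoming I/Os. The following toy sketch illustrates the idea; the thresholds and adjustment factors are invented, and the real parameters are internal to the product:

def adapt_mirroring_rate(current_rate, queued_ios,
                         high_water=100, low_water=10,
                         min_rate=1, max_rate=1000):
    # Toy feedback rule: back off while the queue of incoming I/Os grows,
    # and speed up again while it drains.
    if queued_ios > high_water:
        return max(min_rate, current_rate // 2)
    if queued_ios < low_water:
        return min(max_rate, current_rate * 2)
    return current_rate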
Implications on volume and snapshot management
When using synchronous mirroring, the default behavior of volumes and snapshots changes in order to protect the mirroring operation, as follows:
v Renaming a volume changes the name of the last-consistent and most updated
snapshots.
v Deleting all snapshots does not delete the last-consistent and most updated
snapshots.
v Resizing a master volume automatically resizes its slave volume.
v A master volume cannot be resized when the link is down.
v Resizing, deleting, and formatting are not permitted on a slave volume.
v A master volume cannot be formatted. If a master volume must be formatted, an
administrator must first deactivate the mirroring, delete the mirroring, format
both the slave and master volumes, and then define the mirroring again.
v Slave or master volumes cannot be the target of a copy operation.
v Locking and unlocking are not permitted on a slave volume.
v The last-consistent and most updated snapshots cannot be unlocked.
v Deleting is not permitted on a master volume.
v Restoring from a snapshot is not permitted on a master volume.
v Restoring from a snapshot is not permitted on a slave volume.
v A snapshot cannot be created with the same name as the last-consistent or most
updated snapshot.
Coupling synchronization process
When a failure condition has been resolved, remote mirroring begins the process of
synchronizing the coupling. This process updates the slave volume with all the
changes that occurred while the coupling was not operational.
The following diagram shows the various coupling states, together with the actions
that are performed in each state.
Figure 13. Coupling states and actions
The following list describes each coupling state:
Initialization
The slave volume has a Synchronization status of Initialization. During this
state, data from the master volume is copied to the slave volume.
Synchronized
This is the working state of the coupling, where the data in the slave
volume is consistent with the data in the master volume.
Timestamp
When a link is down, or when a coupling is deactivated, a timestamp
needs to be taken. After the timestamp is taken, the state changes to
Timestamp, and stays so until the link is restored, or the coupling is
reactivated.
Unsynchronized
Remote mirroring is recovering from a communications failure or
deactivation. The master and slave volumes are being synchronized.
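The states above can be read as a small state machine. The transition table below is a sketch consistent with the descriptions in this section; the event names are invented:

# Sketch of the coupling state machine implied by Figure 13.
TRANSITIONS = {
    ("Initialization", "initial_copy_done"): "Synchronized",
    ("Synchronized", "link_down_or_deactivated"): "Timestamp",
    ("Timestamp", "link_restored_or_reactivated"): "Unsynchronized",
    ("Unsynchronized", "resync_complete"): "Synchronized",
}

def next_state(state, event):
    return TRANSITIONS.get((state, event), state)

state = "Initialization"
for event in ("initial_copy_done", "link_down_or_deactivated",
              "link_restored_or_reactivated", "resync_complete"):
    state = next_state(state, event)
print(state)   # Synchronized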
Coupling recovery
When remote mirroring recovers from a non-operational coupling, the following
actions take place:
v If the slave volume is in the Synchronized state, a last-consistent snapshot of the
slave volume is created and named with the string secondary-volume-timedate-consistent-state.
v The master volume updates the slave volume until it reaches the Synchronized
state.
v When all couplings that mirror volumes between the same pair of systems are
synchronized, the master volume deletes the special snapshot.
Uncommitted data
For best-effort coupling, when the coupling is in Unsynchronized state, the system
must track which data in the master volume has been changed, so that these
changes can be committed to the slave when the coupling becomes operational
again.
The parts of the master volume that must be committed to the slave volume and
must be marked are called uncommitted data.
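One natural way to realize this tracking is a dirty set over the master's storage units. The sketch below assumes that design for illustration; it is not a description of the product's internal format:

# Dirty-set sketch of uncommitted-data tracking. Illustrative only.
uncommitted = set()   # parts of the master changed while unsynchronized

def master_write(part, coupling_operational):
    if not coupling_operational:
        uncommitted.add(part)   # remember what must reach the slave

def commit_to_slave(replicate):
    # When the coupling becomes operational again, replay and clear.
    for part in sorted(uncommitted):
        replicate(part)         # send this part to the slave volume
    uncommitted.clear()

master_write("part-7", coupling_operational=False)
commit_to_slave(replicate=lambda part: None)   # stand-in for replication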
Constraints and limitations
The following constraints and limitations apply to the synchronization process:
v The size, part, or time-to-synchronize are relevant only if the synchronization
status is Unsynchronized.
v The last-secondary time stamp is only relevant if the coupling is Unsynchronized.
Synchronous mirroring of consistency groups
Mirroring can be applied to whole consistency groups.
The following restrictions apply:
v All volumes in a consistency group have the same role, either master or slave
v All mirrors in a consistency group are between the same two systems
Chapter 7. Asynchronous remote mirroring
Asynchronous mirroring enables high availability of critical data by
asynchronously replicating data updates from a primary storage peer to a remote,
secondary peer.
The relative merits of asynchronous and synchronous mirroring are best illustrated
by examining them in the context of two critical objectives:
v Responsiveness of the storage system
v Currency of mirrored data
With synchronous mirroring, host writes are acknowledged by the storage system
only after being recorded on both peers in a mirroring relationship. This yields
high currency of mirrored data (both mirroring peers have the same data), yet
results in less than optimal system responsiveness because the local peer cannot
acknowledge the host write until the remote peer acknowledges it. This type of
process incurs latency that increases as the distance between peers increases, but
both peers are synchronized (first image below).
Asynchronous mirroring (second image below) is advantageous in situations that
warrant replication between distant sites because it eliminates the latency inherent
to synchronous mirroring, and might lower implementation costs. Careful planning
of asynchronous mirroring can minimize the currency gap between mirroring
peers, and can help realize better data availability and cost savings.
Note: The following images show storage systems that also represent IBM
Spectrum Accelerate deployments.
Figure 14. Synchronous remote mirroring concept
Figure 15. Asynchronous mirroring - no extended response time lag
Note: Synchronous mirroring is covered in Chapter 6, “Synchronous remote
mirroring,” on page 39.
Asynchronous mirroring highlights
The following are highlights of the Spectrum Accelerate asynchronous mirroring
capability.
Advanced snapshot-based technology
Spectrum Accelerate asynchronous mirroring is based on IBM snapshot
technology, which streamlines implementation while minimizing impact on
general system performance. The technology leverages functionality that
supports mirroring of complete systems, translating to hundreds or
thousands of mirrors. For a detailed description, see “Snapshot-based
technology in asynchronous mirroring” on page 53.
Mirroring of consistency groups
Spectrum Accelerate supports definition of mirrored consistency groups,
which is highly advantageous to enterprises, facilitating easy management
of replication for all volumes that belong to a single consistency group.
This enables streamlined restoration of consistent volume groups from a
remote site upon unavailability of the primary site.
Automatic and manual replication
Asynchronous mirrors can be assigned a user-configurable schedule for
automatic, interval-based replication of changes, or can be configured to
replicate changes upon issuance of a manual (or scripted) user command.
Automatic replication allows you to establish crash-consistent replicas,
whereas manual replication allows you to establish application-consistent
replicas, if required. You can combine both approaches, because you can
define mirrors with a scheduled replication and issue manual replication
jobs for these mirrors as needed.
Multiple RPOs (Recovery Point Objectives) and multiple schedules
Spectrum Accelerate asynchronous mirroring enables a different RPO to be
specified for each mirror, rather than forcing a single RPO for all mirrors.
This can be used to prioritize replication of some mirrors over others,
making it easier to accommodate application RPO requirements as well as
bandwidth constraints.
Flexible and independent mirroring intervals
Spectrum Accelerate asynchronous mirroring supports schedules with
intervals ranging between 20 seconds and 12 hours. Moreover, intervals are
independent from the mirroring RPO. This enhances the ability to fine tune
replication to accommodate bandwidth constraints and different RPOs.
Flexible pool management
Spectrum Accelerate asynchronous mirroring enables the mirroring of
volumes and consistency groups that are stored in thin provisioned pools.
This applies to both mirroring peers.
Bi-directional mirroring
Spectrum Accelerate systems can host multiple mirror sources and targets
concurrently, supporting over a thousand mirrors per system. Furthermore,
any given Spectrum Accelerate can have mirroring relationships with
several other Spectrum Accelerate systems. This enables enormous
flexibility when setting mirroring configurations.
The number of systems with which the storage system can have mirroring
relationships is specified in the Spectrum Accelerate Data Sheet.
Concurrent synchronous and asynchronous mirroring
The Spectrum Accelerate can concurrently run synchronous and
asynchronous mirrors.
Easy transition between peer roles
Spectrum Accelerate mirror peers can be easily changed between master
and slave.
Easy transition from independent volume mirrors into consistency group mirror
The Spectrum Accelerate allows for easy configuration of consistency
group mirrors, easy addition of mirrored volumes into a mirrored
consistency group, and easy removal of a volume from a mirrored
consistency group while preserving mirroring for that volume.
Control over synchronization rates per target
The asynchronous mirroring implementation enables administrators to
configure different system mirroring rates with each target system.
Comprehensive monitoring and events
Spectrum Accelerate systems generate events and monitor critical
asynchronous mirroring-related processes to produce important data that
can be used to assess the mirroring performance.
Easy automation via scripts
All asynchronous mirroring commands can be automated through scripts.
Snapshot-based technology in asynchronous mirroring
Spectrum Accelerate features an innovative snapshot-based technology for
asynchronous mirroring that facilitates concurrent mirrors with different recovery
objectives.
With Spectrum Accelerate asynchronous mirroring, write order on the master is not
preserved on the slave. As a result, a snapshot taken of the slave at any moment is
most likely inconsistent and therefore not valid. To ensure high availability of data
in the event of a failure or unavailability of the master, it is imperative to maintain
a consistent replica of the master that can ensure service continuity.
This is achieved through Spectrum Accelerate snapshots. Spectrum Accelerate
asynchronous mirroring employs snapshots to record the state of the master, and
calculates the difference between successive snapshots to determine the data that
needs to be copied from the master to the slave as part of a corresponding replication
process. Upon completion of the replication process, a snapshot is taken of the
slave and reflects a valid replica of the master.
Below are select technological properties that explain how the snapshot-based
technology helps realize effective asynchronous mirroring:
v Spectrum Accelerate supports a practically unlimited number of snapshots,
which facilitates mirroring of complete systems with practically no limitation on
the number of mirrored volumes supported
v Spectrum Accelerate implements memory optimization techniques that further
maximize the performance attainable by minimizing disk access.
Disaster recovery scenarios in asynchronous mirroring
A disaster is a situation where one of the sites (either the master or the slave) fails,
or the communication between the master site and the slave site is lost.
Asynchronous mirroring attains synchronization between master and slave peers
through a recurring data replication process called a Sync Job. Running at
user-configurable schedules, the Sync Job takes the most recent snapshot of the
master and compares this snapshot with the last replicated snapshot on the slave.
The Sync Job then synchronizes the master data corresponding to these differences
with the slave. At the completion of a sync job, a new last replicated snapshot is
created both on the slave and on the master.
Disaster recovery scenarios handle cases in which one of the snapshots mentioned
above becomes unavailable. These cases are:
Unplanned service disruption
1. Failover
Unplanned service disruption starts with a failover to the slave.
The slave is promoted and becomes the new master, serving host
requests.
2. Recovery
Next, whenever the master and link are restored, the replication is
set from the promoted slave (the new master) onto the demoted
master (the new slave).
Alternatively: No recovery
If recovery is not possible, a new mirroring is established
on the slave. The original mirroring is deleted and a new
mirroring relationship is defined.
3. Failback
Following the recovery, the original mirroring configuration is
reestablished. The master maintains its role and replicates to the
slave.
Planned service disruption
1. Planned role switch
Planned service disruption starts with a coordinated demotion of
the master to the slave, while the slave is promoted to become the
new master. The promoted slave serves host requests, and
replicates to the demoted master. On the host side, the host is
disconnected from the demoted master and connected to the new
master.
2. Recovery
Next, whenever the master and link are restored, the replication is
set from the promoted slave (the new master) onto the demoted
master (the new slave).
3. Failback
Following the recovery, the original mirroring configuration is
reestablished. The master maintains its role and replicates to the
slave.
Testing
There are two ways to test the slave replica:
v Create a snapshot of the last-replicated snapshot (LRS) on the slave. Then
map a host to it and verify the data.
v Disconnect the host from the master, switch roles, and connect the host
to the slave. This is a more realistic, but also a more disruptive test.
Note: Contact IBM Support in case of disaster, or before any disaster
recovery test, to obtain clear guidelines and to ensure a successful test.
Chapter 8. Volume migration with IBM Hyper-Scale Mobility
IBM Hyper-Scale Mobility enables a non-disruptive migration of volumes from one
storage system to another.
IBM Hyper-Scale Mobility helps achieve data migration in the following scenarios:
v Migrating data out of an over-provisioned system.
v Migrating all the data from a system that will be decommissioned or
re-purposed.
v Migrating data to another storage system to achieve adequate (lower or higher)
performance, or to load-balance systems to ensure uniform performance.
v Migrating data to another storage system to load-balance capacity utilization.
The IBM Hyper-Scale Mobility process
This section walks you through the IBM Hyper-Scale Mobility process.
Hyper-Scale Mobility moves a volume from one system to another while the host
is using the volume. To accomplish this, I/O paths are manipulated by the
storage systems, without involving host configuration, and the volume identity
is cloned on the target system. In addition, direct paths from the host to the
target system need to be established, and the paths from the host to the
original system can finally be removed. Host I/Os are not interrupted
throughout the migration process.
The key stages of the IBM Hyper-Scale Mobility and the respective states of
volumes are depicted in Figure 16 on page 58 and explained in detail in Table 2 on
page 58.
For an in-depth practical guide to using IBM Hyper-Scale Mobility, see the
Redbooks publication IBM Hyper-Scale Mobility Overview and Usage.
Figure 16. Flow of the IBM Hyper-Scale Mobility
Table 2. The IBM Hyper-Scale Mobility process

Setup
Description: A volume is automatically created at the destination storage
system with the same name as the source volume. The relation between the
source and destination volumes is established. The two volumes are not yet
synchronized.

Migration
Description: New data is written to the source and replicated to the
destination.
Source and destination volume states: Initializing - The content of the
source volume is copied to the destination volume. The two volumes are not
yet synchronized. This state is similar to the Initializing state of
synchronous mirroring (see “Synchronous mirroring statuses” on page 42). As
long as the source instance cannot confirm that all of the writes were
acknowledged by the destination volume, the state remains Initializing.

Proxy-Ready
Description: The replication of the source volume data is complete when the
destination is synchronized. The source serves host writes as a proxy between
the host and the destination. The system administrator issues a command that
moves the IBM Hyper-Scale Mobility relation to the proxy. Next, the system
administrator maps the host to the destination. In this state, a single copy
of the data exists on the destination, and any I/O directed to the source is
redirected to the destination.
Source and destination volume states: Synchronized - The source was wholly
copied to the destination. This state is similar to the Synchronized state of
synchronous mirroring (see “Synchronous mirroring statuses” on page 42).

Proxy
Description: New data is written to the source and is migrated to the
destination. The source serves host requests as if it were the target,
impersonating the destination volume.
Source and destination volume states: Proxy - The source acts as a proxy to
the destination.

Cleanup
Description: After validating that the host has connectivity to the
destination volume through the new paths, the storage administrator unmaps
the source volume on the source storage system from the host. Then the
storage administrator ends the proxy and deletes the relationship.
Chapter 9. Data-at-rest encryption
The IBM Spectrum Accelerate utilizes full disk encryption for regulation
compliance and security audit readiness.
Data-at-rest encryption protects against the potential exposure of storage system
sensitive data on discarded or stolen media. The encryption ensures that the data
cannot be read, as long as its encryption key is secured. This feature complements
physical security at the customer site, protecting the customer from unauthorized
access to the data.
The encryption of the disk drives is transparent to hosts that are attached to the
storage system, and does not affect either their management or performance.
The IBM Spectrum Accelerate data-at-rest encryption design is TCG-compliant.
Consequently, SCSI security protocol in/out commands are directly issued to
TCG-compliant SED drives. While no known HBA (host bus adapter) is supposed
to block such commands, certain RAID controllers do this by design, thus
disabling the IBM Spectrum Accelerate encryption altogether.
The SSDs used as flash cache are also encrypted with software-based encryption.
HIPAA compatibility
IBM Spectrum Accelerate complies with the following security requirements and
standards.
The IBM Spectrum Accelerate data-at-rest encryption complies with HIPAA Federal
requirements as follows:
v User data is inaccessible without XIV system specific keying material.
v Physical separation of encryption keys from encrypted data, by using an external
key server
v Cryptographic keys may be replaced at the user’s initiative
v All keys stored must be wrapped and stored in ciphertext (not reside in
plaintext or hidden/obfuscated)
v AES 256 encryption is used to wrap keys and encrypt data, RSA 2048 encryption
is used for public key cryptography
v Key exchanges are performed securely over encrypted interconnect traffic, using
AES 256 encryption
v Encryption configuration and settings must be auditable; the related
information and notifications are therefore kept in the event log.
Chapter 10. Data migration
The use of any new storage system frequently requires the transfer of large
amounts of data from the previous storage system to the new storage system.
This can require many hours or even days; usually an amount of time that most
enterprises cannot afford to be without a working system. The data migration
feature enables production to be maintained while data transfer is in progress.
Given the nature of the data migration process, it is recommended that you consult
and rely on the IBM Spectrum Accelerate support team when planning a data
migration.
The data migration feature enables the smooth transition of a host working with
the previous storage system to a Spectrum Accelerate by:
v Immediately connecting the host to the Spectrum Accelerate storage system and
providing the host with direct access to the most up-to-date data even before
data has been copied from the previous storage system.
v Synchronizing the data from the previous storage system by transparently
copying the contents of the previous storage system to the new storage system
as a background process.
During data migration, the host is connected directly to the Spectrum Accelerate
storage system and is disconnected from the previous storage system. Spectrum
Accelerate is connected to the previous storage system. The new storage system
and the previous storage system must remain connected until both storage
systems are synchronized and data migration is completed. The previous storage
system perceives the new storage system as a host, reading from and optionally
writing to the volume that is being migrated. The host reads and writes data to the
new storage system, while the new storage system might need to read or write the
data to the previous storage system to serve the command of the host.
The communication between the host and Spectrum Accelerate and the
communication between Spectrum Accelerate and the previous storage system is
iSCSI.
I/O handling in data migration
I/O is handled separately for read and write requests.
Serving read requests
Spectrum Accelerate serves all the host's data read requests in a transparent
manner without requiring any action by the host, as follows:
v If the requested data has already been copied to the new storage system, it is
served from the new storage system.
v If the requested data has not yet been copied to the new storage system,
Spectrum Accelerate retrieves it from the previous storage system and then
serves it to the host.
Serving write requests
Spectrum Accelerate serves all host's data write requests in a transparent manner
without requiring any action by the host.
Data migration provides the following two alternative Spectrum Accelerate
configurations for handling write requests from a host:
Source updating:
A host's write requests are written by Spectrum Accelerate to the new
storage system, as well as to the previous storage system. In this case, the
previous storage system remains completely updated during the
background copying process. Throughout the process, the volume of the
previous storage system and the volume of the new storage system are
identical.
Write commands are performed synchronously, so Spectrum Accelerate
only acknowledges the write operation after writing to the new storage,
writing to the previous storage system, and receiving an acknowledgement
from the previous storage system. Furthermore, if, due to a communication
error or any other error, the writing to the previous storage system fails,
Spectrum Accelerate reports to the host that the write operation has failed.
No source updating:
A host's write requests are only written by Spectrum Accelerate to the new
storage system and are not written to the previous storage system. In this
case, the previous storage system is not updated during the background
copying process, and therefore the two storage systems will never be
synchronized. The volume of the previous storage system will remain
intact and will not be changed throughout the data migration process.
Data migration stages
Data migration includes the following stages.
Figure 17 on page 65 describes the process of migrating a volume from a previous
storage system to the new storage system. It also shows how the Spectrum
Accelerate synchronizes its data with the previous storage system, and how it
handles the data requests of a host throughout all these stages of synchronization.
Figure 17. Data migration steps
Initial configuration
The new storage system volume must be formatted before data migration can
begin. The new storage must be connected as a host to the previous storage system
whose data it will be serving.
The volume on the previous storage system and the volume on the new storage
system must have an equal number of blocks. This is verified upon activation of
the data migration process.
You can then initiate data migration and configure all hosts to work directly and
solely with the Spectrum Accelerate.
Data migration is defined through the dm_define command.
Testing the data migration configuration
Before connecting the host to the new storage system, use the dm_test CLI
command to test the data migration definitions to verify that the Spectrum
Accelerate can access the previous storage system.
Activating data migration
After you have tested the connection between the new storage system and the
previous storage system, activate data migration using the dm_activate CLI
command and connect the host to Spectrum Accelerate. From this point forward,
the host reads and writes data to the new storage system, and the Spectrum
Accelerate will read and optionally write to the previous storage system.
Data migration can be deactivated using the dm_deactivate CLI command. It can
then be activated again. While the data migration is deactivated, the volume
cannot be accessed by hosts (neither read nor write access).
Background copying and serving I/O operations
Once data migration is initiated, it will start a background process of sequentially
copying all the data from the previous storage system to the new storage system.
Synchronization is achieved
After all of a volume's data has been copied, the data migration achieves
synchronization. After synchronization is achieved, all read requests are served
from the Spectrum Accelerate.
If source updating was set to Yes, Spectrum Accelerate will continue to write data
to both itself and the previous storage system until data migration settings are
deleted.
Deleting data migration
Data migration is stopped by using a delete command. It cannot be restarted.
Handling failures
Upon a communication error or the failure of the previous storage system,
Spectrum Accelerate stops serving I/O operations to hosts, including both read
and write requests.
If Spectrum Accelerate encounters a media error on the previous storage system
(meaning that it cannot read a block on the previous storage system), then
Spectrum Accelerate reflects this state on its own storage system (meaning that it
marks this same block as an error on its own storage system). The state of this
block indicates a media error even though the disk in the new storage system has
not failed.
Chapter 11. Event handling
Spectrum Accelerate monitors the health, the configuration changes, and the
activity of your storage systems, and generates system events.
These events are accumulated by the system and can help the user in the following
two ways:
v Users can view past events using various filters. This is useful for
troubleshooting and problem isolation.
v Users can configure the system to send one or more notifications, which are
triggered upon the occurrence of specific events. These notifications can be
filtered according to the event's severity and code. Notifications can be sent
through e-mail, SMS messages, or SNMP traps.
Event information
Events are created by various processes, including the following:
v Object creation or deletion, including volume, snapshot, map, host, and storage
pool
v Physical component events
v Network events
Each event contains the following information:
v A system-wide unique numeric identifier
v A code that identifies the type of the event
v Creation timestamp
v Severity
v Related system objects and components, such as volumes, disks, and modules
v Textual description
v Alert flag, where an event is classified as alerting by the event notification rules.
v Cleared flag, where alerting events can be either uncleared or cleared. This is
only relevant for alerting events.
Event information can be classified with one of the following severity levels:
Critical
Requires immediate attention
Major Requires attention soon
Minor Requires attention within the normal business working hours
Warning
Nonurgent attention is required to verify that there is no problem
Informational
Normal working procedure event
Viewing events
Spectrum Accelerate provides the following variety of criteria for displaying a list
of events:
v Before timestamp
v After timestamp
v Code
v Severity from a certain value and up
v Alerting events, meaning events that are sent repeatedly according to a snooze
timer
v Uncleared alerts
The number of displayed filtered events can be restricted.
Event notification rules
Spectrum Accelerate monitors the health, configuration changes, and activity of
your storage systems and sends notifications of system events as they occur.
Event notifications are sent according to the following rules:
Which events
The severity, event code, or both, of the events for which notification is
sent.
Where The destinations or destination groups to which notification is sent, such as
cellular phone numbers (SMS), e-mail addresses, and SNMP addresses.
Notifications are sent according to the following rules:
Destination
The destinations or destination groups to which a notification of an event
is sent.
Filter
A filter that specifies which events will trigger the sending of an event
notification. Notification can be filtered by event code, minimum severity
(from a certain severity and up), or both.
Alerting
To ensure that an event was indeed received, an event notification can be
sent repeatedly until it is cleared by a CLI command or the GUI. Such
events are called alerting events. Alerting events are events for which a
snooze time period is defined in minutes. This means that an alerting
event is resent repeatedly each snooze time interval until it is cleared. An
alerting event is uncleared when it is first triggered, and can be cleared by
the user. The cleared state does not imply that the problem has been
solved. It only implies that the event has been noted by the relevant
person who takes the responsibility for fixing the problem. There are two
schemes for repeating the notifications until the event is clear: snooze and
escalation.
Snooze
Events that match this rule send repeated notifications to the same
destinations at intervals specified by the snooze timer until they are
cleared.
Escalation
You can define an escalation rule and escalation timer, so that if events are
not cleared by the time that the timer expires, notifications are sent to the
predetermined destination. This enables the automatic sending of
notifications to a wider distribution list if the event has not been cleared.
Alerting events configuration limitations
The following limitations apply to the configuration of alerting rules:
v Rules cannot escalate to nonalerting rules, meaning to rules without escalation,
snooze, or both.
v Escalation time should not be defined as shorter than snooze time.
v Escalation rules must not create a loop (cyclic escalation) by escalating to
themselves or to another rule that escalates back to them.
v The configuration of alerting rules cannot be changed while there are still
uncleared alerting events.
Defining destinations
Event notifications can be sent to one or more destinations, meaning to a specific
SMS cell number, e-mail address, or SNMP address, or to a destination group
comprised of multiple destinations.
Each of the following destinations must be defined as described:
SMS destination
An SMS destination is defined by specifying a phone number. When defining a
destination, the prefix and phone number should be separated because some SMS
gateways require special handling of the prefix.
By default, all SMS gateways can be used. A specific SMS destination can be
limited to be sent through only a subset of the SMS gateways.
E-mail destination
An e-mail destination is defined by an e-mail address. By default, all SMTP
gateways are used. A specific destination can be limited to be sent through only a
subset of the SMTP gateways.
SNMP managers
An SNMP manager destination is specified by the IP address of the SNMP
manager that is available to receive SNMP messages.
Destination groups
A destination group is simply a list of destinations to which event notifications can
be sent. A destination group can be comprised of SMS cell numbers, e-mail
addresses, SNMP addresses, or any combination of the three. A destination group
is useful when the same list of notifications is used for multiple rules.
Defining gateways
Event notifications can be sent by SMS, e-mail, or SNMP manager. This step
defines the gateways that will be used to send e-mail or SMS.
E-mail (SMTP) gateways
Several e-mail gateways can be defined to enable notification of events by e-mail.
By default, the Spectrum Accelerate attempts to send each e-mail notification
through the first available gateway according to the order that you specify.
Subsequent gateways are only attempted if the first attempted gateway returns an
error. A specific e-mail destination can also be defined to use only specific
gateways.
All event notifications sent by e-mail specify a sender whose address can be
configured. This sender address must be a valid address for the following two
reasons:
v Many SMTP gateways require a valid sender address or they will not forward
the e-mail.
v The sender address is used as the destination for error messages generated by
the SMTP gateways, such as an incorrect e-mail address or full e-mail mailbox.
E-mail-to-SMS gateways
SMS messages can be sent to cell phones through one of a list of e-mail-to-SMS
gateways. One or more gateways can be defined for each SMS destination.
Each such e-mail-to-SMS gateway can have its own SMTP server, use the global
SMTP server list, or both.
When an event notification is sent, one of the SMS gateways is used according to
the defined order. The first gateway is used, and subsequent gateways are only
tried if the first attempted gateway returns an error.
Each SMS gateway has its own definitions of how to encode the SMS message in
the e-mail message.
Monitoring Spectrum Accelerate using SNMP traps
Spectrum Accelerate supports third-party SNMP-based monitoring tools.
Simple Network Management Protocol (SNMP)
SNMP is a set of functions for monitoring and managing network devices. It
includes a protocol, a database specification, and a Management Information Base
(MIB). The MIB is a set of data objects that can be monitored by a network
management system.
The SNMP protocol defines two terms, agent and manager. An SNMP agent is a
device that reports information to SNMP managers. An SNMP manager, in its turn,
collects information from SNMP agents. The information is sent in SNMP
notifications, also referred to as traps.
You can define Spectrum Accelerate as an SNMP agent that sends notifications to
the SNMP manager. If a predefined monitored event occurs, Spectrum Accelerate
initiates the sending of an SNMP trap without waiting for a request from the
SNMP manager. You
can also send SNMP get or walk commands to collect status information from
Spectrum Accelerate. To accomplish this task, you must use an SNMP manager
that supports this task and you need to import the XIV Storage System MIB into
that manager.
SNMP notifications
Six types of SNMP notifications are predefined in Spectrum Accelerate. Each type
corresponds to a specific severity:
v DESCRIPTION "An event notification" ::= { xivEventTrap 1 }
v DESCRIPTION "An informational event notification" ::= { xivEventTrap 2 }
v DESCRIPTION "A warning event notification" ::= { xivEventTrap 3 }
v DESCRIPTION "A minor event notification" ::= { xivEventTrap 4 }
v DESCRIPTION "A major event notification" ::= { xivEventTrap 5 }
v DESCRIPTION "A critical event notification" ::= { xivEventTrap 6 }
Management Information Base (MIB)
To display the system MIB file, issue the mib_get command.
In the Global Status category, MIB defines the following object IDs:
1.3.6.1.4.1.2021.77.1.1.1.1 xivMachineStatus
Shows if a disk rebuild or redistribution is occurring
1.3.6.1.4.1.2021.77.1.1.1.2 xivFailedDisks
The number of failed disks in the XIV
1.3.6.1.4.1.2021.77.1.1.1.3 xivUtilizationSoft
The percentage of total soft space that is allocated to pools
1.3.6.1.4.1.2021.77.1.1.1.4 xivUtilizationHard
The percentage of total hard space that is allocated to pools
1.3.6.1.4.1.2021.77.1.1.1.5 xivFreeSpaceSoft
The amount of soft space that is unallocated in GB
1.3.6.1.4.1.2021.77.1.1.1.6 xivFreeSpaceHard
The amount of hard space that is unallocated in GB
In the Interfaces category, MIB defines the following object IDs:
1.3.6.1.4.1.2021.77.1.1.2.1.1.2 xivIfIOPS
The number of IOPS being currently executed at the module
1.3.6.1.4.1.2021.77.1.1.2.1.1.3 xivIfStatus
The current status of the module
For SNMP notifications sent by Spectrum Accelerate, the MIB defines the following
object IDs in the Events category:
1.3.6.1.4.1.2021.77.1.3.1.1.1 xivEventIndex
A unique value for each event
1.3.6.1.4.1.2021.77.1.3.1.1.2 xivEventCode
The code of the event
1.3.6.1.4.1.2021.77.1.3.1.1.3 xivEventTime
The time of the event
1.3.6.1.4.1.2021.77.1.3.1.1.4 xivEventDescription
A description of the event
1.3.6.1.4.1.2021.77.1.3.1.1.5 xivEventSeverity
The severity of the event
1.3.6.1.4.1.2021.77.1.3.1.1.6 xivEventTroubleshooting
Troubleshooting information
In the Statistics category, MIB defines the following object IDs:
1.3.6.1.4.1.2021.77.1.4.1.1.2 xivStatisticsHostName
The name of the host that collects the statistics
1.3.6.1.4.1.2021.77.1.4.1.1.3 xivStatisticsHostIOPS
The number of input/output operations performed by the statistics host per second
In the Statistics Volume Table category, MIB defines the following object IDs:
1.3.6.1.4.1.2021.77.1.4.2.1.2 xivStatisticsVolumeName
The name of the statistics volume
1.3.6.1.4.1.2021.77.1.4.2.1.3 xivStatisticsVolumeIOPS
The number of IOPS per volume
1.3.6.1.4.1.2021.77.1.4.2.1.4 xivStatisticsVolumeBW
The bandwidth (BW) per volume
1.3.6.1.4.1.2021.77.1.4.2.1.5 xivStatisticsVolumeLatency
The volume latency
Spectrum Accelerate SNMP setup
To use SNMP monitoring with Spectrum Accelerate, define the standard SNMP
parameters, which are identical for all XIV machines, in the Settings > SNMP
tab of the XIV GUI. Then, in the Settings > Misc tab, define the only
attribute that is unique to Spectrum Accelerate: SDS = Yes:
Figure 18. XIV GUI – The Misc tab in XIV Settings
Chapter 12. Access control
Spectrum Accelerate features role-based authentication either natively or by using
LDAP-based authentication.
The system provides:
Role-based access control
Built-in roles for access flexibility and a high level of security according to
predefined roles and associated tasks.
Two methods of access authentication
Spectrum Accelerate supports the following methods of authenticating
users:
Native authentication
This is the default mode for authentication of users and groups on
Spectrum Accelerate. In this mode, users and groups are
authenticated against a database on the system.
LDAP When enabled, the system authenticates the users against an LDAP
repository.
User roles and permission levels
User roles specify which operations are allowed for each user and the various
applicable limits.
Note: None of these system-defined users have access to data.
Table 3. Available user roles

Read only
Permissions and limits: Read only users can only list and view system
information.
Typical usage: The system operator, typically, but not exclusively, is
responsible for monitoring system status and reporting and logging all
messages.

Application administrator
Permissions and limits: Only application administrators carry out the
following tasks:
v Creating snapshots of assigned volumes
v Mapping their own snapshot to an assigned host
v Deleting their own snapshot
Typical usage: Application administrators typically manage applications that
run on a particular server. Application administrators can be defined as
limited to specific volumes on the server. Typical application administrator
functions:
v Managing backup environments:
– Creating a snapshot for backups
– Mapping a snapshot to a backup server
– Deleting a snapshot after backup is complete
– Updating a snapshot for new content within a volume
v Managing software testing environments:
– Creating an application instance
– Testing the new application instance

Storage administrator
Permissions and limits: The storage administrator has permission to all
functions, except:
v Maintenance of physical components or changing the status of physical
components
v Changing the passwords of other users (only the predefined administrator,
named admin, can change the passwords of other users)
Typical usage: Storage administrators are responsible for all administration
functions.

Operations administrator
Permissions and limits: The operations administrator only has permission to
perform maintenance operations.
Typical usage: Operations administrators are responsible for all maintenance
functions.

Technician
Permissions and limits: The technician is limited to the following tasks:
v Physical system maintenance
v Phasing components in or out of service
Typical usage: Technicians maintain the physical components of the system.
Only one predefined technician is specified per system.

Notes:
1. All users can view the status of physical components; however, only
technicians can modify the status of components.
2. User names are case-sensitive.
3. Passwords are case-sensitive.
Predefined users
There are several predefined users configured on Spectrum Accelerate.
These users cannot be deleted.
Storage administrator
This user id provides the highest level of customer access to the system.
Predefined user name: admin
Default password: adminadmin. The password can be changed, and changing it
is strongly recommended.
Technician
This user id is used only by Spectrum Accelerate service personnel. It has
full system access. It can be enabled or disabled using the
xiv_support_enable or xiv_support_disable command, respectively.
Predefined user name: technician
Default password: Password is predefined and is used only by the
Spectrum Accelerate technicians.
XIV development
This user id is used only by Spectrum Accelerate service personnel. It has
full system access. It can be enabled or disabled using the
xiv_support_enable or xiv_support_disable command, respectively.
Predefined user name: xiv_developer
Default password: Password is predefined and is used only by the
Spectrum Accelerate technicians.
XIV maintenance
This user id is used only by Spectrum Accelerate service personnel. It has
full system access. It can be enabled or disabled using the
xiv_support_enable or xiv_support_disable command, respectively.
Predefined user name: xiv_maintenance
Default password: Password is predefined and is used only by the
Spectrum Accelerate technicians.
XIV host profiler
This user id is used only by the Host Attachment Kit, if enabled. It has very
limited system access. It can be disabled using the host_profiler_disable
command.
Predefined user name: xiv_hostprofiler
HSA client
This user id is used only by the Host Side Accelerator service. It has very
limited system access.
Predefined user name: hsa_client
Note: Predefined users are always authenticated by Spectrum Accelerate, even if
LDAP authentication has been activated for them.
Application administrator
The primary task of the application administrator is to create and manage
snapshots.
Application administrators manage snapshots of a specific set of volumes. The user
group to which an application administrator belongs determines the set of
volumes that the application administrator is allowed to manage.
User groups
A user group is a group of application administrators who share the same set of
snapshot creation permissions.
This enables a simple update of the permissions of all the users in the user group
by a single command. The permissions are enforced by associating the user groups
with hosts or clusters. User groups have the following characteristics:
v Only users who are defined as application administrators can be assigned to a
group.
v A user can belong to only a single user group.
v A user group can contain up to eight users.
v If a user group is defined with access_all="yes", application administrators who
are members of that group can manage all volumes on the system.
Storage administrators create the user groups and control the various permissions
of the application administrators.
User group and host associations
Hosts and clusters can be associated with only a single user group.
When a user belongs to a user group that is associated with a host, that user
can manage snapshots of the volumes mapped to that host. User and host
associations have the following properties:
v User groups can be associated with both hosts and clusters. This enables limiting
application administrator access to specific volumes.
v A host that is part of a cluster cannot also be associated with a user group.
v When a host is added to a cluster, the associations of that host are broken.
Limitations on the management of volumes mapped to the host are controlled
by the association of the cluster.
v When a host is removed from a cluster, the associations of the cluster
become the associations of the host. This enables continued mapping
operations, so that all scripts continue to work.
Listing hosts
The command host_list lists all groups associated with the specified host,
showing information about the following fields:
Range All hosts, specific host
Default
All hosts
Listing clusters
The command cluster_list lists all clusters that are associated with a user
group, showing information about the following fields:
Range All clusters, specific cluster
Default
All clusters
Command conditions
The application administrator has access to only a limited set of XCLI
commands.
The application administrator can perform specific operations through a set
of commands. Table 4 lists the various commands that application
administrators can run, according to association definitions and applicable
conditions.
If the application administrator is a member of a group that is defined with
access_all=yes, then it is possible to perform the command on all volumes.
Table 4. Application administrator commands

cg_snapshot_create
This command is accessible for application administrators if the following
condition is met:
v At least one volume in the consistency group is mapped to a host or cluster
that is associated with an application administrator user group.

map_vol, unmap_vol
Application administrators can use these commands to map snapshots of
volumes. The following condition must be met:
v The master volume is mapped to a host or cluster that is associated with a
user group that contains the user.

vol_lock, snapshot_duplicate, snapshot_delete, snapshot_change_priority
These commands are accessible for application administrators if the following
condition is met:
v The master volume is mapped to a host or cluster that is associated with a
user group that contains the user.

snap_group_lock, snap_group_duplicate, snap_group_delete,
snap_group_change_priority
These commands are accessible for application administrators if the following
conditions are both met:
1. At least one volume in the consistency group is mapped to a host or
cluster that is associated with an application administrator user group.
2. The master volume is mapped to a host or cluster that is associated with a
user group that contains the user.

snapshot_create
This command is accessible for application administrators if the following
conditions are met:
1. The volume is mapped to a host or cluster that is associated with a user
group that contains the user.
2. If the command overwrites a snapshot, the overwritten snapshot must have
been previously created by an application administrator.
Authentication methods
Spectrum Accelerate offers several methods for authentication.
The following authentication methods are available:
Native (default)
The user is authenticated by Spectrum Accelerate based on the submitted
username and password, which are compared to user credentials defined
and stored on the Spectrum Accelerate system.
The user must be associated with a Spectrum Accelerate user role that
specifies pertinent system access rights.
This mode is set by default.
LDAP
The user is authenticated by an LDAP directory based on the submitted
username and password, which are used to connect with the LDAP server.
Predefined users authentication
The administrator and technician roles are always authenticated by
Spectrum Accelerate, regardless of the authentication mode. They are never
authenticated by LDAP.
Native authentication
This is the default mode for authentication of users and groups on the Spectrum
Accelerate.
In this mode, users and groups are authenticated against a database on the system.
User configuration
Configuring users requires defining the following options:
Role
Specifies the role category that each user has when operating the system.
The role category is mandatory. See “User roles and permission levels” for
explanations of each role.
Name Specifies the name of each user allowed to access the system.
Password
All user-definable passwords are case sensitive.
Passwords are mandatory, must be 6 to 12 characters long, and can use
uppercase or lowercase letters, as well as the following characters: ~!@#$%^&*()_+={}|:;<>?,./\[] .
E-mail E-mail is used to notify specific users about events through e-mail
messages. E-mail addresses must follow standard addressing procedures.
E-mail is optional. Range: Any legal e-mail address.
Phone and area code
Phone numbers are used to send SMS messages to notify specific users
about events. Phone numbers and area codes can contain a maximum of 63
digits, hyphens (-), and periods (.). Range: Any legal telephone number. The
default is N/A.
LDAP authentication
Lightweight Directory Access Protocol (LDAP) support enables Spectrum
Accelerate to authenticate users through an LDAP repository.
When LDAP authentication is enabled, the username and password of a user
accessing Spectrum Accelerate (through CLI or GUI) are used by the IBM XIV
system to log in to a specified LDAP directory. Upon a successful login, Spectrum
Accelerate retrieves the user's IBM XIV group membership data stored in the
LDAP directory, and uses that information to associate the user with an IBM XIV
administrative role.
The IBM XIV group membership data is stored in a customer defined,
pre-configured attribute on the LDAP directory. This attribute contains string
values which are associated with IBM XIV administrative roles. These values might
be LDAP Group Names, but this is not required by Spectrum Accelerate. The
values the attribute contains, and their association with IBM XIV administrative
roles, are also defined by the customer.
Supported domains
Spectrum Accelerate supports LDAP authentication of the following directories:
v Microsoft Active Directory
v SUN directory
v Open LDAP
LDAP multiple-domain implementation
In order to support multiple LDAP servers that span different domains, and in
order to use the memberOf property, Spectrum Accelerate allows more than one
role for the Storage Administrator and the Read-Only roles.
The predefined XIV administrative IDs “admin” and “technician” are always
authenticated by the IBM XIV Storage System, whether or not LDAP authentication
is enabled.
Responsibilities division between the LDAP directory and the
storage system
LDAP and the storage system divide responsibilities and maintained objects.
Following are responsibilities and data maintained by the IBM XIV system and the
LDAP directory:
LDAP directory
v Responsibilities - user authentication for IBM XIV users, and assignment
of IBM XIV related group in LDAP.
v Maintains - Users, username, password, designated IBM XIV related
LDAP groups associated with Spectrum Accelerate.
Spectrum Accelerate
v Responsibilities - Determination of appropriate user role by mapping
LDAP group to an IBM XIV role, and enforcement of IBM XIV user
system access.
v Maintains - mapping of LDAP group to IBM XIV role.
LDAP authentication process
The LDAP authentication process consists of several key steps.
In order to use LDAP authentication, carry out the following major steps:
1. Define an LDAP server and system parameters
2. Define an XIV user on this LDAP server. The storage system uses this user
when searching for authenticated users. This user is later referred to as the
system's configured service account.
3. Identify an LDAP attribute in which to store values that are associated with
IBM XIV user roles
4. Define a mapping between values that are stored in the LDAP attribute and
IBM XIV user roles
5. Enable LDAP authentication
Once LDAP is configured and enabled, users are granted login credentials that
are authenticated by the LDAP server, rather than by the Spectrum Accelerate
system itself.
Testing the authentication
The storage administrator can test the LDAP configuration before its activation by
issuing the ldap_test command (see “Access control commands” on page 84).
LDAP configuration scenario
The LDAP configuration scenario allows the storage administrator to enable LDAP
authentication.
Following is an overview of an LDAP configuration scenario:
1. Storage administrator defines the LDAP server(s) to the IBM XIV storage
system.
2. Storage administrator defines the LDAP base DN, communication, and timeout
parameters to the IBM XIV storage system.
3. Storage administrator defines the LDAP XIV group attribute to be used for
storing associations between LDAP groups and XIV storage administrator roles.
These are the storage administrator and readonly roles using the ldap_config_set
command.
4. Storage administrator defines the mapping between LDAP group name and
IBM XIV application administrator roles using the user_group_create
command.
5. Storage administrator enables LDAP authentication.
LDAP login scenario
Log in to LDAP from within Spectrum Accelerate.
The LDAP-authenticated login scenario takes the following course:
Initiation
If initiated from the GUI
1. User launches the Spectrum Accelerate GUI.
2. Spectrum Accelerate presents the user with a login screen.
3. User logs in submitting the required user credentials (e.g.,
username and password).
If initiated from the CLI
1. User logs into the CLI with user credentials (username and
password).
Authentication
1. Spectrum Accelerate attempts to log into LDAP directory using the
user-submitted credentials.
2. If login fails:
v Spectrum Accelerate attempts to log into the next LDAP server.
v If login fails again on all servers, a corresponding error message is
returned to the user.
3. If login succeeds, Spectrum Accelerate will determine the IBM XIV role
corresponding to the logged-in user, by retrieving the user-related
attributes from the LDAP directory. These attributes were previously
specified by the IBM XIV-to-LDAP mapping.
v Spectrum Accelerate inspects whether the user's role is allowed to
issue the CLI command.
v If the command is permitted for the user's role, it is issued against
the system, and any pertinent response is presented to the user.
v If the command is not permitted for the user's role, Spectrum
Accelerate sends an error message to the user.
Supported user name characters
The login mechanism supports all characters, including @, * and \ to allow names
of the following format:
v UPN: name@domain
v NT domain: domain\name
Searching within indirectly-associated groups:
In addition to the users search, Spectrum Accelerate allows for searching
indirectly-associated Active Directory groups.
Searching for indirectly-associated Active Directory groups is done separately
from the user search that was described above. This search of
indirectly-associated groups utilizes the group attribute memberOf and follows
the flow described below.
Note: This search does not apply to SUN directory, as you get all the
indirectly-associated groups on the user's validation query.
The Spectrum Accelerate search for the group membership starts with the groups
the user is directly associated with and spans to other groups. The memberof
attribute is searched for within each of these groups. The search goes on until one
of the following stop criteria is met:
Stop when found
v A group membership that matches one of the configured LDAP rules is
found
v The search command is set to stop searching upon finding a group.
Don't stop when found
v A group membership that matches one of the configured LDAP rules is
found
v The search command does not stop once a group membership is found.
It is set to continue onto the next group.
v The search command is set to stop upon reaching a search limit (see
Reaching a limit below).
Multiple findings
v More than a single group membership that matches one of the
configured LDAP rules was found.
– Every match will be counted once even if it was found several times
(arrived at it from several branches).
– The search doesn't avoid checking groups that were previously
checked from other branches.
Reaching a limit
One of the following limits is met (the limits are set as part of the search
command):
v The search reached the search depth limit.
This search attribute limits the span of the search operation within the
groups tree.
v The search reached the maximum number of queries limit.
User validation
Users are validated against LDAP.
During the login, the system validates the user as follows:
Figure 19. The way the system validates users through issuing LDAP searches
Issuing a user search
The system issues an LDAP search for the user's entered username.
The request is submitted on behalf of the system's configured service
account and the search is conducted for the LDAP server, base DN and
reference attribute as specified in the XIV LDAP configuration.
The base DN specified in the XIV LDAP configuration serves as a reference
starting point for the search – instructing LDAP to locate the value
submitted (the username) in the attribute specified (whose name is specified
in user_name_attrib).
If a single user is found - issuing an XIV role search
The system issues a second search request, this time submitted on
behalf of the user (with the user's credentials), and will search for
XIV roles associated with the user, based on XIV LDAP
configuration settings (as specified in parameter xiv_group_attrib).
If a single XIV role is found - permission is granted
The system inspects the rights associated with that role and grants
login to the user. The user's permissions correspond to the role
associated by XIV, based on the XIV LDAP configuration.
If no XIV role is found for the user, or more than one role was found
If the response by LDAP indicates that the user is either not
associated with an XIV role (no user role name is found in the
referenced LDAP attribute for the user), or is associated with more
than a single role (multiple role names are found), the login fails
and a corresponding message is returned to the user.
If no such user was found, or more than one user was found
If LDAP returns no records (indicating that no user with the username
was found) or more than a single record (indicating that the username
submitted is not unique), the login request fails and a corresponding
message is returned to the user.
Service account for LDAP queries
Spectrum Accelerate carries out the LDAP search through a service account. This
service account is established by using the ldap_config_set command (see
“Access control commands” on page 84).
Switching between LDAP and native authentication modes
This section describes system behavior when switching between LDAP
authentication and native authentication.
After changing authentication modes from native to LDAP
The system will start authenticating users other than "admin" or "technician"
against the LDAP server, rather than the local Spectrum Accelerate storage system
user database. However, the local user account data is not deleted.
v Users without an account on the LDAP server are not granted access to the
Spectrum Accelerate system.
v Users with an LDAP account who are not associated with a Spectrum Accelerate
role on the LDAP directory are not granted access to the Spectrum Accelerate
system.
v Users with an LDAP account who are associated with a Spectrum Accelerate role
on the LDAP directory are granted access to the Spectrum Accelerate system if
the following conditions are met:
– The Spectrum Accelerate role on the LDAP server is mapped to a valid
Spectrum Accelerate role.
– The user is associated only to one Spectrum Accelerate role on the LDAP
server.
The following commands related to user account management will be disabled.
These operations must be performed on the LDAP directory.
v user_define
v user_rename
v user_update
v user_group_add_user
v user_group_remove_user
Note: When deleting a user group, even if the user group LDAP role does not
contain any users, the following completion code might appear:
>> user_group_delete user_group=Appadmin
command 0:
administrator:
command:
code = "ARE_YOU_SURE_YOU_WANT_TO_DELETE_LDAP_USER_GROUP"
status = "3"
status_str = "One or more LDAP users might be associated to user group. Are you sure you want
warning = "yes"
aserver = "DELIVERY_SUCCESSFUL"
This might occur if users were associated with the specified user_group prior to
LDAP mode activation.
After changing authentication modes from LDAP to native
The system starts authenticating users against the locally defined user database.
Users and groups that were defined prior to switching from native to LDAP
authentication are re-enabled. The Spectrum Accelerate system allows local
management of users and groups.
The following commands related to user account management are enabled:
v user_define
v user_rename
v user_update
v user_group_add_user
v user_group_remove_user
Users must be defined locally and be associated with Spectrum Accelerate user
groups in order to gain access to the system.
Access control commands
The following CLI commands are available for managing role-based access control
(RBAC). For a detailed explanation of these commands, see the chapter detailing
access control commands in the relevant (for the release you are using) Spectrum
Accelerate Command-Line Interface (CLI) Reference Guide.
User-related commands
You can use the following user-related commands to manage role-based access
control:
user_define
Defines a new user.
user_update
Updates the attributes of the user.
user_list
Lists all users, or a specific user.
user_rename
Renames the user.
user_delete
Deletes the user.
84
IBM Spectrum Accelerate: Product Overview
User group-related commands
You can also use the following user group-related commands to manage role-based access control (a usage sketch follows the list):
user_group_create
Creates a user group.
user_group_update
v Assigns a Lightweight Directory Access Protocol (LDAP) role to the user group.
v Updates the user group name.
user_group_add_user
Adds a user to a user group.
user_group_remove_user
Removes a user from a user group.
user_group_list
Lists all user groups along with their users.
user_group_rename
Renames a user group.
user_group_delete
Deletes a user group.
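As a brief sketch of these commands, a user group might be created and mapped to an LDAP role as follows. The ldap_role parameter name and the distinguished name shown are assumptions for illustration.
>> user_group_create user_group=Appadmin
>> user_group_update user_group=Appadmin ldap_role="CN=Appadmin,OU=Groups,DC=example,DC=com"
>> user_group_list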
Role-based access control commands
The following access-related commands can be used to manage role-based access control (see the sketch after the list):
access_define
Associates a user group with a host or a cluster.
access_delete
Dissociates a user group from the host or cluster with which it is associated.
access_list
Lists access associations.
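For example, an application administrator user group might be granted access to a host as follows (the object names are hypothetical):
>> access_define user_group=Appadmin host=host1
>> access_list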
Configuration-related commands
You can also use the following LDAP server configuration-related commands; a combined usage sketch follows the list:
ldap_config_set
Sets up the LDAP configuration parameters.
ldap_config_get
Lists the configuration attributes of an LDAP server that works with the
storage system.
ldap_mode_set
Enables or disables LDAP authentication for the storage system.
ldap_mode_get
Returns the authentication mode of the storage system (active/inactive).
ldap_user_test
Authenticates the user's credentials against the LDAP server.
ldap_test
Validates the LDAP settings before activation.
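Taken together, a cautious switch to LDAP authentication might follow this sequence: validate the configured settings, activate LDAP mode, then confirm the mode. The mode parameter value is an assumption; only the command names themselves are taken from this document.
>> ldap_test
>> ldap_mode_set mode=active
>> ldap_mode_get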
Non-LDAP commands
The following commands are available in non-LDAP mode and are not available in
LDAP mode:
user_define
Defines a new user on the Spectrum Accelerate system.
user_update
Updates the details of a Spectrum Accelerate user.
user_rename
Renames a Spectrum Accelerate user.
user_group_add_user
Adds a user to a Spectrum Accelerate application administrator user group.
user_group_remove_user
Removes a user from a Spectrum Accelerate application administrator user group.
Chapter 13. Multi-Tenancy
Spectrum Accelerate allows allocation of storage resources to several independent
administrators, assuring that one administrator cannot access resources associated
with another administrator.
Multi-tenancy extends the Spectrum Accelerate approach to role-based access control. In addition to associating the user with a predefined set of operations and a scope (the applications on which an operation is allowed), multi-tenancy makes it possible to determine which operations are allowed, and where they are allowed.
Multi-tenancy principles
The main idea of multi-tenancy is to allow a Spectrum Accelerate owner to allocate storage resources to several independent administrators, with the assurance that one administrator cannot access resources associated with another administrator.
This resource allocation is best described as a partitioning of the system's resources
to separate administrative domains. A domain is a subset, or partition, of the
system's resources. It is a named object to which users, pools, hosts/clusters,
targets, etc. may be associated. The domain restricts the resources a user can
manage to those associated with the domain.
A domain maintains the user relationships that exist at the Spectrum Accelerate system level (when multi-tenancy is inactive).
A domain administrator is a user who is associated with a domain. The domain
administrator is restricted to performing operations on objects associated with a
specific domain.
The following access rights and restrictions apply to domain administrators:
v A user is created and assigned a role (for example: storage administrator,
application administrator, read-only).
v When assigned to a domain, the user retains the given role, limited to the scope of the domain.
v Access to objects in a domain is restricted to the intersection of the defined user role and the specified domain access.
v By default, domain administrators cannot access objects that are not associated
with their domains.
Multi-tenancy offers the following benefits:
Partitioning
Spectrum Accelerate resources are partitioned to separate domains. The
domains are assigned to different tenants, and each tenant administrator gets permissions for one or more specific domains, to perform operations only within the boundaries of the associated domain(s).
Self-sufficiency
The domain administrator has a full set of permissions needed for
managing all of the domain resources.
© Copyright IBM Corp. 2016
87
Isolation
There is no visibility between tenants. The domain administrator is not
informed of resources outside the domain. These resources are not
displayed on lists, nor are their relevant events or alerts displayed.
User-domain association
A user can have a domain administrator role on more than one domain.
Users other than the domain administrator
Storage, security, and application administrators, as well as read-only
users, retain their right to perform the same operations that they have in a
non-domain-based environment. They can access the same objects under
the same restrictions.
Global administrator
The global administrator is not associated with any specific
domain, and determines the operations that can be performed by
the domain administrator in a domain.
This is the only user who can create, edit, and delete domains, and associate resources with a domain.
An open or closed policy can be defined so that a global
administrator may, or may not, be able to see inside a domain.
Intervention of a global administrator, who has permissions for the global resources of the system, is needed only for:
v Initial creation of the domain and assigning a domain
administrator
v Resolving hardware issues
User that is not associated with any domain
A user that is not associated with any domain has access rights to
all of the entities that are not uniquely associated with a domain.
Multi-tenancy concept diagram
The multi-tenancy concept can be summarized as follows:
v The domain is an isolated set of storage resources.
v The domain administrator has access only to the specified domains.
v The global administrator can manage domains and assign administrators to
domains.
v Private objects are assigned to domains.
v The domain maintains its connectivity to global objects, such as users, hosts,
clusters, and targets. Hosts (and clusters) can serve several domains. However,
hosts created by a domain administrator are assigned only to that domain.
Working with multi-tenancy
This section provides a general description about working with multi-tenancy and
its attributes.
The domain administrator
The domain administrator has the following attributes:
v Before being associated with a domain, the future domain administrator (at that point, a storage administrator) has access to all non-domain entities, and no access to domain-specific entities.
v When the storage administrator becomes a domain administrator, all access rights to non-domain entities are lost.
v The domain administrator can map volumes to hosts as long as both the volume and the host belong to the domain (see the sketch after this list).
v The domain administrator can copy and move volumes across pools as
long as the pools belong to domains administered by the domain
administrator.
v Domain administrators can manage snapshots for all volumes in their
domains.
v Domain administrators can manage consistency and snapshot groups for all pools in their domains. Moving consistency groups across pools is allowed as long as both the source and destination pools are in the administrator's domains.
v Domain administrators can create and manage pools under the storage
constraint associated with their domain.
v Although they are not configurable by the domain administrator, the hardware list and events are available to the domain administrator for viewing, within the scope of the domain.
v Commands that operate on objects not associated with a domain are not
accessible by the domain administrator.
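As an illustration of the mapping rule noted in the list above, a domain administrator whose domain contains both the volume and the host might run the following (object names are hypothetical):
>> map_vol host=host1 vol=vol1 lun=1
The same command would be rejected for a domain administrator if either vol1 or host1 were outside the administrator's domain.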
Domain
The domain has the following attributes:
v Capacity - the domain is allocated a capacity that is further allocated among its pools (see the sketch after this list). The domain provides an additional container in the hierarchy of what was once system-pool-volume, and is now system-domain-pool-volume:
– The unallocated capacity of the domain is reserved for the domain's pools.
– The sum of the hard capacity of the system's domains cannot exceed the total
hard capacity of the Spectrum Accelerate system.
– The sum of the soft capacity of the system's domains cannot exceed the total
soft capacity of the Spectrum Accelerate system.
v Maximum number of volumes per domain - the maximum number of volumes per
system is divided among the domains in a way that one domain cannot
consume all of the system resources at the expense of the other domains.
v Maximum number of pools per domain - the maximum number of pools per system
is divided among the domains in a way that one domain cannot consume all of
the system resources at the expense of the other domains.
v Maximum number of mirrors per domain - the maximum number of mirrors per
system is divided among the domains.
v Maximum number of consistency groups per domain - the maximum number of
consistency groups per system is divided among the domains.
v Performance class - the maximum aggregated bandwidth and IOPS are calculated for all volumes of the domain, rather than at the system level.
v The domain has a string that identifies it for LDAP authentication.
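For example, on a system with 100 TB of hard capacity, the hard capacities of all domains together cannot exceed 100 TB. A domain creation with explicit caps might look like the following sketch; the parameter names and units are assumptions for illustration, so check domain_create in the CLI Reference Guide.
>> domain_create domain=tenant_a hard_capacity=10240 soft_capacity=15360 max_pools=4 max_volumes=128
>> domain_list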
Mirroring in a multi-tenancy environment
v The target, target connectivity and interval schedule are defined, edited and
deleted by the storage administrator.
v The domain administrator can create, activate, and change the properties of a mirroring relation that is based on the previously defined target and target connectivity associated with the domain (see the sketch after this list).
v The remote target does not have to belong to a domain.
v Whenever the remote target belongs to a domain, the system checks that the remote target, pool, and volume (if specified upon mirror creation) all belong to the same domain.
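For example, after the storage administrator has defined a target that is associated with the domain, the domain administrator might create and activate a mirror as follows. The volume, pool, and target names are hypothetical, and the parameters shown should be verified against mirror_create in the CLI Reference Guide.
>> mirror_create vol=vol1 slave_vol=vol1_mirror create_slave=yes remote_pool=pool_a target=remote_sys
>> mirror_activate vol=vol1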
Chapter 14. Non-disruptive code load
Non-disruptive code load (hot upgrade) enables Spectrum Accelerate to upgrade its
software from a current version to a newer version without disrupting application
service.
The upgrade process runs on all modules in parallel and is designed to be quick enough that application service on the hosts is not disrupted. The upgrade requires that neither data migration nor rebuild processes are running, and that all internal network paths are active.
During the non-disruptive code load process there is a point in time called the upgrade point-of-no-return, before which the process can still be aborted, either automatically by the system or manually through a dedicated CLI command. After that point is crossed, the non-disruptive code load process is not reversible.
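Conceptually, operator interaction during the upgrade is limited to querying the status and, before the point-of-no-return, aborting manually. The command names in this sketch are placeholders, not confirmed Spectrum Accelerate commands; see the CLI Reference Guide for the actual upgrade commands.
>> upgrade_get_status        (placeholder name: query the upgrade state)
>> upgrade_abort_ongoing     (placeholder name: abort before the point-of-no-return)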
The following are notable characteristics of the non-disruptive code load:
Duration of the upgrade process
The overall process of downloading new code to the storage system and moving to the new code is done online, without disruption to applications and hosts.
The duration of the upgrade process is affected by the following factors:
v The upgrade process requires that all I/Os are stopped. If there is heavy I/O activity in the system, or if there are slow disks, the system might not be able to stop the I/Os fast enough; in that case, it restarts them and tries again after a short while, allowing for several retries.
v The upgrade process installs a valid version of the software and then preserves its local configuration. This step might take a considerable amount of time, depending on the extent of the changes in the configuration structure between versions.
Prerequisites and constraints
v The process cannot run if a data migration process or a rebuild process
is active. An attempt to start the upgrade process when either a data
migration or a rebuild process is active will fail.
v Generally, everything that happens after the point-of-no-return is treated
as if it happened after the upgrade is over.
v As long as the overall hot upgrade is in progress (up to several minutes), no management operations are allowed (except for status queries), and no events are processed.
v Prior to the point-of-no-return, a manual abort of the upgrade is
available.
Effect on mirroring
Mirrors are automatically deactivated before the upgrade, and reactivated
after it is over.
Effect on management operations
During the non-disruptive code load process, it is possible to query the system about the upgrade status, and the process can also be aborted manually before the point-of-no-return. If a failure occurs before this point, the process is aborted automatically.
Handling module or disk failure during the upgrade
If the failure occurs before the point-of-no-return, it will abort the upgrade.
If it happens after that point, the failure is treated as if it happened after
the upgrade is over.
Handling power failure during the upgrade
Power is monitored while the system prepares for the upgrade (before the point-of-no-return). If a power failure is detected during this time, the upgrade is aborted and the power failure is handled by the old version of the software.
Glossary
The following is an alphabetical list of terms and abbreviations that are used
throughout this product overview.
Active directory
Microsoft Active Directory (AD) provides directory (lookup), DNS and
authentication services.
Alerting event
An event that triggers recurring event notifications until it is cleared.
API
See Application program interface (API).
Application program interface (API)
The interface through which the application accesses the operating system
and the other services.
Authorization level
The authorization level determines the permitted access level to the
various functions of the GUI:
Read only
Only viewing is allowed.
Full
Enables access to all the configuration and control functions,
including shutdown of the system. This level requires a password.
Auto delete priority
As the storage capacity reaches its limits, snapshots are automatically
deleted to make more space. The deletion takes place according to the
value set for each snapshot, as follows:
1 - last to be deleted
4 - first to be deleted
Each snapshot is given a default auto delete priority of 1 at creation.
Clearing events
The process of stopping the recurring event notification of alerting events.
CLI
See Command line interface (CLI)
Command line interface (CLI)
The nongraphical user interface that is used to interact with the system through a set of commands and functions. See also XCLI.
Completion code
The returned message sent as a result of running CLI commands.
Consistency group
A cluster of specific volumes that can all be snapshotted, mirrored and
administered simultaneously as a group. A volume can only be associated
with a single consistency group.
The volumes within a consistency group are grouped into a single volume
set. The volume set can be snapshotted into multiple snapshot sets under
the specific consistency group. See also Snapshot set, Volume set.
Coupling
The two peers (volumes or consistency groups) between which a mirroring
relationship was set.
Data module
A module dedicated to data storage. A fully-configured rack contains 9
dedicated data modules, each with 12 disks.
Destination
See Event destination.
Escalation
A process in which event notifications are sent to a wider list of event
destinations because the event was not cleared within a certain time.
Event destination
An address for sending event notifications.
Event notification rule
A rule that determines which users are to be notified, for which events and
by what means.
Event notification
The process of notifying a user about an event.
Event A user or system activity that is logged (with an appropriate message).
Fabric The hardware that connects workstations and servers to storage devices in
a SAN. The SAN fabric enables any-server-to-any-storage device
connectivity through the use of fibre-channel switching technology.
Functional area
One of the high level groupings of icons (functional modules) of the
left-hand pane of the GUI screen. For example: Monitor, Configuration or
Volume management. See Functional module.
Functional module
One of the icons of a functional area, on the left-hand pane of the GUI
screen. For example, System (under Monitor) or Hosts and LUNs (under
Configuration). See Functional area.
Graphical user interface (GUI)
On-screen user interface supported by a mouse and a keyboard.
H/W
Hardware.
HBA
Host bus adapter.
Host interface module
An interface data module that serves external host requests, in addition to storing data. A fully-configured rack has 6 interface data modules.
Host
A port name of a server that can connect to the system. The system supports iSCSI hosts.
I/O
Input/output.
Image snapshot
A snapshot that has never been unlocked. It is the exact image of the
master volume it was copied from, at the time of its creation. See also
snapshot.
Internet Protocol
Specifies the format of packets (also called datagrams), and their
addressing schemes. See also Transmission Control Protocol (TCP).
IOPS
Input/output operations per second.
IP
See Internet Protocol.
iSCSI Internet SCSI. An IP-based standard for linking data storage devices over a
network and transferring data by carrying SCSI commands over IP
networks.
Latency
Amount of time delay between the moment an operation is issued, and the
moment it is committed.
LDAP Lightweight Directory Access Protocol.
LDAP attribute
An attribute defined in an LDAP directory data model.
LDAP authentication
A method for authenticating users by validating the user's submitted
credentials against data stored on an LDAP directory.
LDAP directory
A hierarchical database stored on an LDAP server and accessed through
LDAP calls.
LDAP server
A server that provides directory services through LDAP.
LDAP status
The status of an LDAP server.
Load balancing
Even distribution of load across all components of the system.
Locking
Setting a volume (or snapshot) as unwritable (read-only).
LUN map
A table showing the mappings of the volumes to the LUNs.
LUN
Logical unit number. Exports a system's volume to a registered host.
Master volume
A volume that has snapshots is called the master volume of its snapshots.
MIB
Management information base. A database of objects that can be monitored
by a network management system. SNMP managers use standardized MIB
formats to monitor SNMP agents.
Microsoft Active directory
See Active Directory
Mirror peer
A peer (volume or consistency group) that is designated to be a replica of a
specified source peer data.
Mirroring
See Remote mirroring.
Modified State
A snapshot state. A snapshot in modified state can never be used for
restoring its master volume.
Multipathing
Enables host interface modules to directly access any volume.
Peer
Denotes a constituent side of a coupling. Whenever a coupling is defined,
a designation is specified for each peer - one peer is designated primary
and the other is designated secondary.
Pool
See Storage pool.
Primary peer
A peer whose data is mirrored for backup on a remote storage system.
Rack
The cabinet that stores all of the hardware components of the system.
Remote mirroring
The process of replicating the content of a source peer (volume or
consistency group) to a designated mirror peer.
Remote target connectivity
A definition of connectivity between a port set of a remote target and a
module on the local storage system.
Remote target
A storage system on a remote site, used for mirroring, data migration, and so on.
Role
Denotes the actual role that the peer is fulfilling as a result of a specific
condition, either a master or a slave.
Rule
See Event notification rule.
SAN
Storage area network.
SCSI
Small computer system interface.
Secondary peer
A peer that serves as a backup of a primary peer.
SMS gateway
An external server that is used to send SMSs.
SMTP gateway
An external host that is used to relay e-mail messages through the SMTP
protocol.
Snapshot set
The resulting set of synchronized snapshots of a volume set in a
consistency group. See also Consistency group, Volume set.
Snapshot
A point-in-time snapshot or copy of a volume. See also Image snapshot.
SNMP agent
A device that reports information through the SNMP protocol to SNMP
managers.
SNMP manager
A host that collects information from SNMP agents through the SNMP
protocol.
SNMP trap
An SNMP message sent from the SNMP agent to the SNMP manager,
where the sending is initiated by the SNMP agent and not as a response to
a message sent from the SNMP manager.
SNMP
Simple Network Management Protocol. A protocol for monitoring network
devices. See also MIB, SNMP agent, SNMP manager, SNMP trap.
Snooze
The process of sending recurring event notifications until the events are
cleared.
Storage pool
A reserved area of virtual disk space serving the storage requirements of
the volumes.
Sync best effort mode
A mode of remote mirroring in which I/O operations are not suspended
when communication between a primary and secondary volume is broken.
Synchronization
The process of making the primary volume and secondary volume
identical after a communication down time or upon the initialization of the
mirroring.
Target See Remote target.
TCP/IP
See Transmission Control Protocol, Internet Protocol.
Thin provisioning
Thin provisioning provides the ability to define logical volume sizes that
are much larger than the physical capacity installed on the system.
Transmission Control Protocol
Transmission Control Protocol (TCP) on top of the Internet Protocol (IP)
establishes a virtual connection between a destination and a source over
which streams of data can be exchanged. See also IP.
Trap
See SNMP trap.
Unassociated volume
A volume that is not associated with a consistency group. See Consistency
group.
Uninterruptible power supply
The uninterruptible power supply provides battery backup power for a
determined period of time, particularly to enable the system to power
down in a controlled manner, on the occurrence of a lengthy power outage.
Volume cloning
Creating a snapshot from a volume.
Volume set
A cluster of specific volumes in a consistency group, which can all be
snapshotted simultaneously, thus, creating a synchronized snapshot of all
of them. The volume set can be snapshotted into multiple snapshot sets of
the specific consistency group. See also Consistency group, Snapshot set.
Volume
A discrete unit of storage on disk, tape or other data recording medium
that supports some form of identifier and parameter list, such as a volume
label or input/output control.
A volume is a logical address space, having its data content stored on the
system's disk drives. A volume can be virtually any size as long as the total
allocated storage space of all volumes does not exceed the net capacity of
the system. A volume can be exported to an attached host through a LUN.
A volume can be exported to multiple hosts simultaneously. See also
Storage pool, Unassociated volume.
WWPN
World Wide Port Name
XCLI
IBM XIV command-line interface (XCLI) command set. See Command line
interface.
XDRP The disaster recovery program for Spectrum Accelerate, that is, the remote mirroring feature of Spectrum Accelerate.
XIV-LDAP mapping
An association of data on the LDAP server (a specific LDAP attribute) and
data on the Spectrum Accelerate system. This is required to determine the
access rights that should be granted to an authenticated LDAP user.
Notices
These legal notices pertain to the information in this IBM Storage product
documentation.
This information was developed for products and services offered in the US. This
material may be available from IBM in other languages. However, you may be
required to own a copy of the product or product version in that language in order
to access it.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
USA
For license inquiries regarding double-byte character set (DBCS) information,
contact the IBM Intellectual Property Department in your country or send
inquiries, in writing, to:
Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan Ltd.
19-21, Nihonbashi-Hakozakicho, Chuo-ku
Tokyo 103-8510, Japan
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publication. IBM may make improvements
and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Director of Licensing
IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
USA
Such information may be available, subject to appropriate terms and conditions,
including in some cases, payment of a fee.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
The performance data discussed herein is presented as derived under specific
operating conditions. Actual results may vary.
Information concerning non-IBM products was obtained from the suppliers of
those products, their published announcements or other publicly available sources.
IBM has not tested those products and cannot confirm the accuracy of
performance, compatibility or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products.
All statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the Copyright and trademark
information website (www.ibm.com/legal/us/en/copytrade.shtml).
VMware, ESX, ESXi, vSphere, vCenter, and vCloud are trademarks or registered
trademarks of VMware Corporation in the United States, other countries, or both.
Microsoft, Windows Server, Windows, and the Windows logo are trademarks or
registered trademarks of Microsoft Corporation in the United States, other
countries, or both.
IBM®
Printed in USA
GC27-6700-05