EMC Storage Analytics 3.4 Installation and User Guide



EMC® Storage Analytics

Version 3.4

Installation and User Guide

302-001-532 REV 06

Copyright © 2014-2015 EMC Corporation. All rights reserved. Published in the USA.

Published December 2015

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).

EMC Corporation

Hopkinton, Massachusetts 01748-9103

1-508-435-1000
In North America 1-866-464-7381
www.EMC.com


CONTENTS

Figures    7

Tables    9

Chapter 1    Introduction    13

    Overview....................................................................................................... 14
    References.................................................................................................... 16
    Terminology.................................................................................................. 16

Chapter 2    Installation and Licensing    19

    Installation overview..................................................................................... 20
    Installation and operating requirements........................................................22
    Installing vRealize Operations Manager.........................................................25
    Installing the EMC Adapter and dashboards.................................................. 26
    Installing Navisphere CLI............................................................................... 27
    Configuring a secure connection for VMAX adapters...................................... 27
    Adapter instances......................................................................................... 29
        Adding an EMC Adapter instance for vCenter.................................... 29
        Configuring the vCenter Adapter.......................................................31
        Adding an EMC Adapter instance for SCOM...................................... 32
        Adding an EMC Adapter instance for OpenStack...............................33
        Adding EMC Adapter instances for your storage system....................34
        Editing EMC Adapter instances for your storage system....................39

Chapter 3    EMC Storage Analytics Dashboards    41

    Topology mapping.........................................................................................42
        Isilon topology................................................................................. 43
        ScaleIO topology..............................................................................45
        VNX Block topology..........................................................................46
        VNX File/eNAS topology................................................................... 47
        VMAX topology.................................................................................48
        VMAX3 topology...............................................................................49
        VNXe topology................................................................................. 51
        VPLEX Local topology....................................................................... 52
        VPLEX Metro topology...................................................................... 53
        vVNX topology..................................................................................55
        XtremIO topology............................................................................. 56
        RecoverPoint for Virtual Machines topology......................................57
    EMC dashboards........................................................................................... 58
        Storage Topology dashboard............................................................58
        Storage Metrics dashboard.............................................................. 59
        Isilon Overview dashboard...............................................................59
        ScaleIO Overview dashboard............................................................60
        VNX Overview dashboard................................................................. 62
        VMAX Overview dashboard.............................................................. 64
        VNXe Overview dashboard............................................................... 67
        VPLEX Overview dashboard..............................................................68
        VPLEX Performance dashboard.........................................................69
        VPLEX Communication dashboard....................................................71
        XtremIO Overview dashboard........................................................... 72
        XtremIO Performance dashboard......................................................73
        RecoverPoint for VMs Overview dashboard.......................................73
        RecoverPoint for VMs Performance dashboard................................. 74
        Topology dashboards.......................................................................76
        Metrics dashboards......................................................................... 77
        Top-N dashboards............................................................................78
        Dashboard XChange.........................................................................80

Chapter 4    Resource Kinds and Metrics    81

    Isilon metrics................................................................................................ 82
    ScaleIO metrics............................................................................................. 86
    VNX Block metrics......................................................................................... 93
    VNX File/eNAS metrics................................................................................ 102
    VMAX metrics..............................................................................................109
    VNXe metrics...............................................................................................113
    VPLEX metrics............................................................................................. 121
    XtremIO metrics.......................................................................................... 132
    RecoverPoint for Virtual Machines metrics...................................................138

Chapter 5    Views and Reports    143

    eNAS views and reports...............................................................................144
    ScaleIO views and reports........................................................................... 145
    VMAX views and reports..............................................................................147
    VNX and VNXe views and reports.................................................................151
    XtremIO views and reports.......................................................................... 159

Chapter 6    Remedial Actions on EMC Storage Systems    163

    Remedial actions overview.......................................................................... 164
    Clearing matrix queries on vVNX..................................................................164
    Changing the service level objective (SLO) for a VMAX3 storage group.........164
    Changing the tier policy for a VNXe File system............................................165
    Changing the tier policy for a VNX or VNXe LUN............................................165
    Expanding VMAX devices............................................................................ 165
    Extending file system capacity on VNXe storage.......................................... 166
    Enabling performance statistics for VNX Block.............................................166
    Enabling FAST Cache on VNXe storage pools............................................... 166
    Enabling FAST Cache on a VNX Block storage pool.......................................167
    Expanding LUN capacity on VNX or VNXe..................................................... 167
    Extending file system capacity on VNX or eNAS storage............................... 168
    Migrating a VNX LUN to another storage pool.............................................. 168
    Rebooting a Data Mover on VNX storage......................................................168
    Rebooting a VNX storage processor............................................................. 169
    Extending volumes on EMC XtremIO storage systems.................................. 169

Chapter 7    Troubleshooting    171

    Badges for monitoring resources.................................................................172
    Navigating inventory trees...........................................................................172
    Symptoms, alerts, and recommendations for EMC Adapter instances..........173
    Event correlation......................................................................................... 175
        Viewing all alerts............................................................................175
        Finding resource alerts...................................................................176
        Locating alerts that affect the health score for a resource............... 176
        List of alerts and notifications........................................................ 176
    Launching Unisphere.................................................................................. 200
    Installation logs.......................................................................................... 200
    Log Insight overview....................................................................................201
        Log Insight configuration................................................................201
        Sending logs to Log Insight............................................................ 202
    Error handling and event logging................................................................. 204
        Viewing error logs.......................................................................... 204
        Creating and downloading a support bundle.................................. 204
    Log file sizes and rollover counts.................................................................205
        Finding adapter instance IDs..........................................................205
        Configuring log file sizes and rollover counts..................................205
        Activating configuration changes................................................... 206
        Verifying configuration changes..................................................... 206
    Editing the Collection Interval for a resource................................................207
    Configuring the thread count for an adapter instance.................................. 207
    Connecting to vRealize Operations Manager by using SSH.......................... 208
    Frequently asked questions........................................................................ 209

FIGURES

1     EMC Adapter architecture.............................................................................................. 15
2     Isilon topology...............................................................................................................44
3     ScaleIO topology........................................................................................................... 45
4     VNX Block topology....................................................................................................... 46
5     VNX File/eNAS topology.................................................................................................47
6     VMAX topology.............................................................................................................. 48
7     VMAX3 topology............................................................................................................ 49
8     VNXe topology............................................................................................................... 51
9     VPLEX Local topology.....................................................................................................52
10    VPLEX Metro topology....................................................................................................54
11    vVNX topology............................................................................................................... 55
12    XtremIO topology...........................................................................................................56
13    RecoverPoint for Virtual Machines topology................................................................... 57

TABLES

1     Required software licenses............................................................................................ 20
2     Dashboard-to-product matrix.........................................................................................58
3     Cluster metrics.............................................................................................................. 82
4     Node metrics................................................................................................................. 83
5     System metrics.............................................................................................................. 86
6     Protection Domain metrics.............................................................................................87
7     Device metrics............................................................................................................... 87
8     ScaleIO Data Server metrics...........................................................................................88
9     Storage pool metrics......................................................................................................89
10    Snapshot metrics.......................................................................................................... 90
11    MDM cluster metrics......................................................................................................91
12    MDM metrics................................................................................................................. 91
13    SDC metrics...................................................................................................................91
14    Fault Set metrics............................................................................................................91
15    Volume metrics..............................................................................................................91
16    VNX Block metrics for Array............................................................................................93
17    VNX Block metrics for Disk.............................................................................................93
18    VNX Block metrics for FAST Cache..................................................................................94
19    VNX Block metrics for Pool LUN......................................................................................95
20    VNX Block metrics for RAID Group.................................................................................. 96
21    VNX Block metrics for RAID Group LUN........................................................................... 97
22    VNX Block metrics for Storage Pool Front-end Port......................................................... 98
23    VNX Block metrics for Storage Pool................................................................................ 98
24    VNX Block metrics for Storage Processor......................................................................100
25    VNX Block metrics for Tier............................................................................................ 101
26    VNX File/eNAS metrics for Array................................................................................... 102
27    VNX File/eNAS metrics for Data Mover......................................................................... 102
28    VNX File/eNAS metrics for dVol....................................................................................107
29    VNX File/eNAS metrics for File Pool..............................................................................107
30    VNX File/eNAS metrics for File System......................................................................... 107
31    VMAX metrics for Device.............................................................................................. 109
32    VMAX metrics for FAST VP Policy.................................................................................. 109
33    VMAX metrics for Front-End Director............................................................................ 110
34    VMAX metrics for Front-End Port.................................................................................. 110
35    VMAX metrics for Remote Replica Group...................................................................... 110
36    VMAX metrics for SRDF Director................................................................................... 110
37    VMAX metrics for Storage Group.................................................................................. 111
38    VMAX metrics for Thin Pool.......................................................................................... 112
39    VMAX3 metrics for Storage Resource Pool....................................................................112
40    VNXe metrics for EMC Adapter Instance....................................................................... 113
41    VNXe metrics for Disk.................................................................................................. 113
42    VNXe metrics for Fast Cache.........................................................................................114
43    VNXe metrics for File System........................................................................................114
44    VNXe metrics for LUN................................................................................................... 114
45    VNXe metrics for Storage Pool......................................................................................115
46    VNXe metrics for Storage Processor............................................................................. 115
47    VNXe metrics for Tier....................................................................................................119
48    VNXe metrics for vVNX virtual volumes (vVols)............................................................. 120
49    VNXe metrics for vVNX virtual disk............................................................................... 120
50    VPLEX metrics for Cluster............................................................................................. 121
51    VPLEX metrics for Director............................................................................................122
52    VPLEX metrics for Distributed Device...........................................................................125
53    VPLEX metrics for Engine.............................................................................................126
54    VPLEX metrics for Ethernet Port................................................................................... 127
55    VPLEX metrics for Extent Device.................................................................................. 127
56    VPLEX metrics for Fibre Channel Port...........................................................................127
57    VPLEX metrics for Local Device....................................................................................128
58    VPLEX metrics for Storage Array.................................................................................. 129
59    VPLEX metrics for Storage View....................................................................................129
60    VPLEX metrics for Storage Volume...............................................................................129
61    VPLEX metrics for Virtual Volume................................................................................ 130
62    VPLEX metrics for VPLEX Metro....................................................................................131
63    XtremIO metrics for Cluster.......................................................................................... 132
64    XtremIO metrics for Data Protection Group...................................................................134
65    XtremIO metrics for Snapshot...................................................................................... 134
66    XtremIO metrics for SSD...............................................................................................135
67    XtremIO metrics for Storage Controller......................................................................... 135
68    XtremIO metrics for Volume......................................................................................... 136
69    XtremIO metrics for X-Brick.......................................................................................... 137
70    RecoverPoint metrics for Cluster.................................................................................. 138
71    RecoverPoint metrics for Consistency Group................................................................ 139
72    RecoverPoint metrics for Copy......................................................................................139
73    RecoverPoint metrics for Journal Volume......................................................................139
74    RecoverPoint metrics for Link.......................................................................................140
75    RecoverPoint metrics for virtual RecoverPoint Appliance (vRPA)................................... 140
76    RecoverPoint metrics for RecoverPoint for Virtual Machines System............................. 140
77    RecoverPoint metrics for Replication Set......................................................................141
78    RecoverPoint metrics for Repository Volume................................................................ 141
79    RecoverPoint metrics for Splitter.................................................................................. 141
80    RecoverPoint metrics for User Volume..........................................................................141
81    eNAS Data Mover (in use)............................................................................................ 144
82    eNAS dVol (in use).......................................................................................................144
83    eNAS File pool (in use).................................................................................................144
84    eNAS File system......................................................................................................... 145
85    ScaleIO Volume........................................................................................................... 145
86    ScaleIO Protection Domain.......................................................................................... 146
87    ScaleIO SDC................................................................................................................ 146
88    ScaleIO SDS................................................................................................................ 146
89    VMAX Storage Group list..............................................................................................148
90    VMAX Device............................................................................................................... 148
91    VMAX Front-End Director..............................................................................................149
92    VMAX Front-End Port List..............................................................................................149
93    VMAX Thin Pool List..................................................................................................... 149
94    VMAX FAST VP Policy List............................................................................................. 149
95    VMAX SRDF Directory List.............................................................................................150
96    VMAX Remote Replica Group List................................................................................. 150
97    VMAX Storage Resource Pool....................................................................................... 150
98    Alerts...........................................................................................................................152
99    VNX Data Mover...........................................................................................................152
100   VNX File System...........................................................................................................153
101   VNX File Pool............................................................................................................... 153
102   VNX dVol..................................................................................................................... 154
103   VNX LUN...................................................................................................................... 154
104   VNX Tier.......................................................................................................................155
105   VNX FAST Cache...........................................................................................................155
106   VNX Storage Pool.........................................................................................................155
107   VNX Disk......................................................................................................................155
108   VNX Storage Processor.................................................................................................156
109   VNX Storage Processor Front End Port.......................................................................... 156
110   VNX RAID Group...........................................................................................................157
111   VNXe File System......................................................................................................... 157
112   VNXe LUN.................................................................................................................... 157
113   VNXe Tier.....................................................................................................................158
114   VNXe Storage Pool....................................................................................................... 158
115   VNXe Disk....................................................................................................................158
116   VNXe Storage Processor...............................................................................................159
117   XtremIO cluster capacity consumption......................................................................... 160
118   XtremIO health state....................................................................................................160
119   XtremIO LUN................................................................................................................ 160
120   XtremIO performance...................................................................................................161
121   XtremIO storage efficiency........................................................................................... 161
122   System alerts...............................................................................................................177
123   Protection Domain alerts............................................................................................. 177
124   Device/Disk alerts....................................................................................................... 177
125   SDS alerts....................................................................................................................178
126   Storage Pool alerts...................................................................................................... 178
127   SDC alerts....................................................................................................................178
128   MDM Cluster alerts...................................................................................................... 179
129   VMAX alerts................................................................................................................. 179
130   VNX Block alerts.......................................................................................................... 180
131   VNX Block notifications............................................................................................... 184
132   VNX File alerts............................................................................................................. 186
133   VNX File notifications...................................................................................................188
134   VNXe alerts..................................................................................................................192
135   VPLEX alerts................................................................................................................ 194
136   XtremIO alerts based on external events......................................................................197
137   XtremIO alerts based on metrics.................................................................................. 197
138   RecoverPoint for Virtual Machines alerts based on message event symptoms..............198
139   List of RecoverPoint for Virtual Machines alerts based on metrics................................ 199

CHAPTER 1

Introduction

This chapter contains the following topics:

•  Overview............................................................................................................... 14
•  References............................................................................................................ 16
•  Terminology.......................................................................................................... 16

Overview

VMware vRealize Operations Manager is a software product that collects performance and capacity data from monitored software and hardware resources. It provides users with real-time information about potential problems in the enterprise.

vRealize Operations Manager presents data and analysis in several ways:

- Through alerts that warn of potential or occurring problems
- In configurable dashboards and predefined pages that show commonly needed information
- In predefined reports

EMC® Storage Analytics links vRealize Operations Manager with the EMC Adapter. The EMC Adapter is bundled with a connector that enables vRealize Operations Manager to collect performance metrics. The adapter is installed with the vRealize Operations Manager user interface.

The collector types are shown in Figure 1 on page 15.

EMC Storage Analytics uses the power of existing vCenter features to aggregate data from multiple sources and process the data with proprietary analytic algorithms.

EMC Storage Analytics complies with VMware management pack certification requirements and has received the VMware Ready certification.


Figure 1 EMC Adapter architecture


Note

Refer to the EMC Storage Analytics Release Notes for a list of supported product models.


References

This topic provides a list of documentation for reference.

VMware vRealize Operations Manager documentation

- vRealize Operations Manager Release Notes contains descriptions of known issues and workarounds.
- vRealize Operations Manager vApp Deployment and Configuration Guide explains installation, deployment, and management of vRealize Operations Manager.
- vRealize Operations Manager User Guide explains basic features and use of vRealize Operations Manager.
- vRealize Operations Manager Customization and Administration Guide describes how to configure and manage the vRealize Operations Manager custom interface.

VMware documentation is available at http://www.vmware.com/support/pubs.

EMC documentation

- EMC Storage Analytics Release Notes provides a list of the latest supported features, licensing information, and known issues.
- EMC Storage Analytics Installation and User Guide (this document) provides installation and licensing instructions, a list of resource kinds and their metrics, and information about storage topologies and dashboards.

Terminology

This topic contains a list of commonly used terms.

adapter
A vRealize Operations Manager component that collects performance metrics from an external source like a vCenter or storage system. Third-party adapters such as the EMC Adapter are installed on the vRealize Operations Manager server to enable creation of adapter instances within vRealize Operations Manager.

adapter instance
A specific external source of performance metrics, such as a specific storage system. An adapter instance resource is an instance of an adapter that has a one-to-one relationship with an external source of data, such as a VNX storage system.

dashboard
A tab on the home page of the vRealize Operations Manager GUI. vRealize Operations Manager ships with default dashboards. Dashboards are also fully customizable by the end user.

health rating

An overview of the current state of any resource, from an individual operation to an entire enterprise. vRealize Operations Manager checks internal metrics for the resource and uses its proprietary analytics formulas to calculate an overall health score on a scale of 0 to 100.

icon

A pictorial element in a widget that enables a user to perform a specific function.

Hovering over an icon displays a tooltip that describes the function.

metric

A category of data collected for a resource. For example, the number of read operations per second is one of the metrics collected for each LUN resource.

resource

Any entity in the environment for which vRealize Operations Manager can collect data. For example, LUN 27 is a resource.

resource kind

A general type of a resource, such as LUN or DISK. The resource kind dictates the type of metrics collected.

widget

An area of the EMC Storage Analytics graphical user interface (GUI) that displays metrics-related information. A user can customize widgets to their own environments.


CHAPTER 2

Installation and Licensing

This chapter contains the following topics:

- Installation overview ............................................... 20
- Installation and operating requirements ............................. 22
- Installing vRealize Operations Manager .............................. 25
- Installing the EMC Adapter and dashboards ........................... 26
- Installing Navisphere CLI ........................................... 27
- Configuring a secure connection for VMAX adapters ................... 27
- Adapter instances ................................................... 29


Installation overview

EMC Storage Analytics consists of the following installation packages:

- vRealize Operations Manager—Provides a view of all resources managed by vCenter, including EMC storage arrays
- EMC Adapter—Enables the collection of metrics from EMC resources. The adapter installation includes instructions for:
  - Installing the EMC Adapter and dashboards
  - Adding one or more EMC Adapter instances and applying license keys from EMC

Installation and upgrade options

Review the Installation and operating requirements on page 22, and then refer to the instructions for one of the following options to install or upgrade your system.

Option: Install VMware vRealize Operations Manager 6.1 and EMC Storage Analytics 3.4
Instructions:
- Installing vRealize Operations Manager on page 25
- Installing the EMC Adapter and dashboards on page 26
- Adding EMC Adapter instances for your storage system on page 34

Option: Install EMC Storage Analytics 3.4 on a system running VMware vRealize Operations Manager 6.1
Instructions:
- Installing the EMC Adapter and dashboards on page 26
- Adding EMC Adapter instances for your storage system on page 34

Option: Upgrade EMC Adapter 3.3 to EMC Storage Analytics 3.4 on a system running VMware vRealize Operations Manager 6.1
Instructions:
1. Install a new instance of vRealize Operations Manager 6.1. See Installing vRealize Operations Manager on page 25.
2. Install EMC Storage Analytics 3.4 on vRealize Operations Manager 6.1. See Installing the EMC Adapter and dashboards on page 26 and Adding EMC Adapter instances for your storage system on page 34.
3. If you are using vCenter Operations Manager 5.8.x, migrate the data to the new vRealize Operations Manager 6.1 system.

Note

Refer to the vRealize Operations Manager vApp Deployment and Configuration Guide for information about migration-based upgrades to vRealize Operations Manager 6.1.


License requirements

Table 1 on page 20 lists the licensing requirements.

Table 1 Required software licenses

Software: vRealize Operations Manager (Advanced or Enterprise)
Required license: VMware license for vRealize Operations Manager (Advanced or Enterprise)

Software: EMC Storage Analytics
Required license: EMC Storage Analytics electronic or physical license
Notes: If you purchase an electronic license for EMC Storage Analytics, you will receive a letter that directs you to an electronic licensing system to activate the software to which you are entitled. Otherwise, you will receive a physical license key.

Software: EMC storage arrays
Required license: EMC license for your storage array

A 90-day trial license for all supported products is available with EMC Storage Analytics. The 90-day trial license provides the same features as a permanent license, but after 90 days of usage, the adapter stops collecting data. You can add a permanent license at any time.


Installation and operating requirements

Before installing the EMC Adapter, verify that these installation and operating requirements are satisfied.

EMC Adapter requirements

22

Note

The ESA space on the EMC Community Network provides more information about installing and configuring EMC Storage Analytics.

Supported vRealize Operations Manager versions
vRealize Operations Manager Advanced or Enterprise editions from VMware

Note

EMC Storage Analytics does not support vRealize Operations Manager Foundation and Standard editions.

Deploy the vApp for vRealize Operations Manager before installing the EMC Adapter. Check the vRealize Operations Manager vApp Deployment and Configuration Guide at http://www.vmware.com/support/pubs for system requirements pertaining to your version of vRealize Operations Manager.

Supported product models

See the EMC Storage Analytics Release Notes for a complete list of supported product models.

Supported web browser

See the latest vRealize Operations Manager release notes for a list of supported browsers.

ScaleIO systems
EMC Storage Analytics uses REST APIs to interact with ScaleIO systems. Specify the IP address and port of the ScaleIO Gateway to configure the ScaleIO collector.
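As a sketch of what this REST interaction looks like, the snippet below builds a login request against the gateway. The /api/login path and basic-authentication token exchange follow common ScaleIO Gateway REST usage but should be verified against your ScaleIO REST API documentation; the address and credentials are placeholders.

```python
# Sketch (assumptions noted above): build the ScaleIO Gateway login request.
import base64
import urllib.request


def gateway_base_url(ip, port=443):
    """Build the HTTPS base URL for a ScaleIO Gateway from its IP and port."""
    return f"https://{ip}:{port}"


def login_request(ip, port, user, password):
    """Build the /api/login request; the gateway answers with a session token."""
    req = urllib.request.Request(gateway_base_url(ip, port) + "/api/login")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req


if __name__ == "__main__":
    # Placeholder gateway address and credentials; no network call is made here.
    req = login_request("192.168.1.10", 443, "admin", "password")
    print(req.full_url)  # https://192.168.1.10:443/api/login
```

The returned token would then be supplied as the password on subsequent API calls, per the gateway's documented scheme.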

VNX Block systems
The EMC Adapter uses naviseccli to collect metrics from VNX Block systems. The utility is bundled into the EMC Adapter install file and is automatically installed along with the adapter. Storage processors require IP addresses that are reachable from the vRealize Operations Manager server. Bidirectional traffic for this connection flows through port 443 (HTTPS). Statistics logging must be enabled on each storage processor (SP) for metric collection (System > System Properties > Statistics Logging in Unisphere).
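The naviseccli dependency above can be exercised directly to confirm that an SP is reachable with the intended credentials. This is a sketch only: the -h/-User/-Password/-Scope options and the getagent query reflect common naviseccli usage and should be confirmed with naviseccli -help on your node; Scope 0 corresponds to a global account, which VNX Block access requires.

```python
# Sketch: assemble and optionally run the naviseccli call the adapter relies on.
import subprocess


def naviseccli_cmd(sp_ip, user, password, *cli_args, scope=0):
    """Build the argument list for a naviseccli call against one SP."""
    return ["naviseccli", "-h", sp_ip, "-User", user,
            "-Password", password, "-Scope", str(scope), *cli_args]


def check_sp(sp_ip, user, password):
    """Run a lightweight query against the SP to confirm reachability."""
    cmd = naviseccli_cmd(sp_ip, user, password, "getagent")
    return subprocess.run(cmd, capture_output=True, text=True)


if __name__ == "__main__":
    # Placeholder SP address and credentials; check_sp() is not invoked here.
    print(" ".join(naviseccli_cmd("10.0.0.2", "monitor", "secret", "getagent")))
```

Running the check against both SPs also helps satisfy the certificate-acceptance requirement noted later for VNX Block adapter instances.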

VNX File/eNAS systems
CLI commands issued on the Control Station direct the EMC Adapter to collect metrics from VNX File and eNAS systems. The Control Station requires an IP address that is reachable from the vRealize Operations Manager server. Bidirectional Ethernet traffic flows through port 22 using Secure Shell (SSH). If you are using the EMC VNX nas_stig script for security (/nas/tools/nas_stig), do not use root in the password credentials. Setting nas_stig to On limits direct access for root accounts, preventing the adapter instance from collecting metrics for VNX File and eNAS.
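The SSH path described above can be sketched as an ordinary ssh invocation on port 22. The server_stats command shown is an assumption drawn from VNX File CLI conventions; substitute the command your environment uses, and per the nas_stig guidance, use a non-root account.

```python
# Sketch: the Control Station CLI path, expressed as an ssh invocation.
import subprocess


def control_station_cmd(cs_ip, user, remote_cmd):
    """Build an ssh argument list for running a command on the Control Station."""
    return ["ssh", "-p", "22", f"{user}@{cs_ip}", remote_cmd]


if __name__ == "__main__":
    # Hypothetical statistics query against Data Mover server_2; placeholders only.
    cmd = control_station_cmd("10.0.0.3", "nasadmin",
                              "server_stats server_2 -count 1")
    print(" ".join(cmd))  # would run via subprocess.run(cmd) in practice
```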

EMC Storage Analytics 3.4 Installation and User Guide

Installation and Licensing

EMC SMI-S Provider for VMAX requirements
The EMC SMI-S Provider for VMAX must be on the network and accessible through either secure port 5989 (default) or nonsecure port 5988 for communication. It is not possible to use a customized port. If the EMC SMI-S Provider is also used for vSphere vStorage API for Storage Awareness (VASA), follow the recommendations in the SMI-S Provider Release Notes to increase the number of available connections. The user configured in the EMC Adapter instance and connecting to the SMI-S instance must have the role of monitor or administrator. See Configuring a secure connection for VMAX adapters on page 27 for steps to configure secure port 5989. To monitor all metrics for Storage Resource Pools and SRDF Directors, register a Unisphere server with the EMC SMI-S Provider monitoring the VMAX array. To do this, use the reg command in TestSmiProvider under the Active menu.

Note

The configured EMC SMI-S Provider must discover the VMAX array as "Local" and not "Remote." In an EMC Symmetrix Remote Data Facility (SRDF) environment, if a local VMAX has an SRDF relationship with another array, the remote array is listed by SMI-S as "Remote." To configure this array in EMC Storage Analytics, you must use a different EMC SMI-S Provider that sees the remote array as "Local." The other array in that environment would be "Remote." EMC Storage Analytics now prevents adding an array with an EMC SMI-S Provider that sees the array as "Remote." An error message informs you that remote arrays are not allowed and asks you to choose a different EMC SMI-S Provider for that array.
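A quick way to confirm the port requirement above before configuring the adapter is a socket-level probe. This sketch checks only that the provider completes a TCP (and, for 5989, TLS) handshake; it says nothing about CIM credentials or roles.

```python
# Sketch: probe the SMI-S Provider's listener on port 5989 (or 5988).
import socket
import ssl


def smis_port_open(host, port=5989, timeout=5.0):
    """Return True if a TCP connection (TLS-wrapped for 5989) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            if port == 5989:
                # Handshake only; certificate validation is deliberately off.
                ctx = ssl.create_default_context()
                ctx.check_hostname = False
                ctx.verify_mode = ssl.CERT_NONE
                with ctx.wrap_socket(sock, server_hostname=host):
                    return True
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Placeholder hostname; expect False unless such a provider exists.
    print(smis_port_open("smis-provider.example.com"))
```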

VPLEX EMC Adapter instance
Only one EMC Adapter instance is required for VPLEX Local or VPLEX Metro. You can monitor both clusters in a VPLEX Metro by adding a single EMC Adapter instance for one of the clusters. Adding an EMC Adapter instance for each cluster in a VPLEX Metro system introduces unnecessary stress on the system.

VPLEX data migrations
EMC VPLEX systems are commonly used to perform non-disruptive data migrations. When monitoring a VPLEX system with EMC Storage Analytics, a primary function is to perform analytics of trends on the storage system. When you swap a back-end storage system on a VPLEX system, the performance and trends for the entire VPLEX storage environment are impacted. Therefore, EMC recommends that you start a new EMC Storage Analytics baseline for the VPLEX system after data migration. To start a new baseline:

1. Before you begin data migration, delete all resources associated with the existing EMC Storage Analytics VPLEX adapter instance.
2. Remove the existing EMC Storage Analytics VPLEX adapter instance by using the Manage Adapter Instances dialog.
3. Perform the data migration.
4. Create a new EMC Storage Analytics VPLEX adapter instance to monitor the updated VPLEX system.

Optionally, you can stop the VPLEX adapter instance collections during the migration cycle. When collections are restarted after the migration, orphaned VPLEX resources will appear in EMC Storage Analytics, but those resources will be unavailable. Remove these resources manually.


XtremIO
EMC Storage Analytics uses REST APIs to interact with XtremIO arrays. When adding an EMC Adapter instance for XtremIO, specify the IP address of the XtremIO Management Server (XMS) and the serial number of the XtremIO cluster to monitor.
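As an illustration of this REST interaction, the sketch below builds an authenticated request to the XMS. The /api/json/types/clusters path follows XMS REST API conventions but should be treated as an assumption to verify against your XtremIO documentation; the address, credentials, and cluster name are placeholders.

```python
# Sketch: build an authenticated HTTPS request to the XMS REST interface.
import base64
import urllib.request


def xms_request(xms_ip, path, user, password):
    """Build a basic-auth request to the XMS; the caller opens it."""
    req = urllib.request.Request(f"https://{xms_ip}{path}")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req


if __name__ == "__main__":
    # Placeholder XMS address and credentials; no network call is made here.
    req = xms_request("10.0.0.4", "/api/json/types/clusters", "admin", "pw")
    print(req.full_url)
```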

If enhanced performance is required, administrators can configure the thread count for the XtremIO adapter instance. See Configuring the thread count for an adapter instance on page 207.

Minimum OE requirements
See the EMC Storage Analytics Release Notes for a complete list of minimum Operating Environment (OE) requirements for supported product models.

User accounts

To create an EMC Adapter instance for a storage array, you must have a user account that allows you to connect to the storage array or EMC SMI-S Provider. For example, to add an EMC Adapter for a VNX array, use a global account with an operator or administrator role (a local account will not work).

To create an EMC Adapter instance for vCenter (where Adapter Kind = EMC Adapter and Connection Type = VMware vSphere), you must have an account that allows you access to vCenter and the objects it monitors. In this case, vCenter enforces access credentials (not the EMC Adapter). To create an EMC Adapter instance for vCenter, use, at minimum, an account assigned to the Read-Only role at the root of vCenter, and enable propagation of permissions to descendant objects. Depending on the size of the vCenter, wait approximately 30 seconds before testing the EMC Adapter.

More information on user accounts and access rights is available in the vSphere API/SDK documentation (see information about authentication and authorization for ESXi and vCenter Server). Ensure that the adapter points to the vCenter server that is monitored by vRealize Operations Manager.

DNS configuration

To use the EMC Adapter, the vRealize Operations Manager vApp requires network connectivity to the storage systems to be monitored. DNS must be correctly configured on the vRealize Operations Manager server to enable hostname resolution by the EMC Adapter.
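One way to check this from the vRealize Operations Manager server is a short resolution probe; the hostnames in the example are placeholders for your monitored systems.

```python
# Sketch: verify that monitored hostnames resolve through the configured DNS.
import socket


def resolves(hostname):
    """Return the resolved IPv4 address, or None if the DNS lookup fails."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None


if __name__ == "__main__":
    for host in ("vnx-sp-a.example.com", "smis.example.com"):  # placeholders
        print(host, "->", resolves(host) or "NOT RESOLVABLE")
```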

Time zone and synchronization settings
Ensure time synchronization for all EMC Storage Analytics resources by using Network Time Protocol (NTP). Also, set correct time zones for EMC Storage Analytics resources (including the EMC SMI-S Provider if using an adapter for VMAX) and related systems. Failure to observe these practices may affect the collection of performance metrics and topology updates.


Installing vRealize Operations Manager

Before you begin

- Obtain the OVA installation package for vRealize Operations Manager from VMware.
- Obtain a copy of the vRealize Operations Manager vApp Deployment and Configuration Guide at http://www.vmware.com/support/pubs.

Refer to the vRealize Operations Manager vApp Deployment and Configuration Guide to deploy the vApp for vRealize Operations Manager.

Procedure

1. Review the system requirements.

2. Follow the instructions to install vRealize Operations Manager and use the VMware license that you received when prompted to assign the vRealize Operations Manager license.

3. Conclude the installation by following the instructions to verify the vRealize Operations Manager installation.


Installing the EMC Adapter and dashboards

Before you begin

Obtain the PAK file for the EMC Adapter.

Note

If using Internet Explorer, the installation file downloads as a ZIP file but functions the same way as the PAK file.

WARNING

When you upgrade EMC Storage Analytics, the standard EMC dashboards are overwritten. To customize a standard EMC dashboard, clone it, rename it, and then customize it.

To install the EMC Adapter and dashboards:

Procedure

1. Save the PAK file in a temporary folder.

2. Start the vRealize Operations Manager administrative user interface in your web browser and log in as an administrator.

   For example, enter https://<vROPs_ip_address>.

3. Select Administration > Solutions and then click the Add (plus) sign to upload the PAK file.

   A message similar to this one is displayed in the Add Solution window:

   The .pak file has been uploaded and is ready to install.

   pak file details
   Name: EMC Adapter
   Description: Manages EMC systems such as VNX, VMAX...
   Version: 3.3

4. Click Next, read the license agreement, and select the check box to indicate agreement. Click Next again.

Installation begins. Depending on your system's performance, the installation can take from 5 to 15 minutes to complete.

5. When the installation completes, click the Finish button.

The EMC Adapter appears in the list of installed solutions.


Installing Navisphere CLI

For VNX Block systems, the Navisphere CLI (naviseccli) must be installed on the Data Node that you assign to collect metrics for VNX. The naviseccli-bin-<version>.rpm file is available in the ESA package.

Note

For vRealize Operations Manager 6.1 or later, the Navisphere CLI is automatically installed on all Data Nodes that are available during the initial installation. If you add more nodes to the vRealize Operations Manager cluster after ESA is installed, or if you are using vRealize Operations Manager 6.0 or earlier, use the following procedure to manually install the Navisphere CLI.

Install the CLI before you add the EMC Adapter instance to vRealize Operations Manager. If the CLI is not installed, errors could occur in scaled-out vCenter environments that consist of a Master Node and multiple Data Nodes. The CLI is automatically installed on the Master Node. However, because the Data Node collects metrics, the EMC Adapter might report errors if naviseccli is not installed.

Procedure

1. Enable Secure Shell (SSH) for both master and data nodes.

   Refer to Connecting to vRealize Operations Manager by using SSH on page 208 for instructions.

2. Extract the PAK file by using decompression software such as WinZip.

3. Copy the naviseccli-bin-<version>.rpm file (for example, naviseccli-bin-7.33.1.0.33-x64.rpm) to a target directory on the data node. If you are using Windows, you can use WinSCP for the copy operation.

4. Establish a secure connection to the data node and change to the target directory.

5. Run this command:

rpm -i naviseccli-bin-<version>.rpm

where <version> is the appropriate version of the naviseccli utility for the node.

6. Repeat this procedure to install naviseccli in other nodes, as required.

Configuring a secure connection for VMAX adapters

You have the option to add a VMAX adapter instance on a secure connection through port 5989.

The following procedure provides instructions for configuring a secure connection for a VMAX adapter instance. If your deployment does not require a secure connection, skip this procedure.

Procedure

1. Download the vRealize Operations Manager SSL certificate by using one of the following methods:

   If Firefox is available on a client:
   a. Point your web browser to the vRealize Operations Manager/EMC Storage Analytics name or IP address (any node inside a cluster). The page opens.
   b. Under Certificate Warning, click I Understand the Risks.
   c. Click Add Exception.
   d. In the Add Security Exception window, click View.
   e. On the Details tab, click Export and save the certificate as an All Files type from the Save as type drop-down menu.
   f. Close the Certificate View window, and then click Confirm Security Exceptions to continue.
   g. Use a text editor to display the certificate content, and copy the content into the clipboard.

   If vCenter is accessible and the vRealize Operations Manager console is accessible from vCenter:
   a. Log in to vCenter through the vSphere Client.
   b. Select Inventory > Hosts and clusters.
   c. Select the vRealize Operations Manager vAppliance, and right-click to select Open Console.
   d. Press the Enter key, press Alt+F1 to access the command console, and log in as root (using the root password).
   e. Type cat /storage/vcops/user/conf/ssl/cacert.pem and copy the displayed contents into the clipboard.

   If vRealize Operations Manager SSHD is enabled and an SSH terminal is accessible:
   a. Log in to the vApp as root (using the root password).
   b. Type cat /storage/vcops/user/conf/ssl/cacert.pem and copy the displayed contents into the clipboard.

2. Access the EMC SMI-S Provider Ecomconfig site at https://<ip_address>:5989/Ecomconfig, and log in as admin/#1Password (default).

3. Click SSL Certificate Management, and then select option 3 by clicking Import CA Certificate file.

4. Paste the content from the clipboard into the textbox and click Submit the Certificate.

5. In vRealize Operations Manager, click Test connection in the EMC Adapter window.

Results

A secure connection through port 5989 is established for the VMAX adapter.
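As an alternative to the browser and console methods above, the certificate content can also be fetched programmatically before pasting it into the ECOM textbox. This is a sketch: the host name is a placeholder, and the PEM check is only a rough sanity test on the pasted content.

```python
# Sketch: pull the vRealize Operations Manager server certificate as PEM text.
import ssl


def looks_like_pem(text):
    """Rough check that the content is a PEM-encoded certificate."""
    return ("-----BEGIN CERTIFICATE-----" in text
            and "-----END CERTIFICATE-----" in text)


def fetch_cert(host, port=443):
    """Retrieve the server certificate presented on the given port."""
    return ssl.get_server_certificate((host, port))


if __name__ == "__main__":
    try:
        pem = fetch_cert("vrops.example.com")  # placeholder address
        print(looks_like_pem(pem))
    except OSError as exc:
        print("could not fetch certificate:", exc)
```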


Adapter instances

vRealize Operations Manager requires an adapter instance for each resource to be monitored. The instance specifies the type of adapter to use and the information needed to identify and access the resource.

With EMC Storage Analytics, vRealize Operations Manager uses EMC Adapter instances to identify and access the resources. Supported adapter instances include:

- vCenter (prerequisite for other adapter instances)
- System Center Operations Manager (SCOM)
- OpenStack
- Isilon
- ScaleIO
- VNX File
- eNAS
- VNX Block
- VMAX
- VNXe
- VPLEX
- XtremIO
- RecoverPoint for Virtual Machines

See the EMC Storage Analytics Release Notes for a list of the supported models for each adapter instance and related OEs.

If the vCenter adapter instance is not configured, other adapter instances will function normally but will not display visible connections between the VMware objects and the array objects.

Note

The ESA space on the EMC Community Network provides more information about installing and configuring EMC Storage Analytics.

After adapter instances are created, the vRealize Operations Manager Collector requires several minutes to collect statistics, depending on the size of the storage array. Large storage array configurations require up to 45 minutes to collect metrics and resources and update dashboards. This is a one-time event; future statistical collections run quickly.

Adding an EMC Adapter instance for vCenter

To view health trees for the storage environment from the virtual environment, you must install an EMC Adapter instance for vCenter. All storage system adapter instances require the EMC Adapter instance for vCenter, which you must add first. A separate instance is required for each vCenter monitored by the vRealize Operations Manager environment.

Procedure

1. In a web browser, type https://<vROps_ip_address>/vcops-web-ent to start the vRealize Operations Manager custom user interface, and log in as an administrator.


2. Select Administration > Solutions > EMC Adapter, and then click the Configure icon.

   The Manage Solution dialog box appears.

3. Click the Add icon to add a new adapter instance.

4. Configure the following Adapter Settings and Basic Settings:

   Display Name: Any descriptive name, for example: My vCenter
   Description: Optional description
   Connection Type: VMware vSphere
   License (optional): Not applicable (must be blank) for the EMC Adapter instance for vCenter
   Management IP: IP address of the vCenter server
   Array ID (optional): Not applicable (must be blank) for the VMware vSphere connection type

5. In the Credential field, select any previously defined credentials for this storage system; otherwise, click the Add New icon (+) and configure these settings:

   Credential name: Any descriptive name, for example: My VMware Credentials
   Username: Username that EMC Storage Analytics uses to connect to the VMware vRealize system. If a domain user is used, the format for the username is DOMAIN\USERNAME.
   Password: Password for the EMC Storage Analytics username

6. Click OK.

7. Configure the Advanced Settings, if they are required:

   Collector: vRealize Operations Manager Collector

   Log Level: Configure log levels for each adapter instance. The five levels for logging information are ERROR, WARN, INFO, DEBUG, and TRACE.

   ERROR: Logs only error conditions and provides the least amount of logging information.
   WARN: Logs information when an operation is completed successfully but issues exist with the operation.
   INFO: Logs information about workflow and describes how an operation occurs.
   DEBUG: Logs all details related to an operation. If logging is set to DEBUG, all other levels of logging information are displayed in the log file.
   TRACE: Provides the most detailed information and context to understand the steps leading up to errors and warnings.

   The Manage Solution dialog box appears.

8. To test the adapter instance, click Test Connection.

If the connection is correctly configured, a confirmation box appears.

9. Click OK.

The new adapter instance polls for data every 5 minutes by default. At every interval, the adapter instance will collect information about the VMware vSphere datastore and virtual machines with Raw Device Mapping (RDM). Consumers of the registered VMware service can access the mapping information.

Note

To edit the polling interval, select Administration > Environment Overview > EMC Adapter Instance. Select the EMC Adapter instance you want to edit, and click the Edit Object icon.

Configuring the vCenter Adapter

After the vCenter Adapter is installed, use the following procedure to configure it manually.

Procedure

1. Start the vRealize Operations Manager custom user interface and log in as an administrator.

   In a web browser, type https://<vROps_ip_address>/vcops-web-ent and type the password.

2. Select Administration > Solutions.

3. In the solutions list, select VMware vSphere > vCenter Adapter, and click the Configure icon.

   The Manage Solution dialog box appears.

4. Click the Add icon.

5. In the Manage Solution dialog box, provide values for the following parameters:

   - Under Adapter Settings, type a name and optional description.
   - Under Basic Settings:
     - For vCenter Server, type the vCenter IP address.
     - For Credential, either select a previously defined credential or click the Add icon to add a new credential. For a new credential, in the Manage Credential dialog box, type a descriptive name and the username and password for the vRealize system. If you use a domain username, the format is DOMAIN\USERNAME. Optionally, you can edit the credential using the Manage Credential dialog box. Click OK to close the dialog box.
   - Optionally, configure the Advanced Settings:
     - Collector: The vRealize Operations Manager Collector
     - Auto Discovery: True or False
     - Process Change Events: True or False
     - Registration user: The registration username used to collect data from vCenter Server
     - Registration password: The registration password used to collect data from vCenter Server

6. Click Test Connection.

7. Click OK in the confirmation dialog box.

8. Click Save Settings to save the adapter.

9. Click Yes to force the registration.

10. Click Next to go through a list of questions to create a new default policy if required.

Adding an EMC Adapter instance for SCOM

The SCOM adapter collects the resource and topology information for the computers and virtual machines in a Microsoft Hyper-V environment. To view relationships between VNX resources and resources that are collected by the SCOM adapter, configure an EMC Adapter instance for SCOM.

Before you begin

- Install Hyper-V Management Pack Extensions 2012/2012 R2 in SCOM. The installation binary is available at https://hypervmpe2012.codeplex.com.
- Install the Management Pack for EMC storage systems (EMC Adapter) on vR Ops.
- Install the Management Pack for SCOM on vR Ops.
- Download the Hyper-V-enabled VNX Topology Dashboard from the ESA Dashboard Exchange. Import the dashboard into vRealize Operations Manager.
- Add your SCOM adapter instance in VMware SCOM MP.

Procedure

1. Open the EMC Adapter configuration dialog box.

2. For Management IP, type the address in this format: <IP address of the SCOM server>:<port of the SQL Server instance that stores the SCOM data (default is 1433)>/<database name of the SQL Server database that stores the SCOM data (default is OperationsManager)>.

   For example: 10.0.0.1:1433/OperationsManager

3. For Collector Type, type Microsoft SCOM.

4. For Credential, type the username and password to connect to SQL Server.

   For Windows authentication, provide the domain name, for example <domain_name>\<administrator_username>.

5. Click Test Connection. If a "Driver not found" error appears, try again.

6. Verify that the connection is successful and click Save.


The SCOM adapter instance is added.
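The Management IP format in step 2, with its documented defaults (port 1433, database OperationsManager), can be expressed as a small helper:

```python
# Sketch of the <ip>:<port>/<database> value for the SCOM adapter instance.
def scom_management_ip(server_ip, port=1433, database="OperationsManager"):
    """Apply the documented defaults when the port or database is omitted."""
    return f"{server_ip}:{port}/{database}"


if __name__ == "__main__":
    print(scom_management_ip("10.0.0.1"))  # 10.0.0.1:1433/OperationsManager
```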

Adding an EMC Adapter instance for OpenStack

The OpenStack adapter collects the compute, storage, and network infrastructure information in the OpenStack environment. To view relationships between VNX resources and OpenStack resources, configure an EMC Adapter instance for OpenStack.

Before you begin

- Install the Management Pack for EMC storage systems (EMC Adapter) on vR Ops.
- Install the vR Ops Management Pack for OpenStack.
- Configure a VMware OpenStack Adapter instance.

Procedure

1. In the configuration dialog box for the EMC Adapter, provide the following information:

   - Management IP: Type the URL of the OpenStack endpoint in this format: [protocol://]IP_address[:port]. The protocol can be http or https and defaults to http if omitted. The port defaults to 5000 if omitted. For example, 192.168.1.2 defaults to http://192.168.1.2:5000.
   - Connection Type: Select OpenStack.
   - Credential: Type the username and password used to connect to the OpenStack endpoint. The username format is tenant:username. The tenant defaults to admin if omitted.

2. Click Test Connection.

3. If a Review and Accept Certificate dialog box appears, review the certificate and click OK to accept it.

4. Verify that the connection test is successful and click Save.

Results

The OpenStack adapter instance is added.
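The defaulting rules for the Management IP and Credential fields can be sketched as follows (hypothetical helpers, not part of EMC Storage Analytics):

```python
def openstack_endpoint(address: str) -> str:
    """Apply the documented defaults: protocol http, port 5000."""
    if "://" not in address:
        address = "http://" + address
    protocol, host = address.split("://", 1)
    if ":" not in host:        # no port given (IPv4 address or hostname only)
        host += ":5000"
    return f"{protocol}://{host}"

def openstack_user(credential: str):
    """Split tenant:username; tenant defaults to admin if omitted."""
    tenant, _, user = credential.rpartition(":")
    return (tenant or "admin", user)

print(openstack_endpoint("192.168.1.2"))  # http://192.168.1.2:5000
```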

Adding EMC Adapter instances for your storage system

Before you begin

- Install the EMC Adapter for vCenter.
- Obtain the adapter license key for your storage system.

Each storage system requires an adapter instance, and all storage system adapter instances require the EMC Adapter instance for vCenter. Add the EMC Adapter instance for vCenter first, and then add the adapter instances for each storage system. Adapter instances are licensed per array. Observe these exceptions and requirements:

- A VNX Unified array can use the same license for VNX File and VNX Block.
- For VNX Block, to avoid a certificate error when the main storage processor is down, test both storage processors for the VNX Block system so that both certificates are accepted.
- Global Scope is required for VNX Block access.
- For VPLEX Metro, add an adapter instance for only one of the clusters (either one); this enables you to monitor both clusters with a single adapter instance.
- For RecoverPoint for Virtual Machines, get the RecoverPoint model that is required for the license.
- When adding a vVNX adapter instance, a license is not required.

Procedure

1. In a web browser, type https://<vROps_ip_address>/vcops-web-ent to start the vRealize Operations Manager custom user interface, and log in as an administrator.

2. Select Administration > Solutions > EMC Adapter and click the Configure icon.

The Manage Solution dialog box appears.

3. Click the Add icon to add a new adapter instance.

4. Configure the following Adapter Settings and Basic Settings:

Display Name: A descriptive name, such as My Storage System or the array ID.

Description: Optional description with more details.

License (optional): License key required for the array that you want to monitor. (The license key for the adapter instance appears on the Right to Use Certificate that is delivered to you or through electronic licensing, depending on your order.)

5. Configure these settings based on the adapter instance for your product:

Isilon arrays
  Connection Type: Isilon
  Management IP: If a SmartConnect Zone is configured, use the SmartConnect zone name or IP address. Otherwise, use any node IP address.
  Array ID (optional): Not applicable

ScaleIO arrays
  Connection Type: ScaleIO
  Management IP: Use the IP address and port of the ScaleIO Gateway.
  Array ID (optional): Not applicable

VNX Block arrays
  Connection Type: VNX Block
  Management IP: Use the IP address of one Storage Processor (SP) in a single array. Do not add an adapter instance for each SP.
  Array ID (optional): Not applicable

VNX File and Unified models, VG2 and VG8 gateway models
  Connection Type: VNX File
  Management IP: Use the IP address of the primary Control Station.
  Array ID (optional): Not applicable

eNAS
  Connection Type: eNAS
  Management IP: Use the IP address of the primary Control Station.
  Array ID (optional): Not applicable

VMAX3 and VMAX families
  Connection Type: VMAX
  Management IP: Use the IPv4 or IPv6 address of the configured EMC SMI-S Provider (see note).
  Array ID (optional): Required

VNXe3200
  Connection Type: VNXe
  Management IP: Use the IP address of the management server.
  Array ID (optional): Not applicable

VPLEX Local or VPLEX Metro
  Connection Type: VPLEX
  Management IP: Use the IP address of the management server. For a Metro cluster, use the IP address of either management server, but not both.
  Array ID (optional): Not applicable

vVNX
  Connection Type: VNXe
  Management IP: Use the IP address of the management server.
  Array ID (optional): Not applicable

XtremIO
  Connection Type: XtremIO
  Management IP: Use the IP address of the XMS that manages the XtremIO target cluster.
  Array ID (optional): Use the serial number of the XtremIO target cluster.

RecoverPoint for Virtual Machines
  Connection Type: RecoverPoint for Virtual Machines
  Management IP: Use the IP address of the virtual RecoverPoint appliance.
  Array ID (optional): Not applicable

Note

When adding a VMAX adapter instance, EMC Storage Analytics uses a secure connection to the EMC SMI-S Provider through HTTPS port 5989 by default. This requires a certificate import (see Configuring a secure connection to a VMAX Adapter on page 27). To bypass certificate authentication and use HTTP instead of HTTPS, use <SMI-S IP Address>:5988 in the Management IP field.
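The rule for the VMAX Management IP value reduces to: enter the bare SMI-S address for the default HTTPS connection (port 5989, certificate required), or append :5988 to force HTTP. A hypothetical helper illustrating the choice:

```python
def smis_management_ip(smis_address: str, use_https: bool = True) -> str:
    """Build the Management IP value for a VMAX adapter instance.

    HTTPS (default) implies port 5989 and requires the certificate import,
    so the bare address suffices; appending :5988 selects plain HTTP and
    bypasses certificate authentication.
    """
    return smis_address if use_https else f"{smis_address}:5988"

print(smis_management_ip("10.1.1.1", use_https=False))  # 10.1.1.1:5988
```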

Note

When adding a VNX File adapter instance, a license is required for the VNX File system.

Note

When adding an eNAS adapter instance, a license is not required.

6. In the Credential field, select any previously defined credentials for this storage system; otherwise, click the Add New icon and configure these settings:

Credential name: A name for the credentials information.

Username: Username that EMC Storage Analytics uses to connect to the storage system.
- For Isilon, use the credentials of the OneFS storage administration server.
- For ScaleIO, use the credentials of the ScaleIO Gateway.
- For VNX File or eNAS, use the credentials of the Control Station.
- For VNX Block, use the credentials of the Storage Processor.
- For VMAX, use the credentials of an ECOM user with monitor or administrator privileges. The default user/password combination is admin/#1Password.
- For VNXe, use the credentials of the management server.
- For VPLEX, use the credentials of the management server (for example, the service user). The default credentials are service/Mi@Dim7T.
- For XtremIO, use the XMS username.
- For RecoverPoint for Virtual Machines, use the credentials of the virtual RecoverPoint appliance.

Password: Storage system management password.

7. Click OK.

The Manage Solution dialog reappears.

8. If required, configure the following Advanced Settings:

Collector: Automatically select collector.

Log Level: Configure log levels for each adapter instance. The five levels for logging information are ERROR, WARN, INFO, DEBUG, and TRACE.

ERROR: Logs only error conditions and provides the least amount of logging information.

WARN: Logs information when an operation completes successfully but issues exist with the operation.

INFO: Logs information about workflow and describes how an operation occurs.

DEBUG: Logs all details related to an operation. If logging is set to DEBUG, all other levels of logging information are included in the log file.

TRACE: Provides the most detailed information and context to understand the steps leading up to errors and warnings.

The Manage Solution dialog box appears.
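The log levels form a strict verbosity ordering: a configured level includes every message at or below its verbosity (for example, DEBUG also emits ERROR, WARN, and INFO messages). A small sketch of that rule; the position of TRACE relative to DEBUG is an assumption based on the descriptions above:

```python
# Levels from least to most verbose (TRACE > DEBUG assumed).
LEVELS = ["ERROR", "WARN", "INFO", "DEBUG", "TRACE"]

def is_logged(configured: str, message_level: str) -> bool:
    """A message is written when it is no more verbose than the setting."""
    return LEVELS.index(message_level) <= LEVELS.index(configured)

print(is_logged("DEBUG", "WARN"))  # True
print(is_logged("ERROR", "INFO"))  # False
```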

9. Click Test Connection to validate the values you entered.

If the adapter instance is correctly configured, a confirmation box appears.

Note

Testing an adapter instance validates the values you entered. If you skip this step and enter invalid values, the adapter instance changes to the (red) warning state.

10. To finish adding the adapter instance, click OK.

Editing EMC Adapter instances for your storage system

Before you begin

- Install the EMC Adapter.
- Configure the EMC Adapter instance for your storage system.
- Obtain an adapter license key for your storage system.

The EMC Adapter instances for storage systems require licenses. Adapter instances are licensed per storage array. A VNX Unified array can use the same license for VNX File and VNX Block.

Procedure

1. Start the vRealize Operations Manager custom user interface and log in as administrator. For example, in a web browser, type https://<vROps_ip_address>/vcops-web-ent.

2. Select Administration > Inventory Explorer > EMC Adapter Instance.

3. Select the EMC adapter you want to edit and click the Edit Object icon.

The Edit Object dialog appears.

4. Edit the fields you need to change. See Adding EMC Adapter instances for your storage system on page 34 for field descriptions.

5. Click Test Connection to verify the connection.

6. To finish editing the adapter instance, click OK.

CHAPTER 3

EMC Storage Analytics Dashboards

This chapter contains the following topics:

- Topology mapping.................................................................................................42
- EMC dashboards................................................................................................... 58

Topology mapping

Topology mapping is viewed and traversed graphically using vRealize Operations Manager health trees. The dashboards developed for EMC Storage Analytics use topology mapping to display resources and metrics.

EMC Storage Analytics establishes mappings between:

- Storage system components
- Storage system objects and vCenter objects

Topology mapping enables health scores and alerts from storage system components, such as storage processors and disks, to appear on affected vCenter objects, such as LUNs, datastores, and virtual machines. Topology mapping between storage system objects and vCenter objects uses a vCenter adapter instance.

Isilon topology

EMC Storage Analytics implements the following topology for Isilon.

Figure 2 Isilon topology

[Figure: topology diagram showing the relationships among the adapter instance, cluster, access zone, tier, node pool, node, SMB share, and NFS export objects, and the mapping of NFS exports to VMware datastores. Arrowheads point to parent objects; cascaded entities can repeat.]

ScaleIO topology

EMC Storage Analytics implements the following topology for ScaleIO.

Figure 3 ScaleIO topology

[Figure: topology diagram showing the relationships among the ScaleIO system, MDM cluster, MDMs, protection domains, fault sets, SDSs, devices, storage pools, volumes, snapshots, and SDCs, and the mapping of volumes to VMware datastores. Arrowheads point to parent objects; cascaded entities can repeat.]

VNX Block topology

EMC Storage Analytics implements the following topology for VNX Block.

Figure 4 VNX Block topology

[Figure: topology diagram showing the relationships among the array instance, storage processors (SP A and SP B), SP front-end ports, FAST Cache, storage pools, RAID groups, disks, tiers, and LUNs, and the mapping of LUNs to VMware datastores, virtual machines, physical hosts, Hyper-V VMs, and non-ESX hosts and servers. Arrowheads point to parent objects; cascaded entities can repeat.]

VNX File/eNAS topology

EMC Storage Analytics implements the following topology for VNX File and eNAS.

Figure 5 VNX File/eNAS topology

[Figure: topology diagram showing the relationships among the array instance, Data Movers (active and standby), VDMs, NFS exports, file systems, file pools, and disk volumes, and the mapping of NFS exports to VMware datastores. Disk volumes map to backing VNX Block LUNs, VMAX3 devices, or XtremIO volumes or snapshots. Arrowheads point to parent objects; cascaded entities can repeat.]

VMAX topology

EMC Storage Analytics implements the following topology for VMAX.

Figure 6 VMAX topology

VMAX3 topology

EMC Storage Analytics implements the following topology for VMAX3.

Figure 7 VMAX3 topology

[Figure: topology diagram showing the relationships among the VMAX3 array, Service Level Objectives, Storage Resource Pools, storage groups, devices, front-end directors and front-end ports, SRDF directors and SRDF ports, and remote replica groups with R1 (source) and R2 (target) devices, plus eNAS disk volumes, and the mapping of devices to VMware datastores and virtual machines. Arrowheads point to parent objects; cascaded entities can repeat.]

VMAX and VMAX3 topology rules

The following rules govern how objects are displayed in the VMAX topology dashboard and which metrics are collected for them:

- vRealize Operations Manager does not display devices that are unmapped and unbound.
- vRealize Operations Manager does not display devices that are mapped and bound but unused by VMware, VNX, eNAS, or VPLEX. Performance metrics for these devices are aggregated into the parent Storage Group performance metrics.
- If the corresponding EMC vSphere adapter instance is running on the same vRealize Operations Manager appliance, vRealize Operations Manager displays devices that are mapped, bound, and used by VMware datastores or RDMs.
- For supported models of VNX File Gateway systems, if the corresponding EMC VNX File or eNAS adapter instance is running on the same vRealize Operations Manager appliance, vRealize Operations Manager displays devices that are mapped, bound, and used by VNX File or eNAS disk volumes.
- A VMAX device is displayed when the corresponding VPLEX adapter instance is added.
- vRealize Operations Manager does not display Storage Groups with unmapped and unbound devices.
- vRealize Operations Manager displays Storage Groups that contain mapped and bound devices; their metrics are aggregates of the member devices.

VNXe topology

EMC Storage Analytics implements the following topology for VNXe.

Figure 8 VNXe topology

[Figure: topology diagram showing the relationships among the EMC adapter instance, storage processors, FAST Cache, storage pools, tiers, disks, NAS servers, file systems, NFS exports, LUN groups, and LUNs, and the mapping of NFS exports to VMware NFS datastores and LUNs to VMware VMFS datastores. Arrowheads point to parent objects; cascaded entities can repeat.]

VPLEX Local topology

EMC Storage Analytics implements the following topology for VPLEX Local.

Figure 9 VPLEX Local topology

[Figure: topology diagram showing the relationships among the VPLEX cluster, engines, directors, FC and Ethernet ports, storage views, virtual volumes, devices, extents, storage volumes, and storage arrays, and the mapping of virtual volumes to VMware datastores and virtual machines. Storage volumes map to backing VNX, VNXe, VMAX, or XtremIO adapter instances. Arrowheads point to parent objects.]

VPLEX Metro topology

EMC Storage Analytics implements the following topology for VPLEX Metro.

Figure 10 VPLEX Metro topology

[Figure: topology diagram for a VPLEX Metro configuration. Each cluster (Cluster-1 and Cluster-2) contains engines, directors, FC and Ethernet ports, storage views, virtual and local devices, local extents, and local storage volumes backed by storage arrays (VNX, VNXe, VMAX, or XtremIO adapter instances). Distributed devices and distributed volumes span both clusters and map to VMware datastores and virtual machines. Arrowheads point to parent objects.]

vVNX topology

EMC Storage Analytics implements the following topology for vVNX.

Figure 11 vVNX topology

[Figure: topology diagram showing the relationships among the EMC adapter instance, storage processors, FAST Cache, storage pools, tiers, virtual disks, NAS servers, file systems, NFS exports, LUN groups, LUNs, storage containers, and VVols, and the mapping to VMware NFS, VMFS, and VVol datastores and VMware VMs. Arrowheads point to parent objects; cascaded entities can repeat.]

XtremIO topology

EMC Storage Analytics implements the following topology for XtremIO.

Figure 12 XtremIO topology

[Figure: topology diagram showing the relationships among the adapter instance, cluster, X-Bricks, storage controllers, SSDs, data protection groups, volumes, and snapshots, and the mapping of volumes to VMware datastores and virtual machines. Arrowheads point to parent objects; cascaded entities can repeat.]

RecoverPoint for Virtual Machines topology

EMC Storage Analytics implements the following topology for RecoverPoint for Virtual Machines.

Figure 13 RecoverPoint for Virtual Machines topology

[Figure: topology diagram showing the relationships among the RecoverPoint system, clusters, vRPAs, splitters, consistency groups, links, copies, replication sets, and the repository, journal, and user volumes, and the mapping to virtual machines and cluster compute resources. Arrowheads point to parent objects; cascaded entities can repeat.]

EMC dashboards

Use dashboards to view metrics.

The standard dashboards are delivered as templates. If a dashboard is accidentally deleted or changed, you can generate a new one.

Table 2 on page 58 lists the EMC dashboards available for each EMC product.

Table 2 Dashboard-to-product matrix

Dashboard name                Isilon  ScaleIO  VNX  VNXe  VMAX  VPLEX  XtremIO  RecoverPoint for
                                                                               Virtual Machines
Storage Topology                X       X      X     X     X     X       X            X
Storage Metrics                 X       X      X     X     X     X       X            X
<product_name> Overview         X       X      X     X     X     X       X            X
<product_name> Topology         X       X      X     X     X     X      ---           X
<product_name> Metrics          X       X      X     X     X    ---      X            X
Top-N <product_name>            X      ---     X     X     X    ---      X            X
<product_name> Performance     ---     ---    ---   ---   ---    X       X           ---
<product_name> Communication   ---     ---    ---   ---   ---    X      ---          ---

You can use the standard vRealize Operations Manager dashboard customization features to create additional dashboards that are based on your site requirements (some restrictions may apply).

Note

eNAS dashboards are available on the Dashboard XChange. Dashboard XChange on page 80 has more information.

Storage Topology dashboard

The Storage Topology dashboard provides an entry point for viewing resources and relationships between storage and virtual infrastructure objects.

Click the Storage Topology tab. Details for every object in every widget are available by selecting the object and clicking the Object Detail icon at the top of the widget.

The Storage Topology dashboard contains the following widgets:

Storage System Selector

This Resource widget filters the EMC Adapter instances that are found in each storage system. To populate the Storage Topology and Health widget, select an instance name.

Storage Topology and Health
This Health Tree widget provides a navigable visualization of storage and virtual infrastructure resources. Single-click to select resources, or double-click to change the navigation focus. To populate the Parent Resources and Child Resources widgets, select a resource in this widget.

Parent Resources
This widget lists the parent resources of the resource selected in the Storage Topology and Health widget.

Child Resources
This widget lists the child resources of the resource selected in the Storage Topology and Health widget.

Storage Metrics dashboard

Click the Storage Metrics tab to display resources and metrics for storage systems and view graphs of resource metrics.

The Storage Metrics dashboard contains the following widgets:

Storage System Selector
This Resource widget lists all configured EMC Adapter instances. Select an instance name to populate the Resource Selector widget.

Resource Selector
This Health Tree widget lists each resource associated with the adapter instance selected in the Storage System Selector. Select a resource to populate the Metric Picker widget.

Metric Picker
This widget lists all the metrics that are collected for the resource selected in the Resource Selector widget. You can use the search feature of this widget to locate specific objects. Double-click a metric to create a graph of the metric in the Metric Graph widget.

Metric Graph
This widget graphs the metrics selected in the Metric Picker widget. It can display multiple metrics simultaneously in a single graph or in multiple graphs.

Isilon Overview dashboard

Click the Isilon tab to view the scoreboards that provide a single view of metrics for selected Isilon resources with configured adapter instances. Scoreboards group the contents by adapter instance.

The Isilon dashboard displays the following scoreboards. For each scoreboard and selected metric, the configured Isilon adapter is shown.

CPU Performance (% used)
This widget represents the percentage of CPU in use.
- Green indicates that 0% of the CPU is in use.
- Red indicates that 100% of the CPU is in use.

Overall Cache Hit Rate

This widget shows the percentage of data requests that returned cache hits.

Remaining Capacity (%)
This widget shows the remaining capacity as a percentage.
- Green indicates that more than 10% of the capacity is available.
- Red indicates that 10% or less of the capacity is available.

Disk Operations Latency
This widget shows pending disk operations latency for clusters and nodes.
- Green indicates 0 to 20 milliseconds.
- Red indicates greater than 20 milliseconds.

Number of Active Clients Per Node
- Green indicates 0 active clients.
- Red indicates 1,500 active clients.

ScaleIO Overview dashboard

Click the ScaleIO Overview tab to view a collection of heat maps that provide a single view of the capacity for selected ScaleIO resources with configured adapter instances.

Heat maps on this dashboard group the contents by adapter instance.

The ScaleIO dashboard displays the following heat maps. For each heat map and selected metric, the configured ScaleIO adapter is shown.

ScaleIO System
This heat map displays the In Use Capacity metric. The color of the heat map entries ranges from green to red and corresponds to the In Use Capacity as follows:
- Green indicates that 0 GB of data capacity is allocated.
- Yellow indicates that 500 GB of data capacity is allocated.
- Red indicates that 1000 GB or more of data capacity is allocated.

ScaleIO Storage Pool
This heat map displays the In Use Capacity metric for each ScaleIO Storage Pool grouped by ScaleIO System. The color of the heat map entries ranges from green to red and corresponds to the In Use Capacity as follows:
- Green indicates that 0 GB of data capacity is allocated.
- Yellow indicates that 500 GB of data capacity is allocated.
- Red indicates that 1000 GB or more of data capacity is allocated.

ScaleIO Device
This heat map displays the In Use Capacity metric for each ScaleIO Device grouped by ScaleIO System and the SDS it is associated with. The color of the heat map entries ranges from green to red and corresponds to the In Use Capacity as follows:
- Green indicates that 0 GB of data capacity is allocated.
- Yellow indicates that 500 GB of data capacity is allocated.
- Red indicates that 1000 GB or more of data capacity is allocated.

ScaleIO Protection Domain
This heat map displays the In Use Capacity metric for each ScaleIO Protection Domain grouped by ScaleIO System. The color of the heat map entries ranges from light blue to dark blue and corresponds to the In Use Capacity as follows:
- Light blue indicates that 0 GB of data capacity is allocated.
- Dark blue indicates that 1000 GB or more of data capacity is allocated.

SDS
This heat map displays the In Use Capacity metric for each SDS grouped by ScaleIO System and Protection Domain. The color of the heat map entries ranges from light blue to dark blue and corresponds to the In Use Capacity as follows:
- Light blue indicates that 0 GB of data capacity is allocated.
- Dark blue indicates that 1000 GB or more of data capacity is allocated.

Fault Set
This heat map displays the Health% metric for each Fault Set. The color of the heat map entries ranges from light blue to dark blue and corresponds to the health as follows:
- Light blue indicates 0% health.
- Dark blue indicates 100% health.

VNX Overview dashboard

Click the VNX Overview tab to view a collection of heat maps that provide a single view of the performance and capacity for all VNX resources with configured adapter instances.

Heat maps on this dashboard group the contents by adapter instance.

The VNX Overview dashboard displays the following heat maps:

CPU performance
This heat map displays the CPU utilization of each Storage Processor and Data Mover on each configured adapter instance. The color of the heat map entries shows percentage busy:
- Green indicates 0% busy.
- Red indicates 100% busy.

FAST cache performance
This heat map has two modes: Read Cache Hit Ratio and Write Cache Hit Ratio. To select the mode, use the Configuration menu. The Read/Write Cache Hit Ratio (%) is the number of FAST Cache read or write hits divided by the total number of read or write I/Os across all RG LUNs and Pools configured to use FAST Cache. The color of the heat map entries shows hit ratios:
- Green indicates a high FAST Cache hit ratio.
- Red indicates a low FAST Cache hit ratio. A low value on an idle array is acceptable.
- Gray indicates that there is no FAST Cache present on the VNX systems identified by the adapter instances; a "Heat Map not configured" message appears with the heat map.
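The hit-ratio metric is a simple quotient of hits over total I/Os. An illustrative sketch (not ESA code):

```python
def cache_hit_ratio(hits: int, total_ios: int) -> float:
    """FAST Cache hit ratio (%): hits divided by total read or write I/Os."""
    if total_ios == 0:
        return 0.0  # idle array: a low ratio here is acceptable
    return 100.0 * hits / total_ios

print(cache_hit_ratio(750, 1000))  # 75.0
```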

Pool capacity
This heat map has four modes: RAID Group Available Capacity, Storage Pool Capacity Utilization, Storage Pool Available Capacity, and File Pool Available Capacity.

In Capacity Utilization mode, the color of the heat map entries shows the value of the percentage full metric for all non-RAID Group storage pools:
- Green indicates 0% full.
- Red indicates 100% full.

In Available Capacity mode, the color of the heat map entries shows the value of the Available Capacity (GB) metric:
- Green indicates the largest available capacity on any storage pool for any of the configured adapter instances.
- Red indicates 0 GB available.

LUN and file system performance
This heat map has several modes.

In LUN Utilization mode, the color of the heat map entries shows the percentage busy metric for all LUNs grouped by adapter instance:
- Green indicates 0% busy.
- Red indicates 100% busy.

In LUN Latency mode, the color of the heat map entries shows the value of the Latency (ms) metric:
- Green indicates 0 ms latency.
- Red indicates 20 ms or greater latency; this threshold is configurable.

Latency values appear for RAID Group LUNs. Pool LUNs appear in white with no latency values reported.

In LUN Read IOPS mode, the color of the heat map entries shows the relative number of read I/O operations per second serviced by the LUN. The color ranges from light green to dark green. Dark green indicates the highest number of read I/O operations per second serviced by any LUN listed in the heat map.

In LUN Write IOPS mode, the color of the heat map entries shows the relative number of write I/O operations per second serviced by the LUN. The color ranges from light green to dark green. Dark green indicates the highest number of write I/O operations per second serviced by any LUN listed in the heat map.

In File System Read IOPS mode, the color of the heat map entries shows the relative number of read I/O operations per second serviced by the file system. The color ranges from light green to dark green. Dark green indicates the highest number of read I/O operations per second serviced by any file system listed in the heat map.

In File System Write IOPS mode, the color of the heat map entries shows the relative number of write I/O operations per second serviced by the file system. The color ranges from light green to dark green. Dark green indicates the highest number of write I/O operations per second serviced by any file system listed in the heat map.

VMAX Overview dashboard

Click the VMAX Overview tab to view a collection of heat maps that provide a single view of the performance and capacity for all VMAX resources with configured adapter instances. Heat maps on this dashboard group the contents by adapter instance.

The VMAX dashboard displays the following heat maps for all applicable VMAX models.

For each heat map and selected metric, the configured VMAX adapter is shown:

Thin Pool Usage
This heat map displays the Percent Allocated metric, which shows the allocated capacity in each thin pool. The color of the heat map entries ranges from green to red and corresponds to the percent allocated as follows:
- Green indicates that 0% of the thin pool is allocated.
- Yellow indicates that 50% of the thin pool is allocated.
- Red indicates that 100% of the thin pool is allocated.

Note

This heat map shows no data for VMAX3 arrays because you cannot manipulate thin pools on those arrays.

The VMAX dashboard displays the following heat maps for all supported VMAX models.

For each heat map and selected metric, the configured VMAX adapter is shown:

Storage Group
This heat map has four modes: Total Reads (IOPS), Total Writes (IOPS), Read Latency (ms), and Write Latency (ms).

Total Reads and Writes represent the aggregate reads or writes for all LUNs in the storage group. Read and Write Latency is the average read or write latency of all LUNs in the storage group.

For Total Reads and Writes, the color of the heat map entries shows the relative number of total reads or writes across all the storage groups. The color ranges from light blue to dark blue. Dark blue indicates the storage groups with the highest number of total reads or writes, while light blue indicates the lowest. Because the range of values for total reads or writes has no lower or upper limits, the numerical difference between light and dark blue may be very small.

For Read and Write Latency, the color of the heat map entries is based on a latency scale from 0 to 40 ms. This scale is based on average customer requirements and may not represent a particular customer's requirements for latency. In such cases, EMC recommends adjusting the scale appropriately. The color of the heat map entries ranges from green to red as follows:
- Green indicates a latency of 0 ms.
- Yellow indicates a latency of 20 ms.
- Red indicates a latency of 40 ms.
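A linear green-to-yellow-to-red mapping over a 0 to 40 ms scale, like the one described, can be sketched as follows (illustrative only; not ESA's actual palette code):

```python
def latency_color(latency_ms: float, scale_max: float = 40.0):
    """Map latency on a 0..scale_max ms scale to an RGB triple:
    0 ms -> green, the midpoint -> yellow, scale_max -> red."""
    t = max(0.0, min(latency_ms, scale_max)) / scale_max  # clamp, normalize
    if t <= 0.5:
        return (int(510 * t), 255, 0)          # green to yellow
    return (255, int(255 * (2 - 2 * t)), 0)    # yellow to red

print(latency_color(20))  # (255, 255, 0) -> yellow
```

Raising scale_max rescales the whole gradient, which matches the recommendation to adjust the scale for sites with different latency requirements.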

LUN Performance

This heat map has four modes: Reads (IOPS), Writes (IOPS), Read Latency (ms), and

Write Latency (ms).

64

EMC Storage Analytics 3.4 Installation and User Guide

EMC Storage Analytics Dashboards

Reads and Writes represent the total reads or writes for a particular LUN. Read and

Write Latency is the average read or write latency of all LUNs in the storage group.

For Reads and Writes, the color of the heat map entries shows the relative number of reads or writes across all the LUNs. The color ranges from light blue to dark blue.

Dark blue indicates the LUN(s) with the highest number of reads or writes while light blue indicates the lowest. Because the range of values for reads or writes has no lower or upper limits, the numerical difference between light and dark blue may be very small.

For Read and Write Latency, the color of the heat map entries is based on a scale of latency from 0 to 40 ms. This scale is based on average customer requirements and may not represent a customer's particular requirements for latency. In such cases, EMC recommends adjusting the scale appropriately. The color of the heat map entries ranges from green to red as follows:

- Green indicates a latency of 0 ms.
- Yellow indicates a latency of 20 ms.
- Red indicates a latency of 40 ms.

Front End Director

This heat map has two modes: Total Bandwidth (MB/s) and Total Operations (IOPS).

Total Bandwidth is the cumulative amount of data transferred over all ports of the front-end director (measured in megabytes per second). Total Operations is the total number of operations taking place over all ports of a front-end director (measured in I/Os per second).

The color of the heat map entries is the same for both metrics. It shows the relative total bandwidth or relative total number of operations, depending on the selected metric. The color ranges from light blue to dark blue. Dark blue indicates the front-end director(s) with the highest number of total operations or the greatest total bandwidth, depending on the selected metric. Light blue indicates the lowest number of operations or the least total bandwidth. Because the range of values for operations or bandwidth has no lower or upper limits, the numerical difference between light and dark blue may be very small.

SRDF Director

This heat map has two modes: Total Bandwidth (MB/s) and Total Writes (IOPS).

Total Bandwidth is the cumulative amount of data transferred over an SRDF director (measured in megabytes per second). Total Writes is the total number of writes over an SRDF director (measured in I/Os per second).

The color of the heat map entries is the same for both metrics. It shows the relative total bandwidth or relative total number of writes, depending on the selected metric. The color ranges from light blue to dark blue. Dark blue indicates the SRDF director(s) with the highest number of total writes or the greatest total bandwidth, depending on the selected metric. Light blue indicates the lowest number of writes or the least total bandwidth. Because the range of values for bandwidth or writes has no lower or upper limits, the numerical difference between light and dark blue may be very small.

SRDF Groups

This heat map has four modes: Devices in Session (count), Average Cycle Time (seconds), Writes (IOPS), and Writes (MB/s).

Devices in Session represents the number of devices in an SRDF session in the SRDF group. Average Cycle Time is an SRDF/A metric that provides the average elapsed time between data transfer cycles. Writes (IOPS) represents the number of writes per second on the devices in the SRDF group. Writes (MB/s) represents the number of megabytes per second sent from the SRDF group.

The color of the heat map entries is the same for all metrics. It shows the relative number of devices in session, average cycle time, total bandwidth, or number of writes, depending on the selected metric. The color ranges from light blue to dark blue. Dark blue indicates the SRDF group(s) with the highest value for the selected metric and light blue indicates the lowest. Because the range of values has no lower or upper limits, the numerical difference between light and dark blue may be very small.

Note

ESA supports only two-site SRDF configurations. Additionally, SRDF groups that have no R1 or R2 devices associated with them are not displayed.


VNXe Overview dashboard

Click the VNXe Overview tab to view a collection of heat maps that provide a single view of the performance and capacity for all VNXe resources with configured adapter instances. Heat maps on this dashboard group the contents by adapter instance.

The VNXe Overview dashboard displays the following heat maps:

CPU Performance

This heat map displays the CPU utilization (% busy) of each Storage Processor on each configured adapter instance. The color of the heat map entries shows % busy:

- Green indicates 0% busy.
- Red indicates 100% busy.

Pool capacity

This heat map has two modes: Storage Pool Capacity Utilization and Storage Pool Available Capacity.

In Capacity Utilization mode, the color of the heat map entries shows the value of the % full metric for all storage pools:

- Green indicates 0% full.
- Red indicates 100% full.

In Available Capacity mode, the color of the heat map entries shows the value of the Available Capacity (GB) metric:

- Green indicates the largest available capacity on any storage pool for any of the configured adapter instances.
- Red indicates 0 GB available.

LUN Performance

This heat map has two modes: LUN Read IOPS and LUN Write IOPS.

LUN Read IOPS and LUN Write IOPS represent the total reads or writes for a particular LUN. The color of the heat map entries shows the relative number of reads or writes across all the LUNs. The color ranges from light green to dark green. Dark green indicates the LUN(s) with the highest number of reads or writes while light green indicates the lowest. Because the range of values for reads or writes has no lower or upper limits, the numerical difference between light and dark green may be very small.


VPLEX Overview dashboard

Click the VPLEX Overview tab to view a collection of scorecard widgets that provide an overview of the health for the VPLEX system.

The EMC VPLEX Overview dashboard displays the following widgets:

Note

Red, yellow, and orange colors correlate with the Health State or Operational Status of the object. Any Health State or Operational Status other than those listed below will show green (good). Also note that because VMware expects numeric values, you cannot modify these widgets.

CPU Health

This widget displays the CPU usage, as a percentage, for each director on the VPLEX.

The color of the directors in the widget reflects the CPU usage:

- Green indicates CPU usage of 0–75%.
- Yellow indicates CPU usage of 75–85%.
- Orange indicates CPU usage of 85–95%.
- Red indicates CPU usage of 95–100%.

Generally, a director should stay below 75% CPU usage. Correct an imbalance of CPU usage across directors by adjusting the amount of I/O to the busier directors; make this adjustment by modifying existing storage view configurations. Identify busier volumes and hosts and move them to less busy directors. Alternatively, add more director ports to a storage view to create a better load balance across the available directors.

Memory Health

This widget displays the memory usage, as a percentage, of each director on the VPLEX. The color of the directors in the widget reflects the memory usage:

- Green indicates memory usage of 0–70%.
- Yellow indicates memory usage of 70–80%.
- Orange indicates memory usage of 80–90%.
- Red indicates memory usage of 90–100%.

Front-End Latency - Read/Write

This widget displays read and write latency (in ms) for each Front-end Director.

- Green indicates latency values between 0 ms and 7 ms.
- Yellow indicates latency values of 7 ms up to 11 ms.
- Orange indicates latency values of 11 ms up to 15 ms.
- Red indicates latency values over 15 ms.

Front-End Operations

This widget displays the active and total operations (in counts/s) for each Front-end Director.


VPLEX Performance dashboard

Click the VPLEX Metrics tab to view a collection of heat maps that provide a single view of the most important performance metrics for VPLEX resources.

The EMC VPLEX Metrics dashboard displays two types of heat maps:

- Metrics with definitive measurements such as CPU usage (0–100%), response time latency (0–15 ms), or errors (0–5) are assigned color ranges from lowest (green) to highest (red).
- Metrics with varied values that cannot be assigned a range show relative values from lowest (light blue) to highest (dark blue).

Front-end Bandwidth

This heat map has three modes: Reads (MB/s), Writes (MB/s), and Active Operations (Counts/s).

Reads and Writes represent the total reads or writes for the storage volumes across the front-end ports on a director.

For Reads and Writes, the color of the heat map entries shows the relative front-end bandwidth on a director, depending upon the selected metric.

Active Operations represents the number of active, outstanding I/O operations on the director's front-end ports.

Back-end Bandwidth

This heat map has three modes: Reads (MB/s), Writes (MB/s), and Operations (Counts/s).

Reads and Writes represent the total reads or writes for the storage volumes across the back-end ports on a director.

For Reads and Writes, the color of the heat map entries shows the relative back-end bandwidth on a director, depending upon the selected metric.

Operations represents the number of I/O operations per second through the director's back-end ports.

Back-end Errors

This heat map has three modes: Resets (count/s), Timeouts (count/s), and Aborts (count/s). Resets are LUN resets sent by VPLEX to a storage array LUN when it does not respond to I/O operations for over 20 seconds. Timeouts occur when an I/O from VPLEX to a storage array LUN takes longer than 10 seconds to complete. Aborts occur when an I/O from VPLEX to a storage array LUN is cancelled in transit. Resets indicate more serious problems than timeouts and aborts.

The color of the heat map entries is based on the number of errors on a scale from zero (green) to five (red). Numbers of errors between zero and five are represented by shades of yellow to orange.

Front-end Latency

This heat map has three modes: Read Latency (ms), Write Latency (ms), and Queued Operations (Counts/s).

Write and Read Latency is the average write or read latency for all virtual volumes across all front-end ports on a director.

For Read and Write Latency, the color of the heat map entries is based on a scale of latency from 0 to 15 ms, depending upon the selected metric. This scale is based on average customer requirements and may not represent a customer's particular requirements for latency. In such cases, EMC recommends adjusting the scale appropriately.

For VPLEX Metro systems consisting primarily of distributed devices, the WAN round-trip time greatly affects the front-end write latency. See the COM Latency widgets and the WAN Link Usage widget in the VPLEX Communication dashboard.

Virtual Volumes Latency

This heat map has three modes: Read Latency (ms), Write Latency (ms), and Total Reads & Writes (Counts/s).

Write and Read Latency is the average write or read latency for all virtual volumes on a director.

For Read and Write Latency, the color of the heat map entries is based on a scale of latency from 0 to 15 ms, depending on the selected metric. This scale is based on average customer requirements and may not represent a customer's particular requirements for latency. In such cases, EMC recommends adjusting the scale appropriately.

Total Reads & Writes represents the virtual volume total reads and writes per director.

Storage Volumes Latency

This heat map has two modes: Read Latency (ms) and Write Latency (ms).

Write and Read Latency is the average write or read latency for all storage volumes on a director.

For Read and Write Latency, the color of the heat map entries is based on a scale of latency from 0 to 15 ms, depending on the selected metric. This scale is based on average customer requirements and may not represent a customer's particular requirements for latency. In such cases, EMC recommends adjusting the scale appropriately.


VPLEX Communication dashboard

Click the VPLEX Communication tab to view a collection of heat maps that provide a single view of the performance of the communication links for a VPLEX configuration.

The EMC VPLEX Communication dashboard displays two types of heat maps:

- Metrics with definitive measurements such as intra-cluster local COM latency (0–15 ms) are assigned color ranges from lowest (green) to highest (red).
- Metrics with varied values that cannot be assigned a range show relative values from lowest (light blue) to highest (dark blue).

Cluster-1 COM Latency

This heat map has one mode: Average Latency (ms).

The Cluster-1 latency statistics represent the intra-cluster local COM latency, which occurs within the rack and is typically fast (less than 1 msec).

For COM Latency, the color of the heat map entries is based on a scale of latency from 0 to 15 ms, depending upon the selected metric. This scale is based on average customer requirements and may not represent a customer's particular requirements for latency. In such cases, EMC recommends adjusting the scale appropriately.

For VPLEX Metro, EMC recommends adjusting the scale based on your discovered WAN round-trip time.

Cluster-2 COM Latency

This heat map has one mode: Average Latency (ms).

The Cluster-2 latency statistics represent the intra-cluster local COM latency, which occurs within the rack and is typically small (less than 1 msec).

For COM Latency, the color of the heat map entries is based on a scale of latency from 0 to 15 ms, depending upon the selected metric. This scale is based on average customer requirements and may not represent a customer's particular requirements for latency. In such cases, EMC recommends adjusting the scale appropriately.

For VPLEX Metro, EMC recommends adjusting the scale based on your discovered WAN round-trip time.

WAN Link Usage (VPLEX Metro only)

This heat map has four modes:

- Distributed Device Bytes Received (MB/s)
- Distributed Device Bytes Sent (MB/s)
- Distributed Device Rebuild Bytes Received (MB/s)
- Distributed Device Rebuild Bytes Sent (MB/s)

The Distributed Device Bytes Received or Sent modes represent the total amount of traffic received or sent for all distributed devices on a director.

The Distributed Device Rebuild Bytes Received or Sent modes represent the total amount of rebuild/migration traffic received or sent for all distributed devices on a director.


The color of the heat map entries shows the relative number of distributed device bytes transferred on a director, depending upon the selected metric.

XtremIO Overview dashboard

Click the XtremIO Overview tab to view a collection of scorecard widgets that provide an overview of the health for the XtremIO system.

The XtremIO Overview dashboard displays two types of heat maps:

- Metrics with definitive measurements are assigned color ranges from lowest (green) to highest (red).
- Metrics with varied values that cannot be assigned a range show relative values from lowest (light blue) to highest (dark blue).

Cluster Data Reduction

This widget displays the Data Deduplication Ratio and Compression Ratio of each cluster. It also displays the Data Reduction Ratio, which is the result of the combined Data Deduplication and Compression reduction on each cluster.

Note

Compression Ratio shows as blue if XtremIO version 2.4.1 is running.

Cluster Efficiency

This widget displays the Thin Provisioning Savings (%) and the Total Efficiency of each cluster.

Volume

This widget displays volumes in one of two modes: Total Capacity or Consumed Capacity. Select a volume to display its sparkline charts.

Cluster

This widget displays, for each cluster, the Total Physical and Logical Capacity; Available Physical and Logical Capacity; and Consumed Physical and Logical Capacity.

Snapshot

This widget displays snapshots in one of two modes: Total Capacity or Consumed Capacity. Select a snapshot to display its sparkline charts.

Data Reduction Ratio

As data enters the XtremIO system, in-line deduplication and compression reduce the amount of space needed to store the data. This widget provides a ratio showing the overall data reduction savings from both the data deduplication and data compression processes combined.
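The combined savings described above amount to multiplying the two ratios, since deduplication and compression each shrink the data independently. A minimal sketch (illustrative arithmetic only, not ESA code):

```python
# Sketch of the arithmetic described above: the overall Data Reduction
# Ratio is the product of the deduplication and compression ratios.

def data_reduction_ratio(dedup_ratio, compression_ratio):
    """Combined reduction, e.g. 3:1 dedup x 2:1 compression = 6:1."""
    return dedup_ratio * compression_ratio
```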


XtremIO Performance dashboard

The XtremIO Performance dashboard provides percent utilization of the Storage Controller CPUs, key volume and SSD metrics, and sparklines.

The XtremIO Performance dashboard displays two types of heat maps:

- Metrics with definitive measurements such as CPU usage (0–100%) are assigned color ranges from lowest (green) to highest (red).
- Metrics with varied values that cannot be assigned a range show relative values from lowest (light blue) to highest (dark blue).

Storage Controllers CPU 1 Utilization (%)

This widget shows the percent utilization of CPU 1.

Storage Controllers CPU 2 Utilization (%)

This widget shows the percent utilization of CPU 2.

Volume

This widget provides five modes: Total Operations, Total Bandwidth, Total Latency, Unaligned (%), and Average Block Size. Select a volume from this widget to display sparklines for it.

SSD

This widget provides two modes: Endurance Remaining and Disk Utilization. Select an SSD from this widget to display sparklines for it.

RecoverPoint for VMs Overview dashboard

Click the RecoverPoint for VMs Overview tab to view a collection of heat maps that provide a single view of the performance and capacity for all resources of RecoverPoint for Virtual Machines with configured adapter instances. Heat maps on this dashboard group the contents by adapter instance.

The RecoverPoint for VMs Overview dashboard displays heat maps for metrics with definitive measurements, which are assigned color ranges from lowest (green) to highest (red).

The threshold settings are as follows:

- RecoverPoint for VMs System | Number of RecoverPoint Clusters: (n/a)
- RecoverPoint for VMs System | Number of Splitters: Yellow: 24, Orange: 27, Red: 30
- RecoverPoint Cluster | Number of Consistency Groups: Yellow: 96, Orange: 109, Red: 122
- RecoverPoint Cluster | Number of Protected VMs: Yellow: 225, Orange: 255, Red: 285
- RecoverPoint Cluster | Number of vRPAs: Yellow: 8, Red: 1
- Consistency Group | Enabled: Orange: Disabled, Red: Unknown
- Splitter | Number of ESX Clusters Connected: (n/a)
- Splitter | Number of Volumes Attached: Yellow: 1536, Orange: 1741, Red: 1946
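A threshold list like the one above maps naturally to a small classifier. The sketch below is a hypothetical helper (not part of ESA) and covers only ascending numeric thresholds, so it does not model the inverted vRPA rule (Yellow: 8, Red: 1) or the non-numeric Consistency Group states:

```python
# Sketch: classify a metric value against ascending yellow/orange/red
# thresholds, as in the widget threshold settings listed above. Values
# below the yellow threshold show green.

def threshold_color(value, yellow, orange, red):
    if value >= red:
        return "red"
    if value >= orange:
        return "orange"
    if value >= yellow:
        return "yellow"
    return "green"

# Example: "Number of Splitters: Yellow: 24, Orange: 27, Red: 30"
color = threshold_color(25, yellow=24, orange=27, red=30)  # "yellow"
```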

RecoverPoint for VMs System

This widget displays the number of RecoverPoint splitters and RecoverPoint clusters. It also provides summary information about the systems and clusters.

RecoverPoint Cluster

This widget displays the following for each RecoverPoint system, including summary information:

- The number of consistency groups and the number of clusters
- The number of protected Virtual Machine Disks (VMDKs) and the number of protected user volumes
- The number of protected virtual machines for each RecoverPoint system
- The number of virtual RecoverPoint Appliances (vRPAs) for each cluster

Consistency Group

This widget displays all RecoverPoint for Virtual Machines consistency groups.

Splitter

This widget displays the following for each RecoverPoint system, including summary information:

- The number of vSphere ESX clusters connected to a given splitter
- The number of attached volumes

RecoverPoint for VMs Performance dashboard

The RecoverPoint for VMs Performance dashboard provides a single view of the most important performance metrics for the resources.

The Performance dashboard displays two types of heat maps:

- Metrics with definitive measurements such as CPU usage (0–100%) are assigned color ranges from lowest (green) to highest (red).
- Metrics with varied values that cannot be assigned a range show relative values from lowest (light blue) to highest (dark blue).

Link | Lag (%)

This widget shows the percent of the current lag for the link and for protection.

Consistency Group | Protection Window

Current Protection Window (Hrs) shows the earliest point in hours for which RecoverPoint can roll back the consistency group's replica copy. Current Protection Window Ratio shows the ratio of the current protection window compared with the required protection window for the consistency group.
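The ratio can be sketched as follows. The formula is an assumption based on the description above (current window divided by required window), not a documented ESA calculation:

```python
# Sketch (assumed formula): current protection window divided by the
# required window. A ratio of at least 1.0 means the consistency group
# can roll back as far as required.

def protection_window_ratio(current_hours, required_hours):
    if required_hours <= 0:
        raise ValueError("required protection window must be positive")
    return current_hours / required_hours
```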

vRPA | CPU Utilization (%)

This widget shows the percent utilization of virtual RecoverPoint Appliance (vRPA) CPUs.

Cluster

This widget shows the performance for incoming writes (IOPS and KB/s) to clusters.

Consistency Group

This widget shows the performance for incoming writes (IOPS and KB/s) to consistency groups.

vRPA

This widget shows the performance for incoming writes (IOPS and KB/s) to vRPAs.

Threshold settings are as follows:

- Link | Lag (%): Orange: 90, Red: 100
- vRPA | CPU Utilization (%): Yellow: 75, Orange: 85, Red: 95


Topology dashboards

The topology dashboards provide an entry point for viewing resources and relationships between storage and virtual infrastructure objects for supported adapter instances.

Depending on the EMC Adapter instance you configured, click the:

- Isilon Topology tab
- ScaleIO Topology tab
- VNX Topology tab
- VNXe Topology tab
- VMAX Topology tab
- VPLEX Topology tab
- XtremIO Topology tab

Details for every object in every widget are available by selecting the object and clicking the Resource Detail icon at the top of each widget.

The topology dashboards contain the following widgets:

Resource Tree

This widget shows the end-to-end topology and health of resources across vSphere and storage domains. You can configure the hierarchy that is shown by changing the widget settings; changing these settings does not alter the underlying object relationships in the database. Select any resource in this widget to view related resources in the stack.

Health Tree

The Health Tree widget provides a navigable visualization of resources that have parent or child relationships to the resource you select in the Resource Tree widget.

Single-click to select resources, or double-click to change the navigation focus.

Metric Sparklines

This widget shows sparklines for the metrics of the resource you select in the Resource Tree widget.


Metrics dashboards

The metrics dashboards display resources and metrics for storage systems and allow the user to view graphs of resource metrics.

Depending on the EMC Adapter instance you installed, click the:

- Isilon Metrics tab
- ScaleIO Metrics tab
- VNX Metrics tab
- VNXe Metrics tab
- VMAX Metrics tab
- XtremIO Metrics tab
- RecoverPoint for VMs Metrics tab

Widgets for the metrics dashboards are described next.

Resource Tree/Environment Overview

This widget shows the end-to-end topology and health of resources across vSphere and storage domains. You can configure the hierarchy that is shown by changing the widget settings; changing these settings does not alter the underlying object relationships in the database. Select any resource in this widget to view related resources in the stack.

Metric Selector/Metric Picker

This widget lists all the metrics that are collected for the resource you select in the Resource Tree/Environment Overview widget. Double-click a metric to create a graph of the metric in the Metric Graph/Metric Chart widget.

Metric Graph/Metric Chart

This widget graphs the metrics you select in the Metric Selector/Metric Picker widget. You can display multiple metrics simultaneously in a single graph or in multiple graphs.

Resource Events (VNX/VNXe only)

The resource event widget shows a graph that illustrates the health of the selected object over a period of time. Object events are labeled on the graph. When you hover over or click a label, event details appear in a message box:

Id: 460

Start Time: May 23, 2014 4:30:52 AM

Cancel Time: May 23, 2014 4:38:28 AM

Trigger: Notification

Resource: Pool 0 (Storage Pool)

Details: FAST VP relocation completed.

The message box includes the event ID, start time, cancel time, trigger, resource name, and event details.

To close the message box, click the X button at the top-right corner.


Top-N dashboards


Click a Top-N dashboard to view your top performing devices at a glance.

The Top-N dashboards are available for:

- Isilon
- VNX
- VNXe
- VMAX
- XtremIO
- RecoverPoint for Virtual Machines

Top performing devices are selected based on the current value of the associated metric that you configured for each widget. You can change the time period, and you can also change the number of objects in your top performer list.
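The selection described above follows a standard top-N pattern: sort by the configured metric's current value and keep the N highest. The sketch below is illustrative only (the device and metric names are invented), not ESA code:

```python
# Sketch: derive a Top-N list from current metric values.
# heapq.nlargest avoids sorting the entire collection.
import heapq

def top_n(devices, metric, n=10):
    """devices: dict of name -> {metric: value}.
    Returns [(value, name)] sorted from highest to lowest."""
    return heapq.nlargest(n, ((d[metric], name) for name, d in devices.items()))

# Example with hypothetical LUN read rates
luns = {
    "LUN_1": {"read_iops": 850.0},
    "LUN_2": {"read_iops": 1900.0},
    "LUN_3": {"read_iops": 120.0},
}
top = top_n(luns, "read_iops", n=2)  # [(1900.0, 'LUN_2'), (850.0, 'LUN_1')]
```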

Isilon

By default, a Top-N dashboard shows the top 10 devices in the following categories across your Isilon system:

- Top-10 Active Nodes (24h) by number of active clients
- Top-10 CPU % Usage
- Top-10 Disk Throughput Rate In by Write (MB/s)
- Top-10 Disk Throughput Rate Out by Read (MB/s)
- Top-10 Overall Cache Hit Rate (24 hr) (Bytes/s)
- Top-10 L1 Cache Hit Rate (24 hr) (MB/s)
- Top-10 L2 Cache Hit Rate (24 hr) (MB/s)
- Top-10 L3 Cache Hit Rate (24 hr) (MB/s)

VNX and VNXe

By default, a Top-N dashboard shows the top five devices in the following categories across your VNX or VNXe systems:

- Top-5 by Read (IOPS)
- Top-5 by Write (IOPS)
- Top-5 by Read (MB/s)
- Top-5 by Write (MB/s)
- Top-5 by Consumed Capacity

VMAX

By default, a Top-N dashboard shows the top 10 devices in the following categories across your VMAX system:

- Top-10 by Read (IOPS)
- Top-10 by Write (IOPS)
- Top-10 by Read (MB/s)
- Top-10 by Write (MB/s)
- Top-10 by Read Latency (ms)
- Top-10 by Write Latency (ms)


XtremIO

By default, a Top-N dashboard shows the top 10 devices in the following categories across your XtremIO system:

- Top-10 by Read (IOPS)
- Top-10 by Write (IOPS)
- Top-10 by Read Latency (usec)
- Top-10 by Write Latency (usec)
- Top-10 by Read Block Size (KB)
- Top-10 by Write Block Size (KB)
- Top-10 by Total Capacity (GB)

RecoverPoint for Virtual Machines

By default, a Top-N dashboard shows the top 10 devices in the following categories across your RecoverPoint for Virtual Machines systems:

- Top-10 vRPAs by Incoming Writes (IO/s) (24h)
- Top-10 vRPAs by Incoming Writes (KB/s) (24h)
- Top-10 Clusters by Incoming Writes (IO/s) (24h)
- Top-10 Clusters by Incoming Writes (KB/s) (24h)
- Top-10 Consistency Groups by Incoming Writes (IO/s) (24h)
- Top-10 Consistency Groups by Incoming Writes (KB/s) (24h)


Dashboard XChange

The Dashboard XChange is a community page where users exchange EMC Storage Analytics custom dashboards.

EMC Storage Analytics provides a set of default dashboards that provide you with a variety of functional views into your storage environment. EMC Storage Analytics also enables you to create custom dashboards to visualize collected data according to your own requirements. The Dashboard XChange is an extension of that feature that enables you to:

- Export custom dashboards to the Dashboard XChange to benefit a wider EMC Storage Analytics community
- Import custom dashboards from the Dashboard XChange to add value to your own environment

The Dashboard XChange, hosted on the EMC Community Network, will also host dashboards designed by EMC to showcase widget functions that may satisfy a particular use-case in your environment. You can import these dashboards into your existing environment to enhance the functionality offered by EMC Storage Analytics. You can also edit imported dashboards to meet the specific requirements of your own storage environment.

The Dashboard XChange provides these resources to assist you in creating custom dashboards:

- A how-to video that shows how to create custom dashboards
- A best practices guide that provides detailed guidelines for dashboard creation
- A slide show that demonstrates how to import dashboards from or export them to the Dashboard XChange

The EMC Storage Analytics Dashboard XChange is available at https://community.emc.com/community/products/storage-analytics. Note that there are XChange Zones for supported platforms.


CHAPTER 4

Resource Kinds and Metrics

This chapter contains the following topics:

- Isilon metrics
- ScaleIO metrics
- VNX Block metrics
- VNX File/eNAS metrics
- VMAX metrics
- VNXe metrics
- VPLEX metrics
- XtremIO metrics
- RecoverPoint for Virtual Machines metrics


Isilon metrics

EMC Storage Analytics provides metrics for these resource kinds:

Note

Only the resource kinds with associated metrics are shown. Performance metrics that cannot be calculated are not displayed.

- Cluster metrics
- Node metrics

Table 3 Cluster metrics

Metric group | Metric | Description
Summary | CPU % Use | Average CPU usage for all nodes in the monitored cluster
Summary | Number of Total Jobs | Total number of active and inactive jobs on the cluster
Summary | Number of Active Jobs | Total number of active jobs on the cluster
Capacity | Total Capacity (TB) | Total cluster capacity in terabytes
Capacity | Remaining Capacity (TB) | Total unused cluster capacity in terabytes
Capacity | Remaining Capacity (%) | Total unused cluster capacity in percent
Capacity | User Data Including Protection (TB) | Amount of storage capacity that is occupied by user data and protection for that user data
Capacity | Snapshots Usage (TB) | Amount of data occupied by snapshots on the cluster
Deduplication | Deduplicated Data > Physical (GB) | Amount of data that has been deduplicated on the physical cluster
Deduplication | Deduplicated Data > Logical (GB) | Amount of data that has been deduplicated on the logical cluster
Deduplication | Space Saved > Physical (GB) | Amount of physical space that deduplication has saved on the cluster
Deduplication | Space Saved > Logical (GB) | Amount of logical space that deduplication has saved on the cluster
Performance | Disk Operations Rate > Read Operations | Average rate at which the disks in the cluster are servicing data read change requests
Performance | Disk Operations Rate > Write Operations | Average rate at which the disks in the cluster are servicing data write change requests
Performance | Pending Disk Operations Latency (ms) | Average amount of time disk operations spend in the input/output scheduler
Performance | Disk Throughput Rate > Read Throughput (MB/s) | Total amount of data being read from the disks in the cluster
Performance | Disk Throughput Rate > Write Throughput (MB/s) | Total amount of data being written to the disks in the cluster
Cache | L1 Cache Hits (MB/s) | Amount of requested data that was available from the L1 cache
Cache | L2 Cache Hits (MB/s) | Amount of requested data that was available from the L2 cache
Cache | L3 Cache Hits (MB/s) | Amount of requested data that was available from the L3 cache
Cache | Overall Cache Hit Rate (MB/s) | Amount of data requests that returned hits
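The cluster Cache metrics report hit rates as throughput (MB/s) rather than as percentages. As a minimal sketch, assuming the three per-level hit rates and the total requested-data rate have already been collected (the function and parameter names are illustrative, not ESA or OneFS API calls), a percentage hit rate can be derived:

```python
def overall_cache_hit_percent(l1_hits_mbs, l2_hits_mbs, l3_hits_mbs, total_requested_mbs):
    """Express the combined L1/L2/L3 cache hit rate as a percentage of all requested data.

    Illustrative helper only; the inputs correspond to the MB/s counters in Table 3.
    """
    if total_requested_mbs <= 0:
        return 0.0
    hits = l1_hits_mbs + l2_hits_mbs + l3_hits_mbs
    # Clamp at 100% in case the counters were sampled at slightly different instants.
    return min(100.0, 100.0 * hits / total_requested_mbs)
```

Sampling skew between counters is why the result is clamped rather than trusted to stay below 100%.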

Table 4 Node metrics

Metric group | Metric | Description
Summary | CPU % Use | Average percentage of the total available node CPU capacity used for this node
Summary | Number of Active Clients | Number of unique client addresses generating protocol traffic on the monitored node
Summary | Number of Connected Clients | Number of unique client addresses with established TCP connections to the node
Summary | Number of Total Job Workers | Number of active and assigned workers on the node
Performance | Deadlock File System Event Rate | Number of file system deadlock events that the file system is processing per second
Performance | Locked File System Event Rate | Number of file lock operations occurring in the file system per second
Performance | Blocking File System Event Rate | Number of file blocking events occurring in the file system per second
Performance | Average Operations Size (MB) | Average size of the operations or transfers that the disks in the node are servicing
Performance | Contended File System Event Rate | Number of file contention events, such as lock contention or read/write contention, occurring in the file system per second
Performance | File System Event Rate | Number of file system events, or operations (such as read, write, lookup, or rename), that the file system is servicing per second
Performance | Disk Operations Rate > Read Operations | Average rate at which the disks in the node are servicing data read requests
Performance | Disk Operations Rate > Write Operations | Average rate at which the disks in the node are servicing data write requests
Performance | Average Pending Disk Operations Count | Average number of operations or transfers that are in the processing queue for each disk in the node
Performance | Disk Throughput Rate > Read Operations | Total amount of data being read from the disks in the node
Performance | Disk Throughput Rate > Write Operations | Total amount of data being written to the disks in the node
Performance | Pending Disk Operation Latency (ms) | Average amount of time that disk operations spend in the input/output scheduler
Performance | Disk Activity (%) | Average percentage of time that disks in the node spend performing operations instead of sitting idle
Performance | Protocol Operations Rate | Total number of requests that were originated by clients for all file data access protocols
Performance | Slow Disk Access Rate | Rate at which slow (long-latency) disk operations occur
External Network | External Network Errors > In | Number of incoming errors generated for the external network interfaces
External Network | External Network Errors > Out | Number of outgoing errors generated for the external network interfaces
External Network | External Network Packets Rate > In | Total number of packets that came in through the external network interfaces in the monitored node
External Network | External Network Packets Rate > Out | Total number of packets that went out through the external network interfaces in the monitored node
External Network | External Network Throughput Rate > In (MB/s) | Total amount of data that came in through the external network interfaces in the monitored node
External Network | External Network Throughput Rate > Out (MB/s) | Total amount of data that went out through the external network interfaces in the monitored node
Cache | Average Cache Data Age | Average amount of time data has been in the cache
Cache | L1 Data Prefetch Starts (Bytes/s) | Amount of data that was requested from the L1 prefetch
Cache | L1 Data Prefetch Hits (Bytes/s) | Amount of requested data that was available in the L1 prefetch
Cache | L1 Data Prefetch Misses (Bytes/s) | Amount of requested data that did not exist in the L1 prefetch
Cache | L1 Cache Starts (Bytes/s) | Amount of data that was requested from the L1 cache
Cache | L1 Cache Hits (Bytes/s) | Amount of requested data that was available in the L1 cache
Cache | L1 Cache Misses (Bytes/s) | Amount of requested data that did not exist in the L1 cache
Cache | L1 Cache Waits (Bytes/s) | Amount of requested data that existed in the L1 cache but was not available because the data was in use
Cache | L2 Data Prefetch Starts (Bytes/s) | Amount of data that was requested from the L2 prefetch
Cache | L2 Data Prefetch Hits (Bytes/s) | Amount of requested data that was available in the L2 prefetch
Cache | L2 Data Prefetch Misses (Bytes/s) | Amount of requested data that did not exist in the L2 prefetch
Cache | L2 Cache Starts (Bytes/s) | Amount of data that was requested from the L2 cache
Cache | L2 Cache Hits (Bytes/s) | Amount of requested data that was available in the L2 cache
Cache | L2 Cache Misses (Bytes/s) | Amount of requested data that did not exist in the L2 cache
Cache | L2 Cache Waits (Bytes/s) | Amount of requested data that existed in the L2 cache but was not available because the data was in use
Cache | L3 Cache Starts (Bytes/s) | Amount of data that was requested from the L3 cache
Cache | L3 Cache Hits (Bytes/s) | Amount of requested data that was available in the L3 cache
Cache | L3 Cache Misses (Bytes/s) | Amount of requested data that did not exist in the L3 cache
Cache | L3 Cache Waits (Bytes/s) | Amount of requested data that existed in the L3 cache but was not available because the data was in use
Cache | Overall Cache Hit Rate (Bytes/s) | Amount of data requests that returned hits
Cache | Overall Cache Throughput Rate (Bytes/s) | Amount of data that was requested from cache
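For each cache level in the node table, Starts is the data requested from that level and Hits is the portion it could serve, so a per-level hit ratio falls out directly. A minimal sketch with illustrative names (not ESA API calls):

```python
def cache_level_hit_ratio(starts_bytes_per_s, hits_bytes_per_s):
    """Fraction of data requested from one cache level (Starts) that it served (Hits).

    Corresponds to dividing an Ln Cache Hits counter by the matching Ln Cache Starts counter.
    """
    if starts_bytes_per_s <= 0:
        return 0.0
    return hits_bytes_per_s / starts_bytes_per_s
```

The same division works for the prefetch counters (Data Prefetch Hits over Data Prefetch Starts).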


ScaleIO metrics

EMC Storage Analytics provides metrics for these resource kinds:

Note

Only the resource kinds with associated metrics are shown. Most performance metrics with values of zero are not displayed.

- System metrics on page 86
- Protection Domain metrics on page 87
- Device metrics on page 87
- SDS metrics on page 88
- Storage pool metrics on page 89
- Snapshot metrics on page 90
- MDM cluster metrics on page 91
- MDM metrics on page 91
- SDC metrics on page 91
- SCSI initiator metrics
- Fault Set metrics on page 91
- Volume metrics on page 91

Table 5 System metrics

Metric | Unit | Description
Maximum Capacity | GB | Max (changed from total) capacity for ScaleIO cluster
Used Capacity | GB | Total used capacity of ScaleIO cluster
Spare Capacity Allocated | GB | Total spare capacity allocated for the cluster and used by the ScaleIO system in case of failure to rebuild the data
Thin Used Capacity | GB | Thin used capacity for ScaleIO System
Thick Used Capacity | GB | Thick used capacity for ScaleIO System
Protected Capacity | GB | Protected capacity for ScaleIO System
Snap Used Capacity | GB | Snap used capacity for ScaleIO System
Unused Capacity | GB | Available capacity for ScaleIO System
Used Capacity | % | Total used capacity of ScaleIO cluster
Thin Used Capacity | % | Thin used capacity for ScaleIO System
Thick Used Capacity | % | Thick used capacity for ScaleIO System
Protected Capacity | % | Protected capacity for ScaleIO System
Snap Used Capacity | % | Snap used capacity for ScaleIO System
Total Reads | MB/s | Number of read operations performed each second on all ScaleIO volumes at ScaleIO cluster level
Total Writes | MB/s | Number of write operations performed each second on all ScaleIO volumes at ScaleIO cluster level
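The GB-based system metrics are related by simple accounting, and the %-based metrics express the same values against Maximum Capacity. A hedged sketch of that arithmetic (names are illustrative, and it assumes Unused Capacity is simply Maximum minus Used):

```python
def capacity_summary(maximum_gb, used_gb):
    """Derive Unused Capacity (GB) and Used Capacity (%) from the GB counters.

    Illustrative only; spare and protection reservations are ignored in this sketch.
    """
    unused_gb = maximum_gb - used_gb
    used_pct = 100.0 * used_gb / maximum_gb if maximum_gb else 0.0
    return unused_gb, used_pct
```

For example, a 2000 GB system with 500 GB used leaves 1500 GB unused at 25% utilization.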

Table 6 Protection Domain metrics

Metric | Unit | Description
Maximum Capacity | GB | Max (changed from total) capacity for Protection Domain
Used Capacity | GB | Total Used capacity of Protection Domain
Thin Used Capacity | GB | Thin Used capacity for Protection Domain
Thick Used Capacity | GB | Thick Used capacity of Protection Domain
Protected Capacity | GB | Protected capacity available in Protection Domain
Snap Used Capacity | GB | Snap Used capacity of Protection Domain
Unused Capacity | GB | Available capacity of Protection Domain
Used Capacity | % | Total Used capacity of Protection Domain
Thin Used Capacity | % | Thin Used capacity for Protection Domain
Thick Used Capacity | % | Thick Used capacity of Protection Domain
Protected Capacity | % | Protected capacity available in Protection Domain
Snap Used Capacity | % | Snap Used capacity of Protection Domain
Total Reads | MB/s | Number of read operations performed each second on all ScaleIO volumes at Protection Domain level
Total Writes | MB/s | Number of write operations performed each second on all ScaleIO volumes at Protection Domain level
Total Reads | IO/s | Number of read operations performed each second by the Protection Domain
Total Writes | IO/s | Number of write operations performed each second by the Protection Domain
Total Reads | MB/s | Cumulative number of megabytes read per second by the Protection Domain
Total Writes | MB/s | Cumulative number of megabytes written per second by the Protection Domain
Average Read IO Size | MB | Average size in megabytes per IO read by the Protection Domain
Average Write IO Size | MB | Average size in megabytes per IO written by the Protection Domain

Table 7 Device metrics

Metric | Unit | Description
Maximum Capacity | GB | Max (changed from total) capacity for Disk
Used Capacity | GB | Total Used capacity of Disk
Spare Capacity Allocated | GB | Total Spare capacity allocated for the cluster and used by the ScaleIO system in case of failure to rebuild the data
Thin Used Capacity | GB | Thin Used capacity for Disk
Thick Used Capacity | GB | Thick Used capacity of Disk
Protected Capacity | GB | Protected capacity for ScaleIO System
Snap Used Capacity | GB | Snap used capacity for ScaleIO System
Unused Capacity | GB | Available capacity for ScaleIO System
Used Capacity | % | Total Used capacity of Disk
Thin Used Capacity | % | Thin Used capacity for Disk
Thick Used Capacity | % | Thick Used capacity of Disk
Protected Capacity | % | Protected capacity for ScaleIO System
Snap Used Capacity | % | Snap used capacity for ScaleIO System
Total Reads | MB/s | Number of forward rebuild read operations performed each second on all ScaleIO volumes at cluster level
Total Writes | MB/s | Number of forward rebuild write operations performed each second on all ScaleIO volumes at cluster level
Total Reads | IO/s | Number of read operations performed each second by the device
Total Writes | IO/s | Number of write operations performed each second by the device
Total Reads | MB/s | Cumulative number of megabytes read per second by the device
Total Writes | MB/s | Cumulative number of megabytes written per second by the device
Average Read IO Size | MB | Average size in megabytes per IO read by the device
Average Write IO Size | MB | Average size in megabytes per IO written by the device

Table 8 ScaleIO Data Server metrics

Metric | Unit | Description
Maximum Capacity | GB | Max (changed from total) capacity for ScaleIO Data Server
Used Capacity | GB | Total Used capacity of ScaleIO Data Server
Spare Capacity Allocated | GB | Total Spare capacity allocated for the cluster and used by the ScaleIO system in case of failure to rebuild the data
Thin Used Capacity | GB | Thin Used capacity for ScaleIO Data Server
Thick Used Capacity | GB | Thick Used capacity of ScaleIO Data Server
Protected Capacity | GB | Protected capacity of ScaleIO Data Server
Snap Used Capacity | GB | Snap Used capacity of ScaleIO Data Server
Unused Capacity | GB | Available capacity of ScaleIO Data Server
Used Capacity | % | Total Used capacity of ScaleIO Data Server
Thin Used Capacity | % | Thin Used capacity for ScaleIO Data Server
Thick Used Capacity | % | Thick Used capacity of ScaleIO Data Server
Protected Capacity | % | Protected capacity of ScaleIO Data Server
Snap Used Capacity | % | Snap Used capacity of ScaleIO Data Server
Total Reads | MB/s | Number of read operations performed each second on all ScaleIO volumes at ScaleIO Data Server level
Total Writes | MB/s | Number of write operations performed each second on all ScaleIO volumes at ScaleIO Data Server level
Total Reads | IO/s | Number of read operations performed each second by the ScaleIO Data Server
Total Writes | IO/s | Number of write operations performed each second by the ScaleIO Data Server
Total Reads | MB/s | Cumulative number of megabytes read per second by the ScaleIO Data Server
Total Writes | MB/s | Cumulative number of megabytes written per second by the ScaleIO Data Server
Average Read IO Size | MB | Average size in megabytes per IO read by the ScaleIO Data Server
Average Write IO Size | MB | Average size in megabytes per IO written by the ScaleIO Data Server

Table 9 Storage pool metrics

Metric | Unit | Description
Maximum Capacity | GB | Max (changed from total) capacity for storage pool
Used Capacity | GB | Total Used capacity of storage pool
Unused Capacity | GB | Free capacity available in storage pool
Capacity Available for Volume Creation | GB | Capacity available (changed from used) for volume creation and mapping from the storage pool
Thin Used Capacity | GB | Thin Used capacity for storage pool
Thick Used Capacity | GB | Thick Used capacity of storage pool
Protected Capacity | GB | Protected capacity available in storage pool
Snapshot Used Capacity | GB | Snap Used capacity of storage pool
Total Capacity Allocated | GB | Total enabled capacity of storage pool
Used Capacity | % | Total Used capacity of storage pool
Thin Used Capacity | % | Thin Used capacity for storage pool
Thick Used Capacity | % | Thick Used capacity of storage pool
Protected Capacity | % | Protected capacity available in storage pool
Snap Used Capacity | % | Snap Used capacity of storage pool
Total Reads | IO/s | Number of read operations performed each second by the storage pool
Total Writes | IO/s | Number of write operations performed each second by the storage pool
Total Reads | MB/s | Cumulative number of megabytes read per second by the storage pool
Total Writes | MB/s | Cumulative number of megabytes written per second by the storage pool
Average Read IO Size | MB | Average size in megabytes per IO read by the storage pool
Average Write IO Size | MB | Average size in megabytes per IO written by the storage pool
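Across the ScaleIO tables, the Average Read/Write IO Size metrics are consistent with dividing the MB/s throughput counter by the matching IO/s operation counter. A small sketch of that relationship (function and parameter names are illustrative):

```python
def average_io_size_mb(throughput_mb_per_s, ops_per_s):
    """Average IO size (MB) as throughput (MB/s) divided by operation rate (IO/s)."""
    if ops_per_s <= 0:
        return 0.0
    return throughput_mb_per_s / ops_per_s
```

So a pool sustaining 64 MB/s of reads at 128 read IO/s is averaging 0.5 MB per read.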

Table 10 Snapshot metrics

Metric | Unit | Description
Size | GB | Snapshot size in GB
Total Read I/Os | IO/s | Number of read operations performed each second by the Snapshot
Total Write I/Os | IO/s | Number of write operations performed each second by the Snapshot
Total Reads | MB/s | Cumulative number of megabytes read per second by the Snapshot
Total Writes | MB/s | Cumulative number of megabytes written per second by the Snapshot
Average Read IO Size | MB | Average size in megabytes per IO read by the Snapshot
Average Write IO Size | MB | Average size in megabytes per IO written by the Snapshot

Table 11 MDM cluster metrics

Metric | Unit | Description
MDM Mode | String | Single or cluster
State | String | NotClustered, ClusteredNormal, ClusteredDegraded, ClusteredTiebreakerDown, or ClusteredDegradedTiebreakerDown

Table 12 MDM metrics

Metric | Unit | Description
Name | String | Primary, Secondary, Tie Breaker, Management

Table 13 SDC metrics

Metric | Unit | Description
IP | String | IP address of the SDC client
Total Reads | IO/s | Number of read operations performed each second by the SDC
Total Writes | IO/s | Number of write operations performed each second by the SDC
Total Reads | MB/s | Cumulative number of megabytes read per second by the SDC
Total Writes | MB/s | Cumulative number of megabytes written per second by the SDC
Average Read IO Size | MB | Average size in megabytes per IO read by the SDC
Average Write IO Size | MB | Average size in megabytes per IO written by the SDC

Table 14 Fault Set metrics

Metric | Unit | Description
Name | String | Name of the Fault Set

Table 15 Volume metrics

Metric | Unit | Description
Total Reads | IO/s | Number of read operations performed each second by the volume
Total Writes | IO/s | Number of write operations performed each second by the volume
Total Reads | MB/s | Cumulative number of megabytes read per second by the volume
Total Writes | MB/s | Cumulative number of megabytes written per second by the volume
Average Read IO Size | MB | Average size in megabytes per IO read by the volume
Average Write IO Size | MB | Average size in megabytes per IO written by the volume
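When alerting on the MDM cluster, the State string from Table 11 is what distinguishes a healthy cluster from a degraded one. A possible severity mapping — the mapping itself is an assumption for illustration; only the state names come from the table:

```python
# Assumed severity levels for each documented MDM cluster state (not ESA behavior).
MDM_STATE_SEVERITY = {
    "ClusteredNormal": "ok",
    "NotClustered": "warning",
    "ClusteredDegraded": "warning",
    "ClusteredTiebreakerDown": "warning",
    "ClusteredDegradedTiebreakerDown": "critical",
}

def mdm_state_severity(state):
    """Map an MDM cluster State string to an illustrative severity level."""
    return MDM_STATE_SEVERITY.get(state, "unknown")
```

A degraded cluster that has also lost its tiebreaker is the worst documented state, so it maps highest here.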


VNX Block metrics

EMC Storage Analytics provides metrics for these resource kinds:

- EMC Adapter Instance on page 93 (array)
- Disk on page 93
- FAST Cache on page 94
- Pool LUN on page 95
- RAID Group on page 96
- RAID Group LUN on page 97
- SP Front-end Port on page 98
- Storage Pool on page 98
- Storage Processor on page 100
- Tier on page 101

Table 16 VNX Block metrics for Array

Metric | Additional information
Elapsed collect time (ms) | Time elapsed during the collection
New metrics in each collect call | Number of new metrics per collection
New resources in each collect call | Number of new resources per collection
Number of down resources | Number of down resources for this adapter instance
Number of metrics collected | Number of metrics collected by this adapter instance
Number of resources collected | Number of resources collected by this adapter instance

Table 17 VNX Block metrics for Disk

Metric | Additional information
Busy (%) | Percentage of time during which the disk is servicing any requests
Capacity (GB) | Total capacity of the disk
Hard Read Errors (Count) | Number of hard read errors
Hard Write Errors (Count) | Number of hard write errors
LUN Count | Total number of LUNs that the disk is serving
Queue Length | Average number of requests within a polling interval that are waiting to be serviced by the disk, including the one currently in service
Read Size (MB) | Average size of data read (appears in the Performance metric group)
Reads (IOPS) | Average number of read requests from the disk per second
Reads (MB/s) | Average amount of data read from the disk in megabytes per second
Total Latency (ms) | Average time, in milliseconds, that it takes for one request to pass through the disk, including any waiting time
Total Operations (IOPS) | Total number of read and write requests per second that pass through the disk
Total Bandwidth (MB/s) | Total amount of host read and write data per second that pass through the disk
Write Size (MB) | Average size of data written (appears in the Performance metric group)
Writes (IOPS) | Average number of write requests to the disk per second
Writes (MB/s) | Average amount of data written to the disk in megabytes per second
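Queue Length and Total Latency for a disk are linked by Little's law: the average number of outstanding requests is approximately the request rate multiplied by the time each request spends in the system. A sketch of that estimate (illustrative names; latency is assumed to be in milliseconds, as in the table):

```python
def estimated_queue_length(total_ops_iops, total_latency_ms):
    """Little's law estimate: outstanding requests ~= arrival rate * time in system.

    total_ops_iops corresponds to Total Operations (IOPS) and
    total_latency_ms to Total Latency (ms) from Table 17.
    """
    return total_ops_iops * (total_latency_ms / 1000.0)
```

For example, a disk handling 4 IOPS at 500 ms average latency carries about 2 outstanding requests, which should roughly agree with the reported Queue Length.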

Table 18 VNX Block metrics for FAST Cache

Metric | Additional information
Current Operation | Creating or Destroying
Current Operation Status | If there is a current FAST Cache operation in progress, such as destroying or creating, displays its status
Current Operation Complete (%) | If there is a current FAST Cache operation in progress, such as destroying or creating, displays the percentage complete
Dirty (%) | Percentage of write cache pages owned by the SP that contain data that has not yet been flushed out to the FAST Cache (appears in the Performance > SPA and Performance > SPB metric groups)
Flushed (MB) | Average amount of data in megabytes that was written from the write cache to the FAST Cache (appears in the Performance > SPA and Performance > SPB metric groups)
Mode | Read/Write
RAID Type | RAID type of FAST Cache
Read Cache Hit Ratio (%) | Ratio of read requests that the FAST Cache satisfied without requiring any disk access versus the total number of read requests. The higher the ratio, the better the read performance.
Read Cache Hits (Hits/s) | Average number of read requests per second that were satisfied by the FAST Cache without requiring any disk access. Read requests that are not FAST Cache hits are read misses.
Read Cache Misses (Misses/s) | Average number of read requests per second that required one or multiple disk accesses
Size (GB) | Capacity of FAST Cache
Write Cache Hit Ratio (%) | Ratio of write requests that the FAST Cache satisfied without requiring any disk access versus the total number of write requests. The higher the ratio, the better the write performance.
Write Cache Hits (Hits/s) | Average number of write requests per second that were satisfied by the FAST Cache without requiring any disk access. Write requests that are not FAST Cache hits are write misses.
Write Cache Misses (Misses/s) | Average number of write requests per second that required one or multiple disk accesses
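Because every request is either a hit or a miss, the Read/Write Cache Hit Ratio metrics can be cross-checked from the corresponding Hits and Misses rates. A minimal sketch (illustrative names, not ESA API calls):

```python
def cache_hit_ratio_percent(hits_per_s, misses_per_s):
    """Hit ratio (%) reconstructed from per-second hit and miss rates.

    Works for both the read and write FAST Cache counters in Table 18.
    """
    total = hits_per_s + misses_per_s
    if total <= 0:
        return 0.0
    return 100.0 * hits_per_s / total
```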

Table 19 VNX Block metrics for Pool LUN

Metric | Additional information
Average Busy Queue Length | Average number of outstanding requests when the LUN was busy (appears in the Performance metric group)
Busy (%) | Fraction of an observation period during which a LUN has any outstanding requests (appears in the Performance > SPA and Performance > SPB metric groups). When the LUN becomes the bottleneck, the utilization is at or near 100%. However, since the I/Os can be serviced by multiple disks, an increase in workload may still result in a higher throughput.
Capacity Tier Distribution (%) | Distribution (%) of the Capacity Tier
Consumed Capacity (GB) | Amount of space consumed in the pool by the LUN plus overhead
Explicit trespasses (Count) | Number of trespasses since the last poll (appears in the Performance > SPA and Performance > SPB metric groups). Default polling cycle is five minutes. Occurs as a result of an external command from a user or the failover software. When an SP receives this command, LUN ownership is transferred to that SP.
Extreme Performance Tier Distribution (%) | Distribution (%) of the Extreme Performance Tier
Implicit trespasses (Count) | Number of trespasses since the last poll (appears in the Performance > SPA and Performance > SPB metric groups). Default polling cycle is five minutes. Occurs as a result of software controls within the storage system. An implicit trespass occurs when the amount of I/O transferred across the non-optimal path exceeds the optimal path I/O by a specified threshold.
Initial Tier | Initial tier that was used for initial placement of the new LUN
Performance Tier Distribution (%) | Distribution (%) of the Performance Tier
Queue Length | Length of the LUN queue
Read Cache State | Enabled or disabled state of the read cache
Read Size (MB) | Average size of data read (appears in the Performance metric group)
Reads (IOPS) | Average number of host read requests that is passed through the LUN per second. Smaller requests usually result in a higher read throughput than larger requests.
Reads (MB/s) | Average amount of host read data in megabytes that is passed through the LUN per second. Larger requests usually result in a higher bandwidth than smaller requests.
Service Time (ms) | Average service time for successful completion of I/O without retries and queuing delays (appears in the Performance metric group)
Tiering Policy | Tiering policy of this Pool LUN
Total Latency (ms) | Average time, in milliseconds, that a request to this LUN is outstanding, including its waiting time
Total Operations (IOPS) | Total number of read and write requests per second that pass through the LUN
Total Bandwidth (MB/s) | Total amount of host read and write data per second that pass through the LUN
User Capacity (GB) | Amount of space consumed in the pool by the LUN
Write Cache State | Enabled or disabled state of the write cache
Write Size (MB) | Average size of data written (appears in the Performance metric group)
Writes (IOPS) | Average number of host write requests that is passed through the LUN per second. Smaller requests usually result in a higher write throughput than larger requests.
Writes (MB/s) | Average amount of host write data in megabytes that is passed through the LUN per second. Larger requests usually result in higher bandwidth than smaller requests.
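For a FAST VP pool LUN, the three tier-distribution percentages describe where the LUN's slices currently live and should sum to roughly 100%. A sanity-check sketch (the tolerance value and all names are assumptions for illustration):

```python
def tier_distribution_ok(extreme_pct, performance_pct, capacity_pct, tolerance_pct=1.0):
    """Check that the three tier-distribution percentages sum to ~100%.

    A larger deviation usually indicates counters sampled mid-relocation
    or a collection problem; the 1% tolerance is an assumed default.
    """
    total = extreme_pct + performance_pct + capacity_pct
    return abs(total - 100.0) <= tolerance_pct
```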

Table 20 VNX Block metrics for RAID Group

Metric | Additional information
Available Capacity (GB) | Remaining free capacity of this RAID Group
Defragmented (%) | When a defragment operation is in progress, this displays the percentage complete
Disk Count | Number of disks in this RAID Group
Free Contiguous Group of Unbound Segments (GB) | Size in GB of the largest contiguous span of free space in the RAID Group. LUNs must fit into a contiguous span of free space.
Full (%) | Percentage of total capacity that is consumed
LUN Count | Number of LUNs in this RAID Group
Max Disks | Maximum number of disks allowed for this RAID Group
Max LUNs | Maximum number of LUNs allowed for this RAID Group
Raw Capacity (GB) | Total amount of space available in the RAID Group prior to RAID protection
User Capacity (GB) | Amount of space available in the RAID Group

Table 21 VNX Block metrics for RAID Group LUN

Metric | Additional information
Average Busy Queue Length | Average number of outstanding requests when the LUN was busy (appears in the Performance metric group)
Busy (%) | Fraction of an observation period during which a LUN has any outstanding requests (appears in the Performance > SPA and Performance > SPB metric groups). When the LUN becomes the bottleneck, the utilization is at or near 100%. However, since the I/Os can be serviced by multiple disks, an increase in workload may still result in a higher throughput.
Queue Length | Length of the LUN queue
Read Cache State | Enabled or disabled state of the read cache
Read Size (MB) | Average size of data read (appears in the Performance metric group)
Reads (IOPS) | Average number of host read requests that is passed through the LUN per second. Smaller requests usually result in a higher read throughput than larger requests.
Reads (MB/s) | Average amount of host read data in megabytes that is passed through the LUN per second. Larger requests usually result in a higher bandwidth than smaller requests.
Service Time (ms) | Average service time for successful completion of I/O without retries and queuing delays (appears in the Performance metric group)
Total Latency (ms) | Average time, in milliseconds, that it takes for one request to pass through the LUN, including any waiting time. The higher the queue length for a LUN, the more requests are waiting in its queue, thus increasing the average latency of a single request. For a given workload, queue length and response time are directly proportional.
Total Operations (IOPS) | Total number of read and write requests per second that pass through the LUN
Total Bandwidth (MB/s) | Total amount of host read and write data per second that pass through the LUN
User Capacity (GB) | Amount of space available in the RAID Group LUN
Write Cache State | Enabled or disabled state of the write cache
Write Size (MB) | Average size of data written (appears in the Performance metric group)
Writes (IOPS) | Average number of host write requests that is passed through the LUN per second
Writes (MB/s) | Average amount of host write data in megabytes that is passed through the LUN per second
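The Total Latency description notes that, for a given workload, queue length and response time are directly proportional. A worked sketch of that proportionality (illustrative names; the result is returned in milliseconds to match the metric's unit):

```python
def expected_latency_ms(queue_length, total_ops_iops):
    """Expected per-request latency (ms) from queue length and throughput.

    Rearranges Little's law: time in system ~= outstanding requests / arrival rate.
    """
    if total_ops_iops <= 0:
        return float("inf")
    return 1000.0 * queue_length / total_ops_iops
```

Doubling the queue at the same throughput doubles the expected latency, which is the proportionality the table describes.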

Table 22 VNX Block metrics for SP Front-end Port

Metric | Additional information
Queue Full Count | Number of times that a front-end port issued a queue full response to the hosts
Reads (IOPS) | Average number of read requests per second that pass through the SP front-end port
Reads (MB/s) | Average amount of data read from the disk in megabytes per second
Total Operations (IOPS) | Total number of read and write requests per second that pass through the SP front-end port
Total Bandwidth (MB/s) | Total amount of host read and write data per second that pass through the SP front-end port
Writes (IOPS) | Average number of write requests per second that pass through the SP front-end port
Writes (MB/s) | Average amount of data written to the disk in megabytes per second

Table 23 VNX Block metrics for Storage Pool

Metric Additional information

Available Capacity (GB)

Auto-Tiering

Auto-Tiering State

Consumed Capacity (GB)

Capacity available for use in this Storage Pool

Shows if auto-tiering is scheduled

Enabled or disabled state of auto-tiering

Capacity used in this Storage Pool

Current Operation Displays the current operation in the pool

Current Operation Complete (%) If there is a thin pool operation in progress such as a rebalance, displays the percentage complete

Current Operation State Displays the current operation state

EMC Storage Analytics 3.4 Installation and User Guide

Resource Kinds and Metrics

Table 23 VNX Block metrics for Storage Pool (continued)

Current Operation Status: Displays additional descriptive information for the current state of the thin pool

Data Movement Completed (GB): Amount of data that has been moved up or down

Data to Move Down (GB): Amount of data that is going to be moved down

Data to Move Up (GB): Amount of data that is going to be moved up

Data to Move Within (GB): Amount of data to move within tiers

Deduplicated LUNs Shared Capacity (GBs): Shared capacity of deduplicated LUNs

Deduplication and Snapshot Savings (GBs): Capacity savings through deduplication and Snapshots

Deduplication Rate: Rate of deduplication

Deduplication State: The deduplication state can take any of these values:
  - Idle (No deduplicated LUNs)
  - Idle (No deduplicated LUNs) - Faulted
  - Pending
  - Pending - Faulted
  - Running (% complete, GB remaining)
  - Running - Faulted (% complete, GB remaining)
  - Paused
  - Paused - Faulted

Disk Count: Number of disks consumed by this Storage Pool

Disk Type: Type of disks in this Storage Pool

Estimated Time to Complete: Estimated time to complete the data relocation

FAST Cache: Enabled or disabled state of the FAST Cache for this Storage Pool

Full (%): Percentage of total capacity that is consumed

Initial Tier: Initial tier can be any of the values available for Tiering Policy (below)

LUN Count: Number of LUNs hosted by this Storage Pool

Oversubscribed (GB): How much the Storage Pool is oversubscribed

Relocation Rate: Rate at which relocation occurs

Relocation Start Time: Start time for the relocation

Relocation Status: Relocation is active or inactive

Relocation Stop Time: Stop time for the relocation

Relocation Type: Scheduled or manual relocation

Schedule Duration Remaining: If using scheduled relocation, displays the remaining time for the relocation

Subscribed (%): Percentage of total capacity that is subscribed

Threshold (%): Threshold as percentage of total capacity

Tiering Policy: With FAST VP enabled, the tiering policy can take any of these values:
  - Start High then Auto-Tier (recommended)
  - Auto Tier
  - Highest Available Tier
  - Lowest Available Tier
  - No Data Movement
  With FAST VP disabled, the tiering policy can be:
  - Optimize for Pool Performance (default)
  - Highest Available Tier
  - Lowest Available Tier
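The capacity metrics in Table 23 are related by simple arithmetic. The sketch below illustrates those relationships; the formulas are assumptions based on the metric definitions above, the function name and sample values are hypothetical, and this is not code from EMC Storage Analytics.

```python
# Illustrative only: how Full (%), Subscribed (%), and Oversubscribed (GB)
# relate to a pool's capacities (assumed from the Table 23 definitions).

def pool_capacity_metrics(user_capacity_gb, consumed_gb, subscribed_gb):
    """Derive the percentage metrics for a storage pool."""
    full_pct = 100.0 * consumed_gb / user_capacity_gb
    subscribed_pct = 100.0 * subscribed_gb / user_capacity_gb
    # With thin LUNs, subscribed capacity can exceed physical capacity.
    oversubscribed_gb = max(0.0, subscribed_gb - user_capacity_gb)
    return {
        "Full (%)": round(full_pct, 1),
        "Subscribed (%)": round(subscribed_pct, 1),
        "Oversubscribed (GB)": round(oversubscribed_gb, 1),
    }

# A 1000 GB pool, 650 GB consumed, 1200 GB subscribed by thin LUNs:
print(pool_capacity_metrics(1000, 650, 1200))
```

A Subscribed (%) above 100 is the usual cue to watch Oversubscribed (GB) against the pool's Threshold (%).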

Table 24 VNX Block metrics for Storage Processor

Busy (%): Percentage of time during which the SP is serving requests. When the SP becomes the bottleneck, the utilization will be at or close to 100%. An increase in workload will have no further impact on the SP throughput, but the I/O response time will start increasing more aggressively.

Dirty Cache Pages (%): Amount of dirty cache pages by percentage. This metric is for 1st generation VNX models.

Dirty Cache Pages (MB): Amount of dirty cache pages in megabytes. This metric is for 2nd generation VNX models.

Read Cache Hit Ratio (%): Ratio of read requests that the SP Cache satisfied without requiring any disk access versus the total number of read requests

Read Cache Size (MB): Size of the read cache in megabytes. This metric is only for 1st generation VNX models.

Read Cache State: Enabled or disabled state of the read cache

Read Size (MB): Average size of data read (appears in the Disk metric group)

Reads (IOPS): Average number of host read requests that are passed through the SP per second. Smaller requests usually result in a higher read throughput than larger requests.


Reads (MB/s): Average amount of host read data in megabytes that is passed through the SP per second. Larger requests usually result in a higher bandwidth than smaller requests.

Total Operations (IOPS): Total number of read and write requests per second at the time when the SP is polled

Total Bandwidth (MB/s): Total amount of read and write data in megabytes that passes through the SP per second

Write Cache Hit Ratio (%): Ratio of write requests that the SP Cache satisfied without requiring any disk access versus the total number of write requests

Write Cache Flushes (MB/s): Average amount of data in megabytes that was written from the write cache to the disks per second. The value is a measure of back-end activity.

Write Cache Size (MB): Size of the write cache in megabytes. This metric is only for 1st generation VNX models.

Write Cache State: Enabled or disabled state of the write cache

Write Size (MB): Average size of data written (appears in the Disk metric group)

Writes (IOPS): Number of writes per second at the time when the SP is polled. Smaller requests usually result in a higher write throughput than larger requests.

Writes (MB/s): Average amount of host write data in megabytes that passes through the SP per second. Larger requests usually result in higher bandwidth than smaller requests.
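The cache hit-ratio metrics in Table 24 are plain percentages of requests served from SP cache. A minimal sketch with hypothetical sample counts (the function is an illustration, not part of the product):

```python
def cache_hit_ratio_pct(hits, total_requests):
    """Read/Write Cache Hit Ratio (%): requests satisfied from SP cache
    without disk access, as a percentage of all such requests."""
    if total_requests == 0:
        return 0.0  # no requests polled in the interval
    return 100.0 * hits / total_requests

# 4500 of 5000 reads served from cache:
print(cache_hit_ratio_pct(4500, 5000))
```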

Table 25 VNX Block metrics for Tier

Available Capacity (GB): Capacity still available for use

Consumed Capacity (GB): Used capacity

Disk Count: Number of disks in the tier

Higher Tier (GB): Amount of data targeted for higher tiers

Lower Tier (GB): Amount of data targeted for lower tiers

RAID Type: Type of RAID applied to the tier

Subscribed (%): Percentage of tier that is subscribed

User Capacity (GB): Free capacity for users


VNX File/eNAS metrics

EMC Storage Analytics provides metrics for these resource kinds:

- EMC Adapter Instance (array) on page 102
- Data Mover on page 102 (includes Virtual Data Mover)
- dVol on page 107
- File Pool on page 107
- File System on page 107

Table 26 VNX File/eNAS metrics for Array

Elapsed collect time (ms): Amount of elapsed time for the collection

New metrics in each collect call: Number of new metrics per collection

New resources in each collect call: Number of new resources per collection

Number of down resources: Number of down resources for this adapter instance

Number of metrics collected: Number of metrics collected by this adapter instance

Number of resources collected: Number of resources collected by this adapter instance

Table 27 VNX File/eNAS metrics for Data Mover

Cache
  Buffer Cache Hit Ratio (%): Buffer Cache hit ratio percentage
  DNLC Hit Ratio (%): Directory Name Lookup Cache (DNLC) hit ratio percentage used for pathname resolution logic
  Open File Cache Hit Ratio (%): Open File Cache hit ratio percentage

Configuration
  Type: Data Mover type (value can be Active (for the primary Data Mover), Standby, or VDM)

CPU
  Busy (%): CPU utilization percentage during this interval

Disk
  Reads (MB/s): Storage in megabytes per second received from all server-storage interfaces
  Total Bandwidth (MB/s): Total bandwidth in megabytes per second for the Data Mover
  Writes (MB/s): Storage in megabytes per second sent to all server-storage interfaces

Network
  CIFS Average Read Size (KB): Average CIFS read size
  CIFS Average Write Size (KB): Average CIFS write size


Network (continued)
  CIFS Reads (IOPS): Input/output operations per second for CIFS reads
  CIFS Reads (MB/s): Megabytes per second for CIFS reads
  CIFS Total Operations (IOPS): Total number of CIFS read and write requests per second that pass through the Data Mover
  CIFS Total Bandwidth (MB/s): Total amount of CIFS read and write data per second that passes through the Data Mover
  CIFS Writes (IOPS): Input/output operations per second for CIFS writes
  CIFS Writes (MB/s): Megabytes per second for CIFS writes
  NFS Average Read Size (Bytes): Average size of data read
  NFS Average Write Size (Bytes): Average size of data written
  NFS Reads (IOPS): NFS read operations per second
  NFS Reads (MB/s): NFS read data response in megabytes per second
  NFS Total Bandwidth (MB/s): Total amount of NFS read and write data per second that passes through the Data Mover
  NFS Total Operations (IOPS): Total number of NFS read and write requests per second that pass through the Data Mover
  NFS Writes (IOPS): NFS write operations per second
  NFS Writes (MB/s): NFS write data response in megabytes per second
  Network In Bandwidth (MB/s): Network in bandwidth (megabytes received per second)
  Network Out Bandwidth (MB/s): Network out bandwidth (megabytes sent per second)
  Total Network Bandwidth (MB/s): Total network bandwidth per second

Network > NFSv2, NFSv3, and NFSv4
  Read Calls/s: Read calls per second
  Read Errors/s: Read errors per second
  Read Response Time (ms): Total read response time
  Write Calls/s: Write calls per second


  Write Errors/s: Write errors per second
  Write Response Time (ms): Total write response time

Network > NFSv3
  Access Calls/s: Access calls per second
  Access Errors/s: Access errors per second
  Access Response Time (ms): Total access response time
  GetAttr Calls/s: Get file attributes (GetAttr) calls per second
  GetAttr Errors/s: GetAttr errors per second
  GetAttr Response Time (ms): Total response time for GetAttr
  Lookup Calls/s: Lookup calls per second
  Lookup Errors/s: Lookup errors per second
  Lookup Response Time (ms): Total lookup response time
  SetAttr Calls/s: Set file attributes (SetAttr) calls per second
  SetAttr Errors/s: SetAttr errors per second
  SetAttr Response Time (ms): Total response time for SetAttr

Network > NFSv4
  Close Calls/s: Close calls per second
  Close Errors/s: Close errors per second
  Close Response Time (ms): Total close response time
  Compound Calls/s: Compound calls per second
  Compound Errors/s: Compound errors per second
  Compound Response Time (ms): Total compound response time
  Open Calls/s: Open calls per second
  Open Errors/s: Open errors per second
  Open Response Time (ms): Total open response time

Network > SMB1
  Close Average Response Time (ms): Close average response time
  Close Calls/s: Close calls per second


  Close Max Response Time (ms): Close maximum response time
  NTCreateX Average Response Time (ms): NTCreateX average response time
  NTCreateX Calls/s: NTCreateX calls per second
  NTCreateX Max Response Time (ms): NTCreateX maximum response time
  ReadX Average Response Time (ms): Average response time for ReadX
  ReadX Calls/s: ReadX calls per second
  ReadX Max Response Time (ms): ReadX maximum response time
  Trans2Prim Average Response Time (ms): Trans2Prim average response time
  Trans2Prim Calls/s: Trans2Prim calls per second
  Trans2Prim Max Response Time (ms): Trans2Prim maximum response time
  WriteX Average Response Time (ms): WriteX average response time
  WriteX Calls/s: WriteX calls per second
  WriteX Max Response Time (ms): WriteX maximum response time

Network > SMB2
  Close Average Response Time (ms): Close average response time
  Close Calls/s: Close calls per second
  Close Max Response Time (ms): Close maximum response time


  Flush Average Response Time (ms): Flush average response time
  Flush Calls/s: Flush calls per second
  Flush Max Response Time (ms): Flush maximum response time
  Create Average Response Time (ms): Create average response time
  Create Calls/s: Create calls per second
  Create Max Response Time (ms): Create maximum response time
  IOCTL Average Response Time (ms): Input/Output Control (IOCTL) average response time
  IOCTL Calls/s: Input/Output Control (IOCTL) calls per second
  IOCTL Max Response Time (ms): Input/Output Control maximum response time
  Queryinfo Average Response Time (ms): Query information average response time
  Queryinfo Calls/s: Query information calls per second
  Queryinfo Max Response Time (ms): Query information maximum response time
  Read Average Response Time (ms): Read average response time
  Read Calls/s: Read calls per second
  Read Max Response Time (ms): Read maximum response time
  Write Average Response Time (ms): Write average response time
  Write Calls/s: Write calls per second
  Write Max Response Time (ms): Write maximum response time
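The per-operation Calls/s and response-time metrics in Table 27 can be rolled up into a single call-weighted average per protocol group. The sketch below is an assumption about how such a roll-up could be computed for dashboards; the function and the sample values are hypothetical, not part of the product.

```python
def weighted_avg_response_ms(ops):
    """ops: iterable of (calls_per_sec, avg_response_ms) pairs, one per
    operation type (e.g. Read, Write, GetAttr). Returns the call-weighted
    average response time across all operation types."""
    total_calls = sum(calls for calls, _ in ops)
    if total_calls == 0:
        return 0.0
    return sum(calls * ms for calls, ms in ops) / total_calls

# 300 reads/s at 2 ms and 100 writes/s at 6 ms average out to 3 ms:
print(weighted_avg_response_ms([(300, 2.0), (100, 6.0)]))
```

Weighting by calls per second prevents a rarely used operation with a high response time from dominating the aggregate.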

Table 28 VNX File/eNAS metrics for dVol

Average Read Size (Bytes): Average size of data read

Average Write Size (Bytes): Average size of data written

Average Completion Time (µSec/call): Average time for completion of an I/O

Average Service Time (µSec/call): Average service time for successful completion of I/O without retries and queuing delays

Capacity (GB): Total capacity of the disk volume

IO Retries (IOPS): Number of I/O retries per second

Queue Length: Length of the disk queue

Reads (IOPS): Number of read operations on the disk per second

Reads (MB/s): Megabytes read from the disk per second

Total Operations (IOPS): Number of I/O operations on the disk volume per second

Total Bandwidth (MB/s): Total bandwidth of the disk volume

Utilization (%): Percentage of time that the disk has been utilized

Writes (IOPS): Number of write operations on the disk per second

Writes (MB/s): Megabytes written to the disk per second
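Queue Length, Total Operations (IOPS), and Average Completion Time (µSec/call) in Table 28 are linked by Little's law, which is a useful sanity check when reading dVol counters. This is a general queueing-theory relationship applied here as an interpretation aid, with hypothetical numbers, not an EMC formula.

```python
def expected_queue_length(total_iops, avg_completion_usec):
    """Little's law: average number of requests in the system equals
    arrival rate times time in system (µSec/call converted to seconds)."""
    return total_iops * (avg_completion_usec / 1_000_000.0)

# 2000 IOPS at 5000 µs per call keeps about 10 requests queued or in service:
print(expected_queue_length(2000, 5000))
```

A reported Queue Length far above this estimate suggests bursty arrivals rather than a steady stream.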

Table 29 VNX File/eNAS metrics for File Pool

Available Capacity (GB): Capacity still available for use

Capacity (GB): Total capacity of the file pool

Consumed Capacity (GB): Consumed capacity of the file pool

Table 30 VNX File/eNAS metrics for File System

Available Capacity (GB): Capacity still available for use

Average Read Size (Bytes): Average size of data read

Average Write Size (Bytes): Average size of data written

Capacity (GB): Total space available for storage of user data (does not include metadata)

Consumed Capacity (GB): Consumed capacity of the file system

Full (%): Percentage of consumed capacity

Thin Provisioning: True indicates that the file system is enabled for virtual provisioning, an option that can only be used with automatic file system extension. Combining automatic file system extension with virtual provisioning allows the file system to grow gradually and as needed. When virtual provisioning is enabled, NFS and CIFS clients receive reports for either the virtual maximum file system size or the real file system size, whichever is larger.

Max Capacity (GB): If automatic extension is enabled, the file system will automatically extend to this maximum size when the high water mark is reached. The default value for the high water mark is 90 percent.

Read IO Ratio (%): Percentage of total I/Os that are read I/Os

Read Requests (Requests/s): Read operations per second in the interval

Reads (IOPS): Average read operations per second

Reads (MB/s): Read data response per second

Total Bandwidth (MB/s): Total amount of read and write data per second for the file system

Total Operations (IOPS): Total number of read and write operations per second for the file system

Write IO Ratio (%): Percentage of total I/Os that are write I/Os

Write Requests (Requests/s): Write operations per second in the interval

Writes (IOPS): Average write operations per second

Writes (MB/s): Write data response per second
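The Max Capacity (GB) description above implies a simple trigger: extend once consumption crosses the high water mark, but never beyond the maximum. The sketch below is an illustration of that trigger logic as described, with a hypothetical function name and sample sizes; the actual extension mechanism lives inside the array.

```python
def should_auto_extend(consumed_gb, capacity_gb, max_capacity_gb,
                       high_water_mark_pct=90.0):
    """Sketch of the auto-extension trigger: extend when Full (%) reaches
    the high water mark (default 90%) and the file system is still below
    Max Capacity (GB)."""
    full_pct = 100.0 * consumed_gb / capacity_gb
    return full_pct >= high_water_mark_pct and capacity_gb < max_capacity_gb

print(should_auto_extend(92, 100, 500))   # past the 90% mark, room to grow
print(should_auto_extend(50, 100, 500))   # well below the mark
```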


VMAX metrics

EMC Storage Analytics provides metrics for the following applicable VMAX and VMAX3 resource kinds:

- Device on page 109
- Front-End Director on page 110
- Front-End Port on page 110
- Remote Replica Group on page 110
- SRDF Director on page 110
- Storage Group on page 111
- FAST VP Policy on page 109
- Thin Pool on page 112
- Storage Resource Pool on page 112

Table 31 VMAX metrics for Device

Read Latency (ms): Average time it took the VMAX array to serve one read I/O for this device

Reads (IOPS): Number of read operations performed each second on the device

Reads (MB/s): Number of megabytes read per second from the device

Total Bandwidth (MB/s): Total number of read and write megabytes performed each second on the device

Total Capacity: Total capacity available (in GB) on the device

Total Operations (IOPS): Total reads and writes performed each second on the device

Used Capacity: Capacity used (in GB) by the device

Write Latency (ms): Average time it took the VMAX array to serve one write I/O for this device

Writes (IOPS): Total number of write I/O operations performed each second by the VMAX volume (LUN)

Writes (MB/s): Cumulative number of megabytes written per second to the device

Table 32 VMAX metrics for FAST VP Policy

Metric group: Tier 1, Tier 2, Tier 3, Tier 4

Name: Name of the FAST VP policy

Percent in policy: Percentage of tier within a policy


Table 33 VMAX metrics for Front-End Director

Reads (IOPS): Total read operations the front-end director processes per second

Total Bandwidth (MB/s): Total number of megabytes sent and received per second by the director

Total Hits (IOPS): Total number of requests per second that were immediately serviced from cache

Total Operations (IOPS): Total reads and writes the front-end director processes per second

Writes (IOPS): Total write operations the front-end director processes per second

Table 34 VMAX metrics for Front-End Port

Total Operations (IOPS): Total reads and writes the front-end port processes per second

Total Bandwidth (MB/s): Number of megabytes transferred per second

Table 35 VMAX metrics for Remote Replica Group

Average Cycle Time (s): Average time it takes for each cycle to complete

Delta Set Extension Threshold: Percent of write pendings before DSE activates

Devices in Session (count): Number of devices in the group

HA Repeat Writes (counts/s): Total host adapter repeat writes, measured in write commands to SRDF/A volumes only (writes to a slot already in the active cycle). This counter helps estimate the cache locality of reference, that is, how much cache is saved by the re-writes. It does not give any indication of the bandwidth locality of reference.

Minimum Cycle Time (s): Setting for the minimum number of seconds for a cycle

Writes (IOPS): Number of write input/output operations coming in per second for the volumes in this session

Writes (MB/s): Total number of megabytes sent per second by the group

Table 36 VMAX metrics for SRDF Director

Reads (IOPS): Read operations the SRDF director processes per second

Total Bandwidth (MB/s): Total number of megabytes sent and received per second by the RDF director

Total Operations (IOPS): Total reads and writes the SRDF director processes per second

Writes (IOPS): Write operations the SRDF director processes per second

Percent Busy (%): Percent of time the director was busy

SRDFA Writes (IOPS): Number of asynchronous write requests per second processed by this director

SRDFA Writes (MB/s): Number of asynchronous megabytes sent per second by this director

SRDFS Writes (IOPS): Number of synchronous write requests per second processed by this director

SRDFS Writes (MB/s): Number of synchronous megabytes sent per second by this director
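The SRDFS and SRDFA counters in Table 36 split a director's replication writes by mode, so the synchronous/asynchronous mix can be derived directly. The helper below is a hypothetical illustration of that derivation, with invented sample rates.

```python
def srdf_write_mix(srdfs_writes_iops, srdfa_writes_iops):
    """Share of synchronous (SRDFS) vs. asynchronous (SRDFA) write
    requests processed by an SRDF director."""
    total = srdfs_writes_iops + srdfa_writes_iops
    if total == 0:
        return {"sync_pct": 0.0, "async_pct": 0.0}
    return {
        "sync_pct": round(100.0 * srdfs_writes_iops / total, 1),
        "async_pct": round(100.0 * srdfa_writes_iops / total, 1),
    }

# 750 synchronous and 250 asynchronous writes per second:
print(srdf_write_mix(750, 250))
```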

Table 37 VMAX metrics for Storage Group

Read Latency (ms): Average time it took the VMAX array to serve one read I/O for this storage group

Reads (IOPS): Number of read operations performed each second by the storage group

Reads (MB/s): Cumulative number of megabytes read per second by the storage group

Total Bandwidth (MB/s): Total number of megabytes sent and received per second by the storage group

Total Capacity: Total capacity available (in GB) for the storage group

Total Latency (ms): Average time it took the VMAX array to serve one I/O for this storage group

Total Operations (IOPS): Total reads and writes performed each second by the storage group

Used Capacity: Capacity used (in GB) by the storage group

Write Latency (ms): Average time it took the VMAX array to serve one write I/O for this storage group

Writes (IOPS): Number of write operations performed each second by the storage group

Writes (MB/s): Cumulative number of megabytes written per second by the storage group


Table 38 VMAX metrics for Thin Pool

Allocated Capacity (GB): Allocated thin pool capacity

Full (%): Percent of the thin pool that has been allocated

Total Capacity (GB): Total thin pool capacity

Used Capacity (GB): Used thin pool capacity
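For thin pools, Full (%) is simply allocated capacity as a share of total capacity, which is what most capacity alerts key on. A minimal sketch; the alert threshold and sample numbers are assumptions for illustration, not values from the product.

```python
def thin_pool_full_pct(allocated_gb, total_gb):
    """Full (%) for a thin pool: allocated capacity over total capacity."""
    return 100.0 * allocated_gb / total_gb

def approaching_full(allocated_gb, total_gb, alert_pct=80.0):
    """Hypothetical alert check against a chosen threshold."""
    return thin_pool_full_pct(allocated_gb, total_gb) >= alert_pct

print(thin_pool_full_pct(680, 800))   # 85.0
print(approaching_full(680, 800))     # True at the default 80% threshold
```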

Table 39 VMAX3 metrics for Storage Resource Pool

Capacity/Full (%): Percent of the pool that has been allocated

Capacity/Total Managed Space: Capacity and total managed space

Read (IO/s): Number of read operations performed each second by the storage resource pool

Read (MB/s): Cumulative number of megabytes read per second by the storage resource pool

Write (IO/s): Number of write operations performed each second by the storage resource pool

Write (MB/s): Cumulative number of megabytes written per second by the storage resource pool

Total Operations (IO/s): Total reads and writes performed each second by the storage resource pool

Total Bandwidth (MB/s): Total number of megabytes sent and received per second by the storage resource pool

Total Latency (ms): Average time it took the VMAX array to serve one I/O for this storage resource pool


VNXe metrics

EMC Storage Analytics provides metrics for these resource kinds:

Note: Only the resource kinds with associated metrics are shown.

- EMC Adapter Instance on page 113
- Disk on page 113
- FAST Cache on page 114
- File System on page 114
- LUN on page 114
- Storage Pool on page 115
- Storage Processor on page 115
- Tier on page 119
- vVols on page 120
- Virtual Disk on page 120

Table 40 VNXe metrics for EMC Adapter Instance

Elapsed collect time (ms): Time elapsed during the collection

New metrics in each collect call: Number of new metrics per collection

New resources in each collect call: Number of new resources per collection

Number of down resources: Number of down resources for this adapter instance

Number of metrics collected: Number of metrics collected by this adapter instance

Number of resources collected: Number of resources collected by this adapter instance

Table 41 VNXe metrics for Disk

Capacity
  Size (GB): Total capacity of the disk

Configuration
  State: Current state of the disk

Performance
  Busy (%): Percentage of time during which the disk is servicing any requests
  Reads (IOPS): Average number of read requests to the disk per second
  Reads (MB/s): Average amount of data read from the disk in megabytes per second
  Total Latency (ms): Average time, in milliseconds, that it takes for one request to pass through the disk, including any waiting time


  Writes (IOPS): Average number of write requests to the disk per second
  Writes (MB/s): Average amount of data written to the disk in megabytes per second

Table 42 VNXe metrics for FAST Cache

Capacity
  Available Capacity (GB): Capacity still available for use

Configuration
  Raid Type: RAID type of FAST Cache

Table 43 VNXe metrics for File System

Capacity
  Available Capacity (GB): Capacity still available for use
  Capacity (GB): Total space available for storage of user data (does not include metadata)
  Consumed Capacity (GB): Consumed capacity of the file system
  Full (%): Percentage of consumed capacity
  Max Capacity (GB): If automatic extension is enabled, the file system will automatically extend to this maximum size when the high water mark is reached. The default value for the high water mark is 90 percent.
  Thin Provisioning: True indicates that the file system is enabled for virtual provisioning, an option that can only be used with automatic file system extension. Combining automatic file system extension with virtual provisioning allows the file system to grow gradually and as needed. When virtual provisioning is enabled, NFS and CIFS clients receive reports for either the virtual maximum file system size or the real file system size, whichever is larger.

Table 44 VNXe metrics for LUN

Capacity
  Consumed Capacity (GB): Capacity used in this LUN
  Total Capacity (GB): Total LUN capacity

Performance
  Queue Length: Length of the LUN queue
  Reads (IOPS): Average number of host read requests that are passed through the LUN per second. Smaller requests usually result in a higher read throughput than larger requests.
  Reads (MB/s): Average amount of host read data in megabytes that is passed through the LUN per second. Larger requests usually result in a higher bandwidth than smaller requests.
  Writes (IOPS): Average number of host write requests that are passed through the LUN per second
  Writes (MB/s): Average amount of host write data in megabytes that is passed through the LUN per second

Table 45 VNXe metrics for Storage Pool

Capacity
  Available Capacity (GB): Capacity available for use in this storage pool
  Consumed Capacity (GB): Capacity used in this storage pool
  Full (%): Percentage of total capacity that is consumed
  Subscribed (%): Percentage of total capacity that is subscribed
  User Capacity (GB): Amount of space available in the storage pool

Configuration
  FAST Cache: Enabled or disabled state of the FAST Cache for this storage pool

Table 46 VNXe metrics for Storage Processor

Cache
  Dirty Cache Pages (MB): Amount of dirty cache pages in megabytes
  Read Cache Hit Ratio (%): Ratio of read requests that the SP cache satisfied without requiring any disk access versus the total number of read requests
  Write Cache Hit Ratio (%): Ratio of write requests that the SP cache satisfied without requiring any disk access versus the total number of write requests

Network
  CIFS Reads (IOPS): Input/output operations per second for CIFS reads
  CIFS Reads (MB/s): Megabytes per second for CIFS reads
  CIFS Writes (IOPS): Input/output operations per second for CIFS writes


  CIFS Writes (MB/s): Megabytes per second for CIFS writes
  Network In Bandwidth (MB/s): Network in bandwidth (megabytes received per second)
  Network Out Bandwidth (MB/s): Network out bandwidth (megabytes sent per second)
  NFS Reads (IOPS): NFS read operations per second
  NFS Reads (MB/s): NFS read data response in megabytes per second
  NFS Writes (IOPS): NFS write operations per second
  NFS Writes (MB/s): NFS write data response in megabytes per second

Network > NFSv2
  Read Calls/s: Read calls per second
  Read Errors/s: Read errors per second
  Read Response Time (ms): Read response time
  Reads (IOPS): NFS V2 read operations per second
  Write Calls/s: Write calls per second
  Write Errors/s: Write errors per second
  Write Response Time (ms): Write average response time
  Writes (IOPS): NFS V2 write operations per second

Network > NFSv3
  Access Calls/s: Access calls per second
  Access Errors/s: Access errors per second
  Access Response Time (ms): Access average response time
  GetAttr Calls/s: Get file attributes (GetAttr) calls per second
  GetAttr Errors/s: GetAttr errors per second
  GetAttr Response Time (ms): GetAttr average response time
  Lookup Calls/s: Lookup calls per second
  Lookup Errors/s: Lookup errors per second
  Lookup Response Time (ms): Lookup average response time
  Read Calls/s: Read calls per second
  Read Errors/s: Read errors per second
  Read Response Time (ms): Read average response time
  Reads (IOPS): NFS V3 read operations per second
  SetAttr Calls/s: Set file attributes (SetAttr) calls per second
  SetAttr Errors/s: SetAttr errors per second


  SetAttr Response Time (ms): Set file attributes (SetAttr) average response time
  Write Calls/s: Write calls per second
  Write Errors/s: Write errors per second
  Write Response Time (ms): Write average response time
  Writes (IOPS): NFS V3 write operations per second

Network > SMB1
  Close Average Response Time (ms): Close average response time
  Close Calls/s: Close calls per second
  Close Max Response Time (ms): Close maximum response time
  NTCreateX Average Response Time (ms): NTCreateX average response time
  NTCreateX Calls/s: NTCreateX calls per second
  NTCreateX Max Response Time (ms): NTCreateX maximum response time
  Reads (IOPS): Input/output operations per second for CIFS SMB1 reads
  Reads (MB/s): Megabytes per second for CIFS SMB1 reads
  ReadX Average Response Time (ms): Average response time for ReadX
  ReadX Calls/s: ReadX calls per second
  ReadX Max Response Time (ms): Maximum response time for ReadX
  Trans2Prim Average Response Time (ms): Trans2Prim average response time
  Trans2Prim Calls/s: Trans2Prim calls per second
  Trans2Prim Max Response Time (ms): Trans2Prim maximum response time
  Writes (IOPS): Input/output operations per second for CIFS SMB1 writes
  Writes (MB/s): Megabytes per second for CIFS SMB1 writes
  WriteX Average Response Time (ms): WriteX average response time
  WriteX Calls/s: WriteX calls per second
  WriteX Max Response Time (ms): WriteX maximum response time


Network > SMB2
  Close Average Response Time (ms): Close average response time
  Close Calls/s: Close calls per second
  Close Max Response Time (ms): Close maximum response time
  Create Average Response Time (ms): Create average response time
  Create Calls/s: Create calls per second
  Create Max Response Time (ms): Create maximum response time
  Flush Average Response Time (ms): Flush average response time
  Flush Calls/s: Flush calls per second
  Flush Max Response Time (ms): Flush maximum response time
  Ioctl Average Response Time (ms): IO Control (IOCTL) average response time
  Ioctl Calls/s: IOCTL calls per second
  Ioctl Max Response Time (ms): IOCTL maximum response time
  Queryinfo Average Response Time (ms): Queryinfo average response time
  Queryinfo Calls/s: Queryinfo calls per second
  Queryinfo Max Response Time (ms): Queryinfo maximum response time
  Read Average Response Time (ms): Read average response time
  Read Calls/s: Read calls per second
  Read Max Response Time (ms): Read maximum response time
  Reads (IOPS): Input/output operations per second for CIFS SMB2 reads
  Reads (MB/s): Megabytes per second for CIFS SMB2 reads
  Write Average Response Time (ms): Write average response time
  Write Calls/s: Write calls per second
  Write Max Response Time (ms): Write maximum response time


Metric group: Performance
- Writes (IOPS): Input/output operations per second for CIFS SMB2 writes
- Writes (MB/s): Megabytes per second for CIFS SMB2 writes
- Busy (%): Percentage of time during which the SP is serving requests. When the SP becomes the bottleneck, the value is at or close to 100%. At that point, an increase in workload has no further impact on SP throughput, but I/O response time starts increasing more aggressively.
- Reads (IOPS): Average number of host read requests passed through the SP per second. Smaller requests usually result in a higher read throughput than larger requests.
- Reads (MB/s): Average amount of host read data in megabytes passed through the SP per second. Larger requests usually result in a higher bandwidth than smaller requests.
- Writes (IOPS): Average number of host write requests passed through the SP per second. Smaller requests usually result in a higher write throughput than larger requests.
- Writes (MB/s): Average amount of host write data in megabytes passed through the SP per second. Larger requests usually result in a higher bandwidth than smaller requests.
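The IOPS and MB/s figures above are linked through average request size, which is why small requests favor IOPS while large requests favor bandwidth. A quick illustrative calculation (hypothetical numbers, not an ESA computation):

```python
def bandwidth_mb_per_s(iops: float, avg_request_kb: float) -> float:
    """Throughput implied by a request rate and an average request size."""
    return iops * avg_request_kb / 1024.0

# Many small requests: high IOPS but modest bandwidth.
print(bandwidth_mb_per_s(20000, 8))    # 156.25 MB/s from 8 KB requests
# Fewer large requests: low IOPS but high bandwidth.
print(bandwidth_mb_per_s(5000, 256))   # 1250.0 MB/s from 256 KB requests
```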

Table 47 VNXe metrics for Tier

- Available Capacity (GB): Capacity available for use in this tier
- Consumed Capacity (GB): Capacity used in this tier
- Data to Move Down (GB): Amount of data that is going to be moved down
- Data to Move Up (GB): Amount of data that is going to be moved up
- Data to Move Within (GB): Amount of data to move within tiers
- Disk Count: Number of disks in the tier
- Full (%): Percentage of total capacity that is consumed
- Raid Type: Type of RAID applied to the tier
- User Capacity (GB): Amount of space available in this tier
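Full (%) is simply Consumed Capacity expressed against User Capacity. A minimal sketch with hypothetical numbers (ESA reports these metrics directly; the helpers below only restate the arithmetic):

```python
def tier_full_percent(consumed_gb: float, user_capacity_gb: float) -> float:
    """Full (%): consumed capacity as a percentage of the tier's user capacity."""
    return 100.0 * consumed_gb / user_capacity_gb

def tier_available_gb(consumed_gb: float, user_capacity_gb: float) -> float:
    """Available Capacity (GB): what remains of the tier's user capacity."""
    return user_capacity_gb - consumed_gb

print(tier_full_percent(7680.0, 10240.0))   # 75.0
print(tier_available_gb(7680.0, 10240.0))   # 2560.0
```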

VNXe metrics


Table 48 VNXe metrics for vVNX virtual volumes (vVols)

Metric group: Capacity
- Available Capacity (GB): Capacity available for use
- Consumed Capacity (GB): Capacity used
- Total Capacity (GB): Total capacity

Table 49 VNXe metrics for vVNX virtual disk

Metric group: Capacity
- Size: Size of the virtual disk

Metric group: Configuration
- State: Current state of the virtual disk


VPLEX metrics

EMC Storage Analytics provides metrics for these resource kinds:

- Cluster on page 121
- Director on page 122
- Distributed Device on page 125
- Engine on page 126
- Ethernet Port on page 127
- Extent on page 127
- FC Port on page 127
- Local Device on page 128
- Storage Array on page 129
- Storage View on page 129
- Storage Volume on page 129
- Virtual Volume on page 130
- VPLEX Metro on page 131

Table 50 VPLEX metrics for Cluster

Metric group: Status
- Cluster Type: Local or Metro.
- Health State: Possible values include:
  - OK - Cluster is functioning normally.
  - Degraded - Cluster is not functioning at an optimal level. This may indicate non-functioning remote virtual volumes, unhealthy devices or storage volumes, suspended devices, conflicting director count configuration values, or out-of-date devices.
  - Unknown - VPLEX cannot determine the cluster's health state, or the state is invalid.
  - Major failure - Cluster is failing and some functionality may be degraded or unavailable. This may indicate a complete loss of back-end connectivity.
  - Minor failure - Cluster is functioning, but some functionality may be degraded. This may indicate one or more unreachable storage volumes.
  - Critical failure - Cluster is not functioning and may have failed completely. This may indicate a complete loss of back-end connectivity.
- Operational Status: During transition periods, the cluster moves from one operational state to another. Possible values include:
  - OK - Cluster is operating normally.
  - Cluster departure - One or more of the clusters cannot be contacted. Commands affecting distributed storage are refused.
  - Degraded - Cluster is not functioning at an optimal level. This may indicate non-functioning remote virtual volumes, unhealthy devices or storage volumes, suspended devices, conflicting director count configuration values, or out-of-date devices.
  - Device initializing - If clusters cannot communicate with each other, then the distributed device will be unable to initialize.
  - Device out of date - Child devices are being marked fully out of date. Sometimes this occurs after a link outage.
  - Expelled - Cluster has been isolated from the island either manually (by an administrator) or automatically (by a system configuration setting).
  - Shutdown - Cluster's directors are shutting down.
  - Suspended exports - Some I/O is suspended. This could be the result of a link failure or loss of a director. Other states might indicate the true problem. The VPLEX might be waiting for you to confirm the resumption of I/O.
  - Transitioning - Components of the software are recovering from a previous incident (for example, the loss of a director or the loss of an inter-cluster link).

Metric group: Capacity
- Exported Virtual Volumes: Number of exported virtual volumes.
- Exported Virtual Volumes (GB): Gigabytes of exported virtual volumes.
- Used Storage Volumes: Number of used storage volumes.
- Used Storage Volumes (GB): Gigabytes of used storage volumes.
- Unused Storage Volumes: Number of unused storage volumes.
- Unused Storage Volumes (GB): Gigabytes of unused storage volumes.
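When consuming Health State outside of vRealize Operations (for example, in a custom alerting script), it can help to rank the states listed above. The ranking below is an illustrative assumption, not an ESA-defined scale; only the state names come from Table 50:

```python
# Illustrative severity ranking (an assumption, not an ESA-defined scale);
# the state names are the Health State values from Table 50.
HEALTH_SEVERITY = {
    "OK": 0,
    "Degraded": 1,
    "Minor failure": 2,
    "Unknown": 2,
    "Major failure": 3,
    "Critical failure": 4,
}

def worst_health_state(states):
    """Return the most severe health state among the reported values."""
    return max(states, key=lambda s: HEALTH_SEVERITY.get(s, 2))

print(worst_health_state(["OK", "Degraded"]))  # Degraded
```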

Table 51 VPLEX metrics for Director

Metric group: CPU
- Busy (%): Percentage of director CPU usage

Metric group: Status
- Operational Status: Possible values include:
  - OK - Functioning normally
  - Degraded - May be out-of-date compared to its mirror
  - Unknown - Cannot determine the health state, or the state is invalid
  - Error - VPLEX has marked the object as hardware-dead
  - Starting - Not yet ready
  - Lost-communication - Object is unreachable

Metric group: Storage Volumes
- Read Latency (ms): Average read latency in milliseconds
- Write Latency (ms): Average write latency in milliseconds

Metric group: Virtual Volumes
- Read Latency (ms): Average read latency in milliseconds
- Reads (MB/s): Number of bytes read per second
- Total Reads and Writes (counts/s): Total number of reads and writes per second
- Write Latency (ms): Average write latency in milliseconds
- Writes (MB/s): Number of bytes written per second

Metric group: Memory
- Memory Used (%): Percentage of memory heap usage by the firmware for its accounting on the director. This value is not the percentage of cache pages in use for user data.

Metric group: Front-end Director
- Aborts (counts/s): Number of aborted I/O operations per second through the director's front-end ports
- Active Operations (counts): Number of active, outstanding I/O operations on the director's front-end ports
- Compare and Write Latency (ms): Average time, in milliseconds, that it takes for a VAAI CompareAndWrite request to complete on the director's front-end ports
- Operations (counts/s): Number of I/O operations per second through the director's front-end ports
- Queued Operations (counts): Number of queued, outstanding I/O operations on the director's front-end ports
- Read Latency (ms): Average time, in milliseconds, that it takes for read requests to complete on the director's front-end ports. Total time it takes VPLEX to complete a read request.
- Reads (counts/s): Number of read operations per second on the director's front-end ports
- Reads (MB/s): Number of bytes per second read from the director's front-end ports
- Write Latency (ms): Average time, in milliseconds, that it takes for write requests to complete on the director's front-end ports. Total time it takes VPLEX to complete a write request.
- Writes (counts/s): Number of write operations per second on the director's front-end ports
- Writes (MB/s): Number of bytes per second written to the director's front-end ports

Metric group: Back-end Director
- Aborts (counts/s): Number of aborted I/O operations per second on the director's back-end ports
- Operations (counts/s): Number of I/O operations per second through the director's back-end ports
- Reads (counts/s): Number of read operations per second by the director's back-end ports
- Reads (MB/s): Number of bytes read per second by the director's back-end ports
- Resets (counts/s): Number of LUN resets issued per second through the director's back-end ports. LUN resets are issued after 20 seconds of LUN unresponsiveness to outstanding operations.
- Timeouts (counts/s): Number of timed-out I/O operations per second on the director's back-end ports. Operations time out after 10 seconds.
- Writes (MB/s): Number of bytes written per second by the director's back-end ports

Metric group: COM Latency
- Average Latency (ms): Average time, in milliseconds, that it took for inter-director WAN messages to complete on this director to the specified cluster in the last 5-second interval
- Maximum Latency (ms): Maximum time, in milliseconds, that it took for an inter-director WAN message to complete on this director to the specified cluster in the last 5-second interval
- Minimum Latency (ms): Minimum time, in milliseconds, that it took for an inter-director WAN message to complete on this director to the specified cluster in the last 5-second interval

Metric group: WAN Link Usage
- Distributed Device Bytes Received (MB/s): Number of bytes of distributed-device traffic per second received on the director's WAN ports
- Distributed Device Bytes Sent (MB/s): Number of bytes of distributed-device traffic per second sent on the director's WAN ports
- Distributed Device Rebuild Bytes Received (MB/s): Number of bytes of distributed-device rebuild/migration traffic per second received on the director's WAN ports
- Distributed Device Rebuild Bytes Sent (MB/s): Number of bytes of distributed-device rebuild/migration traffic per second sent on the director's WAN ports

Metric group: FC WAN COM
- Bytes Received (MB/s): Number of bytes of WAN traffic per second received on this director's Fibre Channel port
- Bytes Sent (MB/s): Number of bytes of WAN traffic per second sent on this director's Fibre Channel port
- Packets Received (counts/s): Number of packets of WAN traffic per second received on this director's Fibre Channel port
- Packets Sent (counts/s): Number of packets of WAN traffic per second sent on this director's Fibre Channel port

Metric group: IP WAN COM
- Average Latency (ms): Average time, in milliseconds, that it took for inter-director WAN messages to complete on this director's IP port in the last 5-second interval
- Bytes Received (MB/s): Number of bytes of WAN traffic per second received on this director's IP port
- Bytes Sent (MB/s): Number of bytes of WAN traffic per second sent on this director's IP port
- Maximum Latency (ms): Maximum time, in milliseconds, that it took for an inter-director WAN message to complete on this director's IP port in the last 5-second interval
- Minimum Latency (ms): Minimum time, in milliseconds, that it takes for an inter-director WAN message to complete on this director's IP port in the last 5-second interval
- Packets Received (counts/s): Number of packets of WAN traffic per second received on this director's IP port
- Packets Resent (counts/s): Number of WAN traffic packets re-transmitted per second that were sent on this director's IP port
- Packets Sent (counts/s): Number of packets of WAN traffic per second sent on this director's IP port
- Received Packets Dropped (counts/s): Number of WAN traffic packets dropped per second that were received on this director's IP port
- Sent Packets Dropped (counts/s): Number of WAN traffic packets dropped per second that were sent on this director's IP port
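The latency metrics in Table 51 are per-director averages. If you aggregate them yourself across directors (outside ESA), weighting each average by its matching operation rate avoids letting an idle director skew the result. A sketch under that assumption:

```python
def weighted_latency_ms(samples):
    """Ops-weighted average latency.

    samples: (operations_per_s, latency_ms) pairs, e.g. one pair per director
    taken from Reads (counts/s) and Read Latency (ms) in Table 51.
    """
    total_ops = sum(ops for ops, _ in samples)
    if total_ops == 0:
        return 0.0
    return sum(ops * lat for ops, lat in samples) / total_ops

print(weighted_latency_ms([(1000, 2.0), (3000, 4.0)]))  # 3.5, not the plain mean 3.0
```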

Table 52 VPLEX metrics for Distributed Device

Metric group: Capacity
- Capacity (GB): Capacity in gigabytes

Metric group: Status
- Health State: Possible values include:
  - OK - Functioning normally
  - Degraded - May be out-of-date compared to its mirror
  - Unknown - Cannot determine the health state, or the state is invalid
  - Non-recoverable error - May be out-of-date compared to its mirror, or VPLEX cannot determine the health state
  - Critical failure - VPLEX has marked the object as hardware-dead
- Operational Status: Possible values include:
  - OK - Functioning normally
  - Degraded - May be out-of-date compared to its mirror
  - Unknown - Cannot determine the health state, or the state is invalid
  - Error - VPLEX has marked the object as hardware-dead
  - Starting - Not yet ready
  - Lost-communication - Object is unreachable
- Service Status: Possible values include:
  - Cluster unreachable - VPLEX cannot reach the cluster; the status is unknown
  - Need resume - The other cluster detached the distributed device while it was unreachable. The distributed device needs to be manually resumed for I/O to resume at this cluster.
  - Need winner - All clusters are reachable again, but both clusters had detached this distributed device and resumed I/O. You must pick a winner cluster whose data will overwrite the other cluster's data for this distributed device.
  - Potential conflict - Clusters have detached each other, resulting in a potential for detach conflict.
  - Running - Distributed device is accepting I/O
  - Suspended - Distributed device is not accepting new I/O; pending I/O requests are frozen
  - Winner-running - This cluster detached the distributed device while the other cluster was unreachable, and is now sending I/O to the device.

Table 53 VPLEX metrics for Engine

Metric group: Status
- Health State: Possible values include:
  - OK - Functioning normally
  - Degraded - May be out-of-date compared to its mirror
  - Unknown - Cannot determine the health state, or the state is invalid
  - Non-recoverable error - May be out-of-date compared to its mirror, or VPLEX cannot determine the health state
  - Critical failure - VPLEX has marked the object as hardware-dead
- Operational Status: Possible values include:
  - OK - Functioning normally
  - Degraded - May be out-of-date compared to its mirror
  - Unknown - Cannot determine the health state, or the state is invalid
  - Error - VPLEX has marked the object as hardware-dead
  - Starting - Not yet ready
  - Lost-communication - Object is unreachable

Table 54 VPLEX metrics for Ethernet Port

Metric group: Status
- Operational Status: Possible values include:
  - OK - Functioning normally
  - Degraded - May be out-of-date compared to its mirror
  - Unknown - Cannot determine the health state, or the state is invalid
  - Error - VPLEX has marked the object as hardware-dead
  - Starting - Not yet ready
  - Lost-communication - Object is unreachable

Table 55 VPLEX metrics for Extent Device

Metric group: Capacity
- Capacity (GB): Capacity in gigabytes

Metric group: Status
- Health State: Possible values include:
  - OK - The extent is functioning normally
  - Degraded - The extent may be out-of-date compared to its mirror (applies only to extents that are part of a RAID 1 device)
  - Unknown - VPLEX cannot determine the extent's operational state, or the state is invalid
  - Non-recoverable error - The extent may be out-of-date compared to its mirror (applies only to extents that are part of a RAID 1 device), and/or the health state cannot be determined
- Operational Status: Possible values include:
  - OK - The extent is functioning normally
  - Degraded - The extent may be out-of-date compared to its mirror (applies only to extents that are part of a RAID 1 device)
  - Unknown - VPLEX cannot determine the extent's operational state, or the state is invalid
  - Starting - The extent is not yet ready

Table 56 VPLEX metrics for Fibre Channel Port

Metric group: Status
- Operational Status: Possible values include:
  - OK - Functioning normally
  - Degraded - May be out-of-date compared to its mirror
  - Unknown - Cannot determine the health state, or the state is invalid
  - Error - VPLEX has marked the object as hardware-dead
  - Starting - Not yet ready
  - Lost-communication - Object is unreachable

Table 57 VPLEX metrics for Local Device

Metric group: Capacity
- Capacity (GB): Capacity in gigabytes

Metric group: Status
- Health State: Possible values include:
  - OK - Functioning normally
  - Degraded - May be out-of-date compared to its mirror
  - Unknown - Cannot determine the health state, or the state is invalid
  - Non-recoverable error - May be out-of-date compared to its mirror, or VPLEX cannot determine the health state
  - Critical failure - VPLEX has marked the object as hardware-dead
- Operational Status: Possible values include:
  - OK - Functioning normally
  - Degraded - May be out-of-date compared to its mirror
  - Unknown - Cannot determine the health state, or the state is invalid
  - Error - VPLEX has marked the object as hardware-dead
  - Starting - Not yet ready
  - Lost-communication - Object is unreachable
- Service Status: Possible values include:
  - Cluster unreachable - VPLEX cannot reach the cluster; the status is unknown
  - Need resume - The other cluster detached the distributed device while it was unreachable. The distributed device needs to be manually resumed for I/O to resume at this cluster.
  - Need winner - All clusters are reachable again, but both clusters had detached this distributed device and resumed I/O. You must pick a winner cluster whose data will overwrite the other cluster's data for this distributed device.
  - Potential conflict - Clusters have detached each other, resulting in a potential for detach conflict.
  - Running - Distributed device is accepting I/O
  - Suspended - Distributed device is not accepting new I/O; pending I/O requests are frozen
  - Winner-running - This cluster detached the distributed device while the other cluster was unreachable, and is now sending I/O to the device.

Table 58 VPLEX metrics for Storage Array

Metric group: Capacity
- Allocated Storage Volumes: Number of allocated storage volumes
- Allocated Storage Volumes (GB): Gigabytes of allocated storage volumes
- Used Storage Volumes: Number of used storage volumes
- Used Storage Volumes (GB): Gigabytes of used storage volumes

Table 59 VPLEX metrics for Storage View

Metric group: Capacity
- Virtual Volumes (GB): Gigabytes of virtual volumes

Metric group: Status
- Operational Status: Possible values include:
  - OK - Functioning normally
  - Degraded - May be out-of-date compared to its mirror
  - Unknown - Cannot determine the health state, or the state is invalid
  - Error - VPLEX has marked the object as hardware-dead
  - Starting - Not yet ready
  - Lost-communication - Object is unreachable

Table 60 VPLEX metrics for Storage Volume

Metric group: Capacity
- Capacity (GB): Capacity in gigabytes

Metric group: Status
- Health State: Possible values include:
  - OK - The storage volume is functioning normally
  - Degraded - The storage volume may be out-of-date compared to its mirror
  - Unknown - Cannot determine the health state, or the state is invalid
  - Non-recoverable error - May be out-of-date compared to its mirror, or VPLEX cannot determine the health state
  - Critical failure - VPLEX has marked the object as hardware-dead
- Operational Status: Possible values include:
  - OK - Functioning normally
  - Degraded - May be out-of-date compared to its mirror (this state applies only to a storage volume that is part of a RAID 1 metadata volume)
  - Unknown - Cannot determine the health state, or the state is invalid
  - Error - VPLEX has marked the object as hardware-dead
  - Starting - Not yet ready
  - Lost-communication - Object is unreachable

Table 61 VPLEX metrics for Virtual Volume

Metric group: Capacity
- Capacity (GB): Capacity in gigabytes

Metric group: Locality
- Locality: Possible values include:
  - Local - The volume is local to the enclosing cluster
  - Remote - The volume is made available by a different cluster than the enclosing cluster, and is accessed remotely
  - Distributed - The virtual volume has, or is capable of having, legs at more than one cluster

Metric group: Status
- Health State: Possible values include:
  - OK - Functioning normally
  - Unknown - Cannot determine the health state, or the state is invalid
  - Major failure - One or more of the virtual volume's underlying devices is out-of-date, but will never rebuild
  - Minor failure - One or more of the virtual volume's underlying devices is out-of-date, but will rebuild
- Operational Status: Possible values include:
  - OK - Functioning normally
  - Degraded - The virtual volume may have one or more out-of-date devices that will eventually rebuild
  - Unknown - VPLEX cannot determine the virtual volume's operational state, or the state is invalid
  - Error - One or more of the virtual volume's underlying devices is hardware-dead
  - Starting - Not yet ready
  - Stressed - One or more of the virtual volume's underlying devices is out-of-date and will never rebuild
- Service Status: Possible values include:
  - Running - I/O is running
  - Inactive - The volume is part of an inactive storage view and is not visible from the host
  - Unexported - The volume is unexported
  - Suspended - I/O is suspended for the volume
  - Cluster-unreachable - Cluster is unreachable at this time
  - Need-resume - Issue re-attach to resume after the link has returned

Table 62 VPLEX metrics for VPLEX Metro

Metric group: Status
- Health State: Possible values include:
  - OK - Cluster is functioning normally
  - Degraded - Cluster is not functioning at an optimal level. This may indicate non-functioning remote virtual volumes, unhealthy devices or storage volumes, suspended devices, conflicting director count configuration values, or out-of-date devices.
  - Unknown - VPLEX cannot determine the cluster's health state, or the state is invalid
  - Major failure - Cluster is failing and some functionality may be degraded or unavailable. This may indicate a complete loss of back-end connectivity.
  - Minor failure - Cluster is functioning, but some functionality may be degraded. This may indicate one or more unreachable storage volumes.
  - Critical failure - Cluster is not functioning and may have failed completely. This may indicate a complete loss of back-end connectivity.
- Operational Status: During transition periods, the cluster moves from one operational state to another. Possible values include:
  - OK - Cluster is operating normally
  - Cluster departure - One or more of the clusters cannot be contacted. Commands affecting distributed storage are refused.
  - Degraded - Cluster is not functioning at an optimal level. This may indicate non-functioning remote virtual volumes, unhealthy devices or storage volumes, suspended devices, conflicting director count configuration values, or out-of-date devices.
  - Device initializing - If clusters cannot communicate with each other, then the distributed device will be unable to initialize.
  - Device out of date - Child devices are being marked fully out of date. Sometimes this occurs after a link outage.
  - Expelled - Cluster has been isolated from the island either manually (by an administrator) or automatically (by a system configuration setting).
  - Shutdown - Cluster's directors are shutting down.
  - Suspended exports - Some I/O is suspended. This could be the result of a link failure or loss of a director. Other states might indicate the true problem. The VPLEX might be waiting for you to confirm the resumption of I/O.
  - Transitioning - Components of the software are recovering from a previous incident (for example, the loss of a director or the loss of an inter-cluster link).

XtremIO metrics

XtremIO provides metrics for these resource kinds:

- Cluster on page 132
- Data Protection Group on page 134
- Snapshot on page 134
- SSD on page 135
- Storage Controller on page 135
- Volume on page 136
- X-Brick on page 137

Table 63 XtremIO metrics for Cluster

Metric group: Capacity
- Deduplication Ratio: Ratio of inline data reduction (data written to the array compared to physical capacity used).
- Compression Ratio: Compression ratio (unique data on the SSD compared to the physical capacity used).
- Total Efficiency: Amount of disk space saved (based on Volume Capacity/Physical Space Used).
- Thin Provision Savings (%): Disk space actually used compared to disk space allocated.
- Data Reduction Ratio: Amount of disk space used before and after XtremIO deduplication and compression.

Metric group: Capacity > Physical
- Available Capacity (TB): Available capacity in terabytes.
- Remaining Capacity (%): Percent of capacity available.
- Used Capacity (%): Percent of consumed physical capacity.
- Consumed Capacity (TB): Consumed physical capacity in terabytes.
- Total Capacity (TB): Total physical capacity in terabytes.

Metric group: Capacity > Volume
- Available Capacity (TB): Available volume capacity in terabytes.
- Consumed Capacity (TB): Consumed volume capacity in terabytes.
- Total Capacity (TB): Total volume capacity in terabytes.

Metric group: Performance
- Total Bandwidth (MB/s): Total bandwidth in megabytes per second.
- Total Latency (usec): Total latency in microseconds.
- Total Operations (IO/s): Total input/output operations per second.

Metric group: Performance > Read Operations
- Read Bandwidth (MB/s): Total read bandwidth in megabytes per second.
- Read Latency (usec): Total read latency in microseconds.
- Reads (IO/s): Total read input/output operations per second.

Metric group: Performance > Write Operations
- Write Bandwidth (MB/s): Total write bandwidth in megabytes per second.
- Write Latency (usec): Total write latency in microseconds.
- Writes (IO/s): Total write input/output operations per second.

Metric group: Status
- Health State: Health state of the cluster.
- Shared Memory In Use Ratio Level: Shows information on shared memory utilization levels. Possible values are as follows:
  1. Healthy - within normal range: Green
  2. Limited_free_space - ratio exceeds 90%: Yellow
  3. Very_limited_space - ratio exceeds 95%: Orange
  4. No free space - ratio exceeds 99%: Red
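The capacity ratios above follow directly from their definitions. The helpers below restate that arithmetic with hypothetical numbers; the array's own accounting (for example, how "zeroed" space is treated) may differ in detail:

```python
def deduplication_ratio(written_gb: float, unique_gb: float) -> float:
    """Data written to the array versus the unique data remaining after dedupe."""
    return written_gb / unique_gb

def compression_ratio(unique_gb: float, physical_gb: float) -> float:
    """Unique data versus the physical capacity it occupies after compression."""
    return unique_gb / physical_gb

def thin_provision_savings_pct(used_gb: float, allocated_gb: float) -> float:
    """Thin Provision Savings (%): space actually used versus space allocated."""
    return 100.0 * (1.0 - used_gb / allocated_gb)

# Hypothetical: 100 GB written, 50 GB unique after dedupe, 25 GB on SSD
dedupe = deduplication_ratio(100.0, 50.0)   # 2.0
compress = compression_ratio(50.0, 25.0)    # 2.0
data_reduction = dedupe * compress          # 4.0 overall
```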


Table 64 XtremIO metrics for Data Protection Group

Metric group: Performance
- Average SSD Utilization (%): SSD utilization as a percentage.

Table 65 XtremIO metrics for Snapshot

Metric group: Capacity
- Consumed Capacity in XtremIO (GB): Consumed capacity in gigabytes without "zeroed" space.
- Consumed Capacity in VMware (GB): Consumed capacity in gigabytes, including "zeroed" space. Note: this metric is available only when a datastore is built on top of the snapshot. The value of the metric is the consumed datastore capacity, which might not be the same as the consumed snapshot capacity.
- Total Capacity (GB): Total capacity in gigabytes.

Metric group: Performance
- Average Block Size (KB): Average block size in kilobytes.
- Total Bandwidth (MB/s): Total bandwidth in megabytes per second.
- Total Latency (usec): Total latency in microseconds.
- Total Operations (IOPS): Total input/output operations per second.
- Unaligned (%): Percentage of unaligned I/O blocks.

Metric group: Performance > Read Operations
- Average Block Size (KB): Average read block size in kilobytes.
- Average Small Reads (IOPS): Average small reads in input/output operations per second.
- Average Unaligned Reads (IOPS): Average unaligned reads in input/output operations per second.
- Read Bandwidth (MB/s): Read bandwidth in megabytes per second.
- Read Latency (usec): Read latency in microseconds.
- Reads (IOPS): Read input/output operations per second.

Metric group: Performance > Write Operations
- Average Block Size (KB): Average write block size in kilobytes.
- Average Small Writes (IOPS): Average small writes in input/output operations per second.
- Average Unaligned Writes (IOPS): Average unaligned writes in input/output operations per second.
- Write Bandwidth (MB/s): Write bandwidth in megabytes per second.
- Write Latency (usec): Write latency in microseconds.
- Writes (IOPS): Write input/output operations per second.

Metric group: Tag
- Configuration: Can be used in association with Custom Groups to group or filter XtremIO volumes and snapshots by Tag value.

Table 66 XtremIO metrics for SSD

Metric group: Capacity
- Disk Utilization (%): Percentage of disk utilization.

Metric group: Endurance
- Endurance Remaining (%): Percentage of SSD remaining endurance.

Table 67 XtremIO metrics for Storage Controller

Metric group: Configuration
- Encrypted: Storage Controller encryption status.

Metric group: Performance
- CPU 1 Utilization (%): The Storage Controller has two CPU cores. This metric shows the percent utilization of the first core.
- CPU 2 Utilization (%): The Storage Controller has two CPU cores. This metric shows the percent utilization of the second core.

Metric group: Status
- Health State: Storage Controller health status.


Table 68 XtremIO metrics for Volume

Metric group: Capacity
- Consumed Capacity in XtremIO (GB): Consumed capacity in gigabytes without "zeroed" space.
- Consumed Capacity in VMware (GB): Consumed capacity in gigabytes, including "zeroed" space.
- Total Capacity (GB): Total capacity in gigabytes.

Metric group: Performance
- Average Block Size (KB): Average block size in kilobytes.
- Total Bandwidth (MB/s): Total bandwidth in megabytes per second.
- Total Latency (usec): Total latency in microseconds.
- Total Operations (IOPS): Total input/output operations per second.
- Unaligned (%): Percentage of unaligned I/O blocks.

Metric group: Performance > Read Operations
- Average Block Size (KB): Average read block size in kilobytes.
- Average Small Reads (IOPS): Average small reads in input/output operations per second.
- Average Unaligned Reads (IOPS): Average unaligned reads in input/output operations per second.
- Read Bandwidth (MB/s): Read bandwidth in megabytes per second.
- Read Latency (usec): Read latency in microseconds.
- Reads (IOPS): Read input/output operations per second.

Metric group: Performance > Write Operations
- Average Block Size (KB): Average write block size in kilobytes.
- Average Small Writes (IOPS): Average small writes in input/output operations per second.
- Average Unaligned Writes (IOPS): Average unaligned writes in input/output operations per second.
- Write Bandwidth (MB/s): Write bandwidth in megabytes per second.
- Write Latency (usec): Write latency in microseconds.
- Writes (IOPS): Write input/output operations per second.

Metric group: Tag
- Configuration: Can be used in association with Custom Groups to group or filter XtremIO volumes and snapshots by Tag value.


Table 69 XtremIO metrics for X-Brick

Metric group: X-Brick
- Reporting: X-Brick reporting status.

RecoverPoint for Virtual Machines metrics

EMC Storage Analytics provides metrics for RecoverPoint for Virtual Machines resource kinds.

This section contains RecoverPoint for Virtual Machines metrics for the following resource kinds:

- Cluster on page 138
- Consistency Group on page 139
- Copy on page 139
- Journal Volume on page 139
- Link on page 140
- Virtual RecoverPoint Appliance (vRPA) on page 140
- RecoverPoint for Virtual Machines System on page 140
- Replication Set on page 141
- Repository Volume on page 141
- Splitter on page 141
- User Volume on page 141

RecoverPoint metrics for Cluster

Table 70 RecoverPoint metrics for Cluster

Metric Group | Metric | Additional Information
Performance | Incoming Writes (IO/s) | Sum of incoming cluster writes from all child vRPAs
Performance | Incoming Writes (MB/s) | Sum of incoming cluster throughput from all child vRPAs
Summary | Number of Consistency Groups | Sum of all child vRPA consistency groups
Summary | Number of Protected VMDKs | Sum of user volumes that the cluster protects on all virtual machines, including replica virtual machines
Summary | Number of Protected VMs | Sum of virtual machines, including replica virtual machines, that the cluster protects
Summary | Number of vRPAs | Sum of all child vRPAs
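The cluster-level summary metrics above are plain roll-ups of the child vRPA samples. A minimal sketch of that aggregation (the function name is illustrative, not part of the product):

```python
def cluster_incoming_writes(child_vrpa_writes):
    """Cluster-level Incoming Writes: the sum of the samples from all child vRPAs."""
    return sum(child_vrpa_writes)
```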


RecoverPoint metrics for Consistency Group

Table 71 RecoverPoint metrics for Consistency Group

Metric Group | Metric | Additional Information
Performance | Incoming Writes (IO/s) | Sum of incoming consistency group writes per second
Performance | Incoming Writes (MB/s) | Sum of incoming consistency group writes throughput
Status | Enabled | Boolean value that indicates the consistency group is enabled
Protection | Current Protection Window (Hrs) | The farthest time in hours for which RecoverPoint can roll back the consistency group's replica copy
Protection | Current Protection Window Ratio | Ratio of the current protection window for the consistency group's replica copy as compared with your required protection window
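The Current Protection Window Ratio compares how far RecoverPoint can actually roll back against the protection window you require. A sketch of that arithmetic (the function name and signature are illustrative, not part of the product):

```python
def protection_window_ratio(current_window_hrs, required_window_hrs):
    """Ratio of the current protection window to the required window.

    A result of at least 1.0 means the replica copy can be rolled back
    as far as the required protection window demands.
    """
    if required_window_hrs <= 0:
        raise ValueError("required protection window must be positive")
    return current_window_hrs / required_window_hrs
```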

RecoverPoint metrics for Copy

Table 72 RecoverPoint metrics for Copy

Metric Group | Metric | Additional Information
Protection | Current Protection Window (Hrs) | The farthest time in hours for which RecoverPoint can roll back the replica copy
Protection | Current Protection Window Ratio | Ratio of current protection window for the replica copy as compared with your required protection window
Status | Active | Boolean value indicates if the copy is active
Status | Enabled | Boolean value indicates if the copy is enabled
Status | Regulated | Boolean value indicates if the copy is regulated
Status | Removable | Boolean value indicates if the copy is removable
Status | Role | Role of the copy, which is retrieved from the role of the consistency group copy settings
Status | Suspended | Boolean value indicates if the copy is suspended

RecoverPoint metrics for Journal Volume

Table 73 RecoverPoint metrics for Journal Volume

Metric Group | Metric | Additional Information
Capacity | Capacity (GB) | Size of journal volume in GB


RecoverPoint metrics for Link

Table 74 RecoverPoint metrics for Link

Metric Group | Metric | Additional Information
Configuration | RPO | The allowed maximum for lag times of consistency group copies
Configuration | RPO Type | The set type of RPOs to measure
Status | Current Compression Ratio | The compression ratio through the link
Status | Current Lag | Current lag time between the copy and production
Status | Current Lag Type | The type set to measure the current lag time
Protection | Is In Compliance | Exists only with consistency groups in asynchronous replication mode; a yes-no value that indicates if the current lag is in compliance with the RPO
Protection | Current Lag (%) | Exists only with consistency groups in asynchronous replication mode; indicates current lag ratio as compared with RPO

RecoverPoint metrics for virtual RecoverPoint Appliance (vRPA)

Table 75 RecoverPoint metrics for virtual RecoverPoint Appliance (vRPA)

Metric Group | Metric | Additional Information
Performance | CPU Utilization (%) | CPU usage of vRPAs. Note: utilization values appear as decimals (not percentages) and range from 0.0 to 1.0, with a value of 1.0 indicating 100%.
Summary | Incoming Writes (IO/s) | Incoming application writes per second
Summary | Incoming Writes (MB/s) | Incoming application writes for throughput
Summary | Number of Consistency Groups | Number of consistency groups
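Because the vRPA utilization sample is reported as a decimal rather than a percentage, a small conversion is needed before comparing it with percentage-based thresholds. A minimal sketch (the function name is illustrative):

```python
def vrpa_utilization_percent(raw_value):
    """Convert a vRPA CPU utilization sample (reported as 0.0-1.0) to a percentage."""
    if not 0.0 <= raw_value <= 1.0:
        raise ValueError("expected a decimal utilization between 0.0 and 1.0")
    return raw_value * 100.0
```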

RecoverPoint metrics for RecoverPoint for Virtual Machines System

Table 76 RecoverPoint metrics for RecoverPoint for Virtual Machines System

Metric Group | Metric | Additional Information
Summary | Number of RecoverPoint Clusters | Sum of all the clusters in the RecoverPoint system
Summary | Number of Splitters | Sum of all the splitters in the RecoverPoint system


RecoverPoint metrics for Replication Set

Table 77 RecoverPoint metrics for Replication Set

Metric Group | Metric | Additional Information
Capacity | Capacity (GB) | Size of the user volume in GB that the replication set is protecting

RecoverPoint metrics for Repository Volume

Table 78 RecoverPoint metrics for Repository Volume

Metric Group | Metric | Additional Information
Capacity | Capacity (GB) | Size of repository volume in GB

RecoverPoint metrics for Splitter

Table 79 RecoverPoint metrics for Splitter

Metric Group | Metric | Additional Information
Summary | Number of Volumes Attached | Number of volumes attached to the splitter
Summary | Number of ESX Clusters Connected | Number of clusters connecting to the splitter

RecoverPoint metrics for User Volume

Table 80 RecoverPoint metrics for User Volume

Metric Group | Metric | Additional Information
Capacity | Capacity (GB) | Size of user volume
Status | Role | Role of the copy to which the user volume belongs


CHAPTER 5

Views and Reports

This chapter contains the following topics:

- eNAS views and reports .......................................................................................144
- ScaleIO views and reports ................................................................................... 145
- VMAX views and reports ......................................................................................147
- VNX and VNXe views and reports .........................................................................151
- XtremIO views and reports ...................................................................................159


eNAS views and reports

You can create views and reports for the following eNAS metrics.

- eNAS Data Mover (in use) on page 144
- eNAS dVol (in use) on page 144
- eNAS File Pool (in use) on page 144
- eNAS File System on page 145

The eNAS report includes all views and can be exported in CSV and PDF formats.
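A report exported in CSV format can be post-processed with standard tooling. A minimal sketch using Python's csv module; the column names in the sample are illustrative, so match them to the headers in your actual export:

```python
import csv
import io

def column_max(csv_text, column):
    """Return the largest value of one numeric column in an exported CSV report."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return max(float(row[column]) for row in reader)

# Two rows shaped like an eNAS Data Mover (in use) export (hypothetical data).
sample = """Name,Avg. CPU Busy,Max CPU Busy
server_2,12.5,40.0
server_3,7.0,22.5
"""
```

For example, `column_max(sample, "Max CPU Busy")` picks out the busiest Data Mover's peak value.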

Table 81 eNAS Data Mover (in use)

Metric | Unit | Description
Avg. CPU Busy | Percent | eNAS Data Mover average CPU
Max CPU Busy | Percent | eNAS Data Mover maximum CPU
Avg. Total Network Bandwidth | MB/s | eNAS Data Mover average total network bandwidth
Max Total Network Bandwidth | MB/s | eNAS Data Mover maximum total network bandwidth
Type | String | eNAS Data Mover type

Table 82 eNAS dVol (in use)

Metric | Unit | Description
Capacity | GB | eNAS dVol capacity
Avg. Average Service Time | ms/call | Average eNAS dVol average service time
Max Average Service Time | ms/call | Maximum eNAS dVol average service time
Avg. Utilization | Percent | Average eNAS dVol utilization
Max Utilization | Percent | Maximum eNAS dVol utilization
Avg. Total Operations | IO/s | Average eNAS dVol total operations
Max Total Operations | IO/s | Maximum eNAS dVol total operations
Avg. Total Bandwidth | MB/s | Average eNAS dVol total bandwidth
Max Total Bandwidth | MB/s | Maximum eNAS dVol total bandwidth

Table 83 eNAS File Pool (in use)

Metric | Unit | Description
Consumed Capacity | GB | eNAS File Pool consumed capacity
Available Capacity | GB | eNAS File Pool available capacity
Total Capacity | GB | eNAS File Pool total capacity

Table 84 eNAS File System

Metric | Unit | Description
Total Capacity | GB | eNAS File System total capacity
Allocated Capacity | GB | eNAS File System allocated capacity
Consumed Capacity | GB | eNAS File System consumed capacity
Available Capacity | GB | eNAS File System available capacity
Avg. Total Operations | IO/s | eNAS File System average total operations
Max Total Operations | IO/s | eNAS File System maximum total operations
Avg. Total Bandwidth | MB/s | eNAS File System average total bandwidth
Max Total Bandwidth | MB/s | eNAS File System maximum total bandwidth

ScaleIO views and reports

You can create views and reports for the following ScaleIO components:

- ScaleIO Volume on page 145
- ScaleIO Protection Domain on page 146
- ScaleIO SDC on page 146
- ScaleIO SDS on page 146

Table 85 ScaleIO Volume

Metric | Unit | Description
Number of Child Volumes | Count | Number of child volumes
Number of Descendant Volumes | Count | Number of descendant volumes
Number of Mapped SDCs | Count | Number of mapped SDCs
Volume Size | GB | Capacity of volume size
Average Read I/O Size | MB | Volume average read I/O size
Average Write I/O Size | MB | Volume average write I/O size
Total Read IO/s | IOPS | Volume total read IOPS
Total Write IO/s | IOPS | Volume total write IOPS
Total Reads | MB/s | Volume total read MB per second
Total Writes | MB/s | Volume total write MB per second

Table 86 ScaleIO Protection Domain

Metric | Unit | Description
Maximum Capacity | GB | Protection domain maximum capacity
Protected Capacity | GB | Protection domain protected capacity
Snap Used Capacity | GB | Protection domain snap used capacity
Thick Used Capacity | GB | Protection domain thick used capacity
Thin Used Capacity | GB | Protection domain thin used capacity
Unused Capacity | GB | Protection domain unused capacity
Used Capacity | GB | Protection domain used capacity
Average Read I/O Size | MB | Protection domain average read I/O size
Average Write I/O Size | MB | Protection domain average write I/O size
Total Reads | IOPS | Protection domain total read IOPS
Total Writes | IOPS | Protection domain total write IOPS
Total Reads | MB/s | Protection domain total reads MB per second
Total Writes | MB/s | Protection domain total writes MB per second

Table 87 ScaleIO SDC

Metric | Unit | Description
Number of Mapped Volumes | Count | SDC number of mapped volumes
Total Mapped Capacity | GB | SDC total mapped capacity
Average Read I/O Size | MB | SDC average read I/O size
Average Write I/O Size | MB | SDC average write I/O size
Total Reads | IOPS | SDC total read IOPS
Total Writes | IOPS | SDC total write IOPS
Total Reads | MB/s | SDC total reads MB per second
Total Writes | MB/s | SDC total writes MB per second

Table 88 ScaleIO SDS

Metric | Unit | Description
Maximum Capacity | GB | SDS maximum capacity
Snap Used Capacity | GB | SDS snap used capacity
Thick Used Capacity | GB | SDS thick used capacity
Thin Used Capacity | GB | SDS thin used capacity
Unused Capacity | GB | SDS unused capacity
Used Capacity | GB | SDS used capacity
Average Read IO Size | MB | SDS average read IO size
Average Write IO Size | MB | SDS average write IO size
Total Reads | IOPS | SDS total read IOPS
Total Writes | IOPS | SDS total write IOPS
Total Reads | MB/s | SDS total reads MB per second
Total Writes | MB/s | SDS total writes MB per second

Note

The MDM list view does not contain component-specific metrics.

VMAX views and reports

VMAX reports consist of multiple component list views with the supported VMAX metrics. The reports can be exported in CSV and PDF formats. You can create the following views and reports:

- VMAX report—Contains metrics for the following components:
  - Device
  - FAST VP Policy
  - Front-End Director
  - Front-End Port
  - Storage Group
  - Thin Pool
  - Tier
- SRDF report—Contains metrics for the following components:
  - Device
  - R1
  - R2
  - Remote Replica Group
  - SRDF Director
  - SRDF Port
- VMAX3 report—Contains metrics for the following components:
  - Device
  - Front-End Director
  - Front-End Port
  - SLO
  - Storage Group
  - Storage Resource Pool

Table 89 VMAX Storage Group list

Metric | Unit | Description
Total Capacity | GB | Storage group total capacity
Current Size | GB | Storage group current size
Used Capacity | GB | Storage group used capacity
Usable Capacity | GB | Storage group usable capacity
Workload | Percent | Storage group workload
Under Used | Percent | Storage group underused
Reads | IOPS | Storage group reads IOPS
Reads | MB/s | Storage group reads MB per second
Writes | IOPS | Storage group writes IOPS
Writes | MB/s | Storage group writes MB per second
Total Operations | IOPS | Storage group total operations IOPS
Total Bandwidth | MB/s | Storage group total bandwidth MB per second

Table 90 VMAX Device

Metric | Unit | Description
Total Capacity | GB | Device total capacity
Current Size | GB | Device current size
Used Capacity | GB | Device used capacity
Usable Capacity | GB | Device usable capacity
Workload | Percent | Device workload
Full | Percent | Device full
Under Used | Percent | Device underused
Reads | IOPS | Device reads IOPS
Reads | MB/s | Device reads MB per second
Writes | IOPS | Device writes IOPS
Writes | MB/s | Device writes MB per second
Total Operations | IOPS | Device total operations IOPS
Total Bandwidth | MB/s | Device total bandwidth MB per second

Table 91 VMAX Front-End Director

Metric | Unit | Description
Reads | IOPS | Front-end director reads IOPS
Writes | IOPS | Front-end director writes IOPS
Total Operations | IOPS | Front-end director total operations IOPS
Total Bandwidth | MB/s | Front-end director total bandwidth MB per second
Total Hits | IOPS | Front-end director total hits IOPS

Table 92 VMAX Front-End Port List

Metric | Unit | Description
Total Operations | IOPS | Front-end port total operations IOPS
Total Bandwidth | MB/s | Front-end port total bandwidth MB per second

Table 93 VMAX Thin Pool List

Metric | Unit | Description
Total Capacity | GB | Thin pool total capacity
Current Size | GB | Thin pool current size
Used Capacity | GB | Thin pool used capacity
Usable Capacity | GB | Thin pool usable capacity
Workload | Percent | Thin pool workload
Full | Percent | Thin pool full
Under Used | Percent | Thin pool underused

Table 94 VMAX FAST VP Policy List

Metric | Unit | Description
Tier1 Percent in Policy | Percent | FAST VP policy tier 1 percent in policy
Tier2 Percent in Policy | Percent | FAST VP policy tier 2 percent in policy
Tier3 Percent in Policy | Percent | FAST VP policy tier 3 percent in policy
Tier4 Percent in Policy | Percent | FAST VP policy tier 4 percent in policy


Table 95 VMAX SRDF Director List

Metric | Unit | Description
Percent Busy | Percent | SRDF director percent busy
Reads | IOPS | SRDF director reads IOPS
Writes | IOPS | SRDF director writes IOPS
Total Operations | IOPS | SRDF director total operations IOPS
Total Bandwidth | MB/s | SRDF director total bandwidth MB per second
SRDFA Writes | IOPS | SRDF director SRDFA writes IOPS
SRDFA Writes | MB/s | SRDF director SRDFA writes MB per second
SRDFS Writes | IOPS | SRDF director SRDFS writes IOPS
SRDFS Writes | MB/s | SRDF director SRDFS writes MB per second

Table 96 VMAX Remote Replica Group List

Metric | Unit | Description
Avg. Cycle Time | Second | Remote replica group average cycle time
Delta Set Extension Threshold | Integer | Remote replica group delta set extension threshold
Devices in Session | Count | Remote replica group devices in session
HA Repeat Writes | Counts/s | Remote replica group HA repeat writes
Minimum Cycle Time | Second | Remote replica group minimum cycle time
Writes | IOPS | Remote replica group writes IOPS
Writes | MB/s | Remote replica group writes MB per second

Table 97 VMAX Storage Resource Pool

Metric | Unit | Description
Total Capacity | GB | Storage resource pool total capacity
Current Size | GB | Storage resource pool current size
Used Capacity | GB | Storage resource pool used capacity
Usable Capacity | GB | Storage resource pool usable capacity
Workload | Percent | Storage resource pool workload
Full | Percent | Storage resource pool full
Under Used | Percent | Storage resource pool underused
Reads | IOPS | Storage resource pool reads IOPS
Reads | MB/s | Storage resource pool reads MB per second
Writes | IOPS | Storage resource pool writes IOPS
Writes | MB/s | Storage resource pool writes MB per second
Total Operations | IOPS | Storage resource pool total operations IOPS

Note

The current list views of Tier, SRDF Port, R1, R2, and SLO do not contain any component-specific metrics.

VNX and VNXe views and reports

You can create views and reports for VNX and VNXe resources. Several predefined views and templates are also available.

Report templates

The following predefined report templates consist of several list views under the adapter instance:

- VNX Block Report
  - Alerts
  - VNX Storage Pool (in use) list
  - VNX RAID Group (in use) list
  - VNX LUN list
  - VNX Disk (in use) list
  - VNX SP Front-End Port list
- VNX File Report
  - Alerts
  - VNX Data Mover (in use) list
  - VNX File Pool (in use) list
  - VNX File System list
  - VNX dVol (in use) list
- VNXe Report
  - Alerts
  - VNXe Storage Pool (in use) list
  - VNXe LUN list
  - VNXe File System list
  - VNXe Disk (in use) list

Predefined views

The following sections describe the available predefined views:

- Alerts on page 152
- VNX Data Mover on page 152
- VNX File System on page 153
- VNX File Pool on page 151
- VNX dVol on page 154
- VNX LUN on page 154
- VNX Tier on page 155
- VNX FAST Cache on page 155
- VNX Storage Pool on page 155
- VNX Disk on page 155
- VNX Storage Processor on page 156
- VNX Storage Processor Front End Port on page 156
- VNX RAID Group on page 157
- VNXe File System on page 157
- VNXe LUN on page 157
- VNXe Tier on page 158
- VNXe Storage Pool on page 158
- VNXe Disk on page 158
- VNXe Storage Processor on page 159

Alert definitions apply to all resources.

Table 98 Alerts

Metric | Description
Criticality level | The criticality level of the alert—Warning, Immediate, or Critical
Object name | Name of the impacted object
Object kind | Resource kind of the impacted object
Alert impact | Impacted badge (Risk, Health, or Efficiency) of the alert
Start time | Start time of the alert
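Exported alert rows can be filtered on any of these columns. A minimal sketch that keeps only alerts at one criticality level; the row dictionaries and their keys mirror the column names above, and the function name is illustrative:

```python
def alerts_by_criticality(alerts, level):
    """Filter exported alert rows on the Criticality level column."""
    return [a for a in alerts if a.get("Criticality level") == level]

# Hypothetical rows shaped like the Alerts view.
rows = [
    {"Criticality level": "Critical", "Object name": "Pool 0"},
    {"Criticality level": "Warning", "Object name": "LUN 12"},
]
```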

Table 99 VNX Data Mover

Metric group | Metric | Description
CPU | Busy (%) | VNX Data Mover CPU busy trend
Network | NFS Reads (MB/s) | VNX Data Mover NFS bandwidth trend
Network | NFS Writes (MB/s) | VNX Data Mover NFS bandwidth trend
Network | NFS Total Bandwidth (MB/s) | VNX Data Mover NFS bandwidth trend
Network | In Bandwidth (MB/s) | VNX Data Mover network bandwidth trend
Network | Out Bandwidth (MB/s) | VNX Data Mover network bandwidth trend
Network | Total Bandwidth (MB/s) | VNX Data Mover network bandwidth trend
Network | NFS Reads (IO/s) | VNX Data Mover NFS IOPS trend
Network | NFS Writes (IO/s) | VNX Data Mover NFS IOPS trend
Network | NFS Total Operations (IO/s) | VNX Data Mover NFS IOPS trend
CPU | % Busy - Average | VNX Data Mover (in use)
CPU | % Busy - Max | VNX Data Mover (in use)
Network | Total Network Bandwidth - Average (MB/s) | VNX Data Mover (in use)
Network | Total Network Bandwidth - Max (MB/s) | VNX Data Mover (in use)
Configuration | Data Mover Type | VNX Data Mover (in use)

Table 100 VNX File System

Metric group | Metric | Description
Performance | Total Operations (IO/s) | VNX file system IOPS trend
Performance | Reads (IO/s) | VNX file system IOPS trend
Performance | Writes (IO/s) | VNX file system IOPS trend
Performance | Total Bandwidth (MB/s) | VNX file system bandwidth trend
Performance | Reads (MB/s) | VNX file system bandwidth trend
Performance | Writes (MB/s) | VNX file system bandwidth trend
Capacity | Consumed Capacity (GB) | VNX file system capacity trend
Capacity | Total Capacity (GB) | VNX file system capacity trend
Capacity | Total Capacity (GB) | VNX file system list
Capacity | Allocated Capacity (GB) | VNX file system list
Capacity | Consumed Capacity (GB) | VNX file system list
Capacity | Available Capacity (GB) | VNX file system list
Performance | Avg. Total Operations (IO/s) | VNX file system list
Performance | Max Total Operations (IO/s) | VNX file system list
Performance | Avg. Total Bandwidth (MB/s) | VNX file system list
Performance | Max Total Bandwidth (MB/s) | VNX file system list

Table 101 VNX File Pool

Metric group | Metric | Description
Capacity | Consumed Capacity (GB) | VNX file pool capacity trend
Capacity | Total Capacity (GB) | VNX file pool capacity trend
Capacity | Available Capacity (GB) | VNX file pool (in use) list
Capacity | Consumed Capacity (GB) | VNX file pool (in use) list
Capacity | Total Capacity (GB) | VNX file pool (in use) list

Table 102 VNX dVol

Metric group | Metric | Description
Performance | Utilization (%) | VNX dVol utilization trend
Performance | Total Operations (IO/s) | VNX dVol IOPS trend
Performance | Reads (IO/s) | VNX dVol IOPS trend
Performance | Writes (IO/s) | VNX dVol IOPS trend
Performance | Total Bandwidth (MB/s) | VNX dVol bandwidth trend
Performance | Reads (MB/s) | VNX dVol bandwidth trend
Performance | Writes (MB/s) | VNX dVol bandwidth trend
Capacity | Capacity (GB) | VNX dVol (in use) list
Performance | Avg. Average Service Time (uSec/call) | VNX dVol (in use) list
Performance | Max Average Service Time (uSec/call) | VNX dVol (in use) list
Performance | Avg. Utilization (%) | VNX dVol (in use) list
Performance | Max Utilization (%) | VNX dVol (in use) list
Performance | Avg. Total Operations (IO/s) | VNX dVol (in use) list
Performance | Max Total Operations (IO/s) | VNX dVol (in use) list
Performance | Avg. Total Bandwidth (MB/s) | VNX dVol (in use) list
Performance | Max Total Bandwidth (MB/s) | VNX dVol (in use) list

Table 103 VNX LUN

Metric group | Metric | Description
Performance | Total Operations (IO/s) | VNX LUN IOPS trend
Performance | Reads (IO/s) | VNX LUN IOPS trend
Performance | Writes (IO/s) | VNX LUN IOPS trend
Performance | Total Bandwidth (MB/s) | VNX LUN bandwidth trend
Performance | Reads (MB/s) | VNX LUN bandwidth trend
Performance | Writes (MB/s) | VNX LUN bandwidth trend
Performance | Total Latency (ms) | VNX LUN total latency trend
Performance | Avg. Total Operations (IO/s) | VNX LUN list
Performance | Max Total Operations (IO/s) | VNX LUN list
Performance | Avg. Total Bandwidth (MB/s) | VNX LUN list
Performance | Max Total Bandwidth (MB/s) | VNX LUN list
Performance | Avg. Total Latency (ms) | VNX LUN list
Performance | Max Total Latency (ms) | VNX LUN list
Capacity | Total Capacity (GB) | VNX LUN list

Table 104 VNX Tier

Metric group | Metric | Description
Capacity | Consumed Capacity (GB) | VNX Tier capacity trend
Capacity | Total Capacity (GB) | VNX Tier capacity trend

Table 105 VNX FAST Cache

Metric group | Metric | Description
Performance | Read Cache Hit Ratio (%) | VNX FAST Cache hit ratio trend
Performance | Write Cache Hit Ratio (%) | VNX FAST Cache hit ratio trend

Table 106 VNX Storage Pool

Metric group | Metric | Description
Capacity | Consumed Capacity (GB) | VNX storage pool capacity trend
Capacity | Total Capacity (GB) | VNX storage pool capacity trend
Capacity | Available Capacity (GB) | VNX storage pool (in use) list
Capacity | Consumed Capacity (GB) | VNX storage pool (in use) list
Capacity | Full (%) | VNX storage pool (in use) list
Capacity | Subscribed (%) | VNX storage pool (in use) list
Configuration | LUN Count | VNX storage pool (in use) list
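Full (%) and Subscribed (%) are both expressed against the pool's total capacity; with thin provisioning, Subscribed (%) can exceed 100 even while the pool still has free space. A sketch of the underlying arithmetic, using assumed formulas shown for orientation only:

```python
def percent_of_total(value_gb, total_gb):
    """Express a capacity figure as a percentage of total pool capacity."""
    if total_gb <= 0:
        raise ValueError("total capacity must be positive")
    return 100.0 * value_gb / total_gb

# A hypothetical thinly provisioned pool: 600 GB consumed and 2500 GB
# subscribed against 1000 GB of total capacity.
full_pct = percent_of_total(600, 1000)         # Full (%)
subscribed_pct = percent_of_total(2500, 1000)  # Subscribed (%) above 100: oversubscribed
```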

Table 107 VNX Disk

Metric group | Metric | Description
Performance | Total Operations (IO/s) | VNX disk IOPS trend
Performance | Reads (IO/s) | VNX disk IOPS trend
Performance | Writes (IO/s) | VNX disk IOPS trend
Performance | Total Bandwidth (MB/s) | VNX disk bandwidth (MB/s) trend
Performance | Reads (MB/s) | VNX disk bandwidth (MB/s) trend
Performance | Writes (MB/s) | VNX disk bandwidth (MB/s) trend
Performance | Total Latency (ms) | VNX disk Total Latency (ms) trend
Performance | Busy (%) | VNX disk busy (%) trend
Capacity | Capacity (GB) | VNX disk (in use) list
Performance | Avg. Total Operations (IO/s) | VNX disk (in use) list
Performance | Max Total Operations (IO/s) | VNX disk (in use) list
Performance | Avg. Total Bandwidth (MB/s) | VNX disk (in use) list
Performance | Max Total Bandwidth (MB/s) | VNX disk (in use) list
Performance | Avg. Total Latency (ms) | VNX disk (in use) list
Performance | Max Total Latency (ms) | VNX disk (in use) list
Performance | Avg. Busy (%) | VNX disk (in use) list
Performance | Max Busy (%) | VNX disk (in use) list
Configuration | Type | VNX disk (in use) list

Table 108 VNX Storage Processor

Metric group | Metric | Description
CPU | CPU Busy (%) | VNX storage processor CPU busy trend
Disk | Disk Total Operations (IO/s) | VNX storage processor disk IOPS trend
Disk | Disk Reads (IO/s) | VNX storage processor disk IOPS trend
Disk | Disk Writes (IO/s) | VNX storage processor disk IOPS trend
Disk | Disk Total Bandwidth (MB/s) | VNX storage processor disk bandwidth trend
Disk | Disk Reads (MB/s) | VNX storage processor disk bandwidth trend
Disk | Disk Writes (MB/s) | VNX storage processor disk bandwidth trend

Table 109 VNX Storage Processor Front End Port

Metric group | Metric | Description
Performance | Total Operations (IO/s) | VNX SP front end port IOPS trend
Performance | Reads (IO/s) | VNX SP front end port IOPS trend
Performance | Writes (IO/s) | VNX SP front end port IOPS trend
Performance | Total Bandwidth (MB/s) | VNX SP front end port bandwidth trend
Performance | Reads (MB/s) | VNX SP front end port bandwidth trend
Performance | Writes (MB/s) | VNX SP front end port bandwidth trend
Performance | Avg. Total Operations (IO/s) | VNX SP front end port list
Performance | Max Total Operations (IO/s) | VNX SP front end port list
Performance | Avg. Total Bandwidth (MB/s) | VNX SP front end port list
Performance | Max Total Bandwidth (MB/s) | VNX SP front end port list

Table 110 VNX RAID Group

Metric group | Metric | Description
Capacity | Available Capacity (GB) | VNX RAID group (in use) list
Capacity | Total Capacity (GB) | VNX RAID group (in use) list
Capacity | Full (%) | VNX RAID group (in use) list
Configuration | Disk Count | VNX RAID group (in use) list
Configuration | LUN Count | VNX RAID group (in use) list
Configuration | Max Disks | VNX RAID group (in use) list
Configuration | Max LUNs | VNX RAID group (in use) list

Table 111 VNXe File System

Metric group | Metric | Description
Capacity | Consumed Capacity (GB) | VNXe file system capacity trend
Capacity | Total Capacity (GB) | VNXe file system capacity trend
Capacity | Total Capacity (GB) | VNXe file system list
Capacity | Allocated Capacity (GB) | VNXe file system list
Capacity | Consumed Capacity (GB) | VNXe file system list
Capacity | Available Capacity (GB) | VNXe file system list

Table 112 VNXe LUN

Metric group | Metric | Description
Performance | Reads (IO/s) | VNXe LUN IOPS trend
Performance | Writes (IO/s) | VNXe LUN IOPS trend
Performance | Reads (MB/s) | VNXe LUN bandwidth trend
Performance | Writes (MB/s) | VNXe LUN bandwidth trend
Capacity | Total Capacity (GB) | VNXe LUN list
Performance | Avg. Reads (IO/s) | VNXe LUN list
Performance | Max Reads (IO/s) | VNXe LUN list
Performance | Avg. Writes (IO/s) | VNXe LUN list
Performance | Max Writes (IO/s) | VNXe LUN list
Performance | Avg. Reads (MB/s) | VNXe LUN list
Performance | Max Reads (MB/s) | VNXe LUN list
Performance | Avg. Writes (MB/s) | VNXe LUN list
Performance | Max Writes (MB/s) | VNXe LUN list

Table 113 VNXe Tier

Metric group | Metric | Description
Capacity | Consumed Capacity (GB) | VNXe tier capacity trend
Capacity | Total Capacity (GB) | VNXe tier capacity trend

Table 114 VNXe Storage Pool

Metric group | Metric | Description
Capacity | Consumed Capacity (GB) | VNXe storage pool capacity trend
Capacity | Total Capacity (GB) | VNXe storage pool capacity trend
Capacity | Consumed Capacity (GB) | VNXe storage pool (in use) list
Capacity | Total Capacity (GB) | VNXe storage pool (in use) list
Capacity | Full (%) | VNXe storage pool (in use) list
Capacity | Subscribed (%) | VNXe storage pool (in use) list

Table 115 VNXe Disk

Metric group | Metric | Description
Performance | Reads (IO/s) | VNXe disk IOPS trend
Performance | Writes (IO/s) | VNXe disk IOPS trend
Performance | Reads (MB/s) | VNXe disk bandwidth trend
Performance | Writes (MB/s) | VNXe disk bandwidth trend
Performance | Busy (%) | VNXe disk busy trend
Capacity | Size (GB) | VNXe disk (in use) list
Performance | Avg. Reads (IO/s) | VNXe disk (in use) list
Performance | Max Reads (IO/s) | VNXe disk (in use) list
Performance | Avg. Writes (IO/s) | VNXe disk (in use) list
Performance | Max Writes (IO/s) | VNXe disk (in use) list
Performance | Avg. Reads (MB/s) | VNXe disk (in use) list
Performance | Max Reads (MB/s) | VNXe disk (in use) list
Performance | Avg. Writes (MB/s) | VNXe disk (in use) list
Performance | Max Writes (MB/s) | VNXe disk (in use) list
Performance | Avg. Busy (%) | VNXe disk (in use) list
Performance | Max Busy (%) | VNXe disk (in use) list
Configuration | Type | VNXe disk (in use) list

Table 116 VNXe Storage Processor

Metric group | Metric | Description
Performance | Busy (%) | VNXe storage processor busy trend
Performance | Reads (IO/s) | VNXe storage processor IOPS trend
Performance | Writes (IO/s) | VNXe storage processor IOPS trend
Performance | Reads (MB/s) | VNXe storage processor bandwidth trend
Performance | Writes (MB/s) | VNXe storage processor bandwidth trend
Network | NFS Reads (IO/s) | VNXe storage processor NFS IOPS trend
Network | NFS Writes (IO/s) | VNXe storage processor NFS IOPS trend
Network | NFS Reads (MB/s) | VNXe storage processor NFS bandwidth trend
Network | NFS Writes (MB/s) | VNXe storage processor NFS bandwidth trend


XtremIO views and reports

You can create views and reports for the following XtremIO metrics:

- Cluster capacity consumption summary on page 160
- Health state on page 160
- LUN list on page 160
- Performance summary on page 161
- Storage efficiency summary on page 161

The XtremIO report includes all views and can be exported in CSV and PDF formats.


Table 117 XtremIO cluster capacity consumption

Metric | Unit | Description
Available Capacity | TB, physical | Available physical cluster capacity
Consumed Capacity | TB, physical | Consumed physical cluster capacity
Total Capacity | TB, physical | Total physical cluster capacity
Available Capacity | TB, volume | Available cluster volume capacity
Consumed Capacity | TB, volume | Consumed cluster volume capacity
Total Capacity | TB, volume | Total cluster volume capacity

Table 118 XtremIO health state

Metric | Description
Cluster health state | Overall health of the cluster
Storage Controller Health State | Overall health state of the Storage Controller and its contained components

Table 119 XtremIO LUN

Metric | Unit | Description
Read Bandwidth | MB/s | Volume|Performance:Read Operations|Read Bandwidth
Read Latency | ms | Volume|Performance:Read Operations|Read Latency
Reads | IOPS | Volume|Performance:Read Operations|Reads
Write Bandwidth | MB/s | Volume|Performance:Write Operations|Write Bandwidth
Write Latency | ms | Volume|Performance:Write Operations|Write Latency
Writes | IOPS | Volume|Performance:Write Operations|Writes
Total Bandwidth | MB/s | Volume|Performance|Total Bandwidth
Total Latency | ms | Volume|Performance|Total Latency
Total Operations | IOPS | Volume|Performance|Total Operations
Consumed Capacity in VMware | GB | Volume|Capacity|Consumed Capacity in VMware
Consumed Capacity in XtremIO | GB | Volume|Capacity|Consumed Capacity in XtremIO
Total Capacity | GB | Volume|Capacity|Total Capacity
Summary | | Max, Min, Average

Table 120 XtremIO performance

Metric | Unit | Description
Read Bandwidth | MB/s | Cluster|Performance:Read Operations|Read Bandwidth
Read Latency | ms | Cluster|Performance:Read Operations|Read Latency
Reads | IOPS | Cluster|Performance:Read Operations|Reads
Write Bandwidth | MB/s | Cluster|Performance:Write Operations|Write Bandwidth
Write Latency | ms | Cluster|Performance:Write Operations|Write Latency
Writes | IOPS | Cluster|Performance:Write Operations|Writes
Total Bandwidth | MB/s | Cluster|Performance|Total Bandwidth
Total Latency | ms | Cluster|Performance|Total Latency
Total Operations | IOPS | Cluster|Performance|Total Operations
CPU 1 Utilization | Percent | Storage Controller|Performance|CPU 1 Utilization
CPU 2 Utilization | Percent | Storage Controller|Performance|CPU 2 Utilization
Summary | | Max, Min, Average

Table 121 XtremIO storage efficiency

Metric | Unit | Description
Deduplication Ratio | Percent | Cluster|Capacity|Deduplication Ratio
Compression Ratio | Percent | Cluster|Capacity|Compression Ratio
Thin Provision Savings | Percent | Cluster|Capacity|Thin Provision Savings
SSD Endurance Remaining | Percent | SSD|Endurance|Endurance Remaining
Disk Utilization | Percent | SSD|Capacity|Disk Utilization
Summary | | Average
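The deduplication and compression ratios are often combined into a single overall data-reduction figure. The sketch below assumes the two ratios multiply (XtremIO reports each as an N:1 ratio); the function name is illustrative, not part of the product:

```python
def overall_data_reduction(dedup_ratio, compression_ratio):
    """Combine deduplication and compression ratios into one N:1 figure.

    For example, 2:1 deduplication with 1.5:1 compression yields
    an overall 3:1 data reduction.
    """
    return dedup_ratio * compression_ratio
```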

CHAPTER 6

Remedial Actions on EMC Storage Systems

This chapter contains the following topics:

- Remedial actions overview .................................................................................. 164
- Clearing metric queries on vVNX ..........................................................................164
- Changing the service level objective (SLO) for a VMAX3 storage group .................164
- Changing the tier policy for a VNXe File system ....................................................165
- Changing the tier policy for a VNX or VNXe LUN ....................................................165
- Expanding VMAX devices .................................................................................... 165
- Extending file system capacity on VNXe storage .................................................. 166
- Enabling performance statistics for VNX Block .....................................................166
- Enabling FAST Cache on VNXe storage pools ....................................................... 166
- Enabling FAST Cache on a VNX Block storage pool ...............................................167
- Expanding LUN capacity on VNX or VNXe ............................................................. 167
- Extending file system capacity on VNX or eNAS storage ....................................... 168
- Migrating a VNX LUN to another storage pool .......................................................168
- Rebooting a Data Mover on VNX storage ..............................................................168
- Rebooting a VNX storage processor ..................................................................... 169
- Extending volumes on EMC XtremIO storage systems .......................................... 169

Remedial Actions on EMC Storage Systems

Remedial actions overview

Various remedial actions are available in vRealize Operations Manager, depending on the storage system. The Actions menu is available on the storage system's resource page, and remedial actions can also be initiated from the details page for an alert.

For these actions to be available, ensure that the Management Pack for EMC storage systems (EMC Adapter) is installed and the EMC Adapter instances are configured.

Other requirements:

- The EMC Adapter instances require the use of Admin credentials on the storage array.
- The vRealize Operations Manager user must have an Admin role that can access the Actions menu.

Clearing metric queries on vVNX

This action is available as a recommendation when an alert is generated for a vVNX array.

Procedure

1. Under Recommendations, click Clear Metric Queries.

2. In the Clear Metric Queries dialog box, click Begin Action.

Results

The metrics are cleared and data collection resumes.

Changing the service level objective (SLO) for a VMAX3 storage group

This action is available from the Actions menu when a VMAX3 storage group is selected.

Procedure

1. From the summary page of a VMAX3 storage group, click Actions > Change SLO.

2. In the Change SLO dialog box, provide the following information:

   Option         Description
   New SLO        New SLO for the storage group
   New Workload   New workload type for the storage group

3. Click OK.

Results

The SLO for the storage group is changed.


Changing the tier policy for a VNXe File system

This action is available in the Actions menu when you select a VNXe File system on the Summary tab.

Procedure

1. From the File system's Summary page, click Actions > Change VNXe FileSystem Tiering Policy.

2. In the dialog box, select a tiering policy and click Begin Action.

Results

The policy is changed. You can check the status under Recent Tasks.

Changing the tier policy for a VNX or VNXe LUN

This action is available from the Actions menu when a VNX or VNXe LUN is selected on the Summary tab.

Procedure

1. From the Summary tab of a VNX or VNXe LUN, click Actions > Change Tiering Policy.

2. In the Change Tiering Policy dialog box, select a tiering policy and click Begin Action.

Results

The policy is changed. You can check the status under Recent Tasks.

Expanding VMAX devices

This action is available from the Actions menu when a VMAX device is selected.

The following restrictions exist for this action:

- ESA can expand only thin devices on the VMAX.
- Expanding VMAX devices of type RDFx+TDEV or BCV+TDEV is not currently supported.
- Expanding VMAX3 devices is not currently supported.

Procedure

1. From the summary page of a VMAX device, click Actions > Expand Device.

2. In the Expand Device dialog box, type the additional capacity in megabytes to be added to the device.

3. Click OK.

Results

The capacity of the device is expanded by the amount specified.
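Note that the Expand Device dialog takes the additional capacity in megabytes, not a new total size. As an illustration only (this helper is hypothetical and not part of ESA or any EMC tool), the value can be derived from the current and desired sizes:

```python
def additional_mb(current_gb, desired_gb):
    """Convert a desired total device size into the additional
    megabytes the Expand Device dialog expects (1 GB = 1024 MB).
    Hypothetical helper for illustration only."""
    if desired_gb <= current_gb:
        raise ValueError("desired size must exceed current size")
    return int((desired_gb - current_gb) * 1024)

# Example: growing a 100 GB thin device to 150 GB
print(additional_mb(100, 150))  # 51200
```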


Extending file system capacity on VNXe storage

This action is available from the Actions menu when a VNXe file system is selected or under a recommended action when a file system's used capacity is high.

Procedure

1. Do one of the following:

   - Select a VNXe file system and click Actions > Extend VNXe File System.
   - From the alert details window for a VNXe file system, click Extend VNXe File System.

2. In the Extend VNXe File System dialog box, type a number in the New Size text box, and then click OK.

3. Click OK in the status dialog box.

Results

The file system size is increased and the alert (if present) is cancelled.

Enabling performance statistics for VNX Block

This action is available only as a recommended action when an error or warning occurs on a VNX Block array. It is never available from the vRealize Operations Manager Actions menu.

Procedure

1. From the Summary page of the VNX Block array that reports an error or warning, click

Enable Statistics.

2. In the Enable Statistics dialog box, click OK.

Results

You can confirm the action by checking the Message column under Recent Tasks.

Enabling FAST Cache on VNXe storage pools

This action is available from the Actions menu when a VNXe storage pool is selected and FAST Cache is enabled and configured.

Procedure

1. Under Details for the VNXe storage pool, select Actions > Configure FAST Cache.

2. In the Configure FAST Cache dialog box, click Begin Action.

Results

FAST Cache is enabled. You can check the status under Recent Tasks.


Enabling FAST Cache on a VNX Block storage pool

This action is available from the Actions menu when a VNX Block storage pool is selected or as a recommended action when FAST Cache is configured and available.

Procedure

1. Select the Summary tab for a VNX Block storage pool.

2. Do one of the following:

   - From the Actions menu, select Enable FAST Cache.
   - Under Recommendations, click Configure FAST Cache.

3. In the Configure FAST Cache dialog box, click OK.

Results

FAST Cache is enabled. You can check the status under Recent Tasks.

Expanding LUN capacity on VNX or VNXe

This action is available from the Actions menu when a VNX or VNXe LUN is selected.

Procedure

1. Select a VNX or VNXe LUN.

2. Under Actions, click Expand.

3. Type the new size and select the size qualifier.

4. Click Begin Action.

Results

The LUN is expanded. You can check the status under Recent Tasks.


Extending file system capacity on VNX or eNAS storage

This action is available from the Actions menu when a VNX or eNAS file system is selected or under a recommended action when a file system's used capacity is high.

Procedure

1. Do one of the following:

   - Select a VNX or eNAS file system and click Actions > Extend File System.
   - From the alert details window for a VNX or eNAS file system, click Extend File System.

2. In the Extend File System dialog box, type a number in the New Size text box, and then click OK.

3. Click OK in the status dialog box.

Results

The file system size is increased and the alert (if present) is cancelled.

Migrating a VNX LUN to another storage pool

This action is available from the vRealize Operations Manager Actions menu.

Procedure

1. From the Summary page of the VNX LUN, click Actions > Migrate.

2. In the Migrate dialog box, provide the following information:

   - Storage Pool Type: Select Pool or RAID Group.
   - Storage Pool Name: Type the name of the pool to migrate to.
   - Migration Rate: Select Low, Medium, High, or ASAP.

3. Click OK.

Results

The LUN is migrated.

Rebooting a Data Mover on VNX storage

This action is available from the Actions menu when a VNX Data Mover is selected or under a recommended action when the health state of the Data Mover has an error.

Procedure

1. Do one of the following:

   - Select a VNX Data Mover and click Actions > Reboot Data Mover.
   - From the alert details window for a VNX Data Mover, click Reboot Data Mover.

2. In the Reboot Data Mover dialog box, click OK.

Results

The Data Mover is restarted and the alert is cancelled.


Rebooting a VNX storage processor

This action is available from the Actions menu on the Summary tab for the storage processor or as a recommendation when the storage processor cannot be accessed.

Procedure

1. Do one of the following:

   - On the Summary tab for the storage processor, click Actions > Reboot Storage Processor.
   - Under Recommendations, click Reboot Storage Processor.

2. In the Reboot Storage Processor dialog box, click Begin Action.

Results

The storage processor is restarted. This could take several minutes. You can check the status under Recent Tasks.

Extending volumes on EMC XtremIO storage systems

This action is available from the Actions menu when a Volume is selected or under a recommended action when a Volume's used capacity is high.

Procedure

1. Do one of the following:

   - Select an XtremIO volume and click Actions > Extend Volume.
   - From the alert details window for an XtremIO volume, click Extend Volume.

2. In the Extend Volume dialog box, type a number in the New Size text box, and then click OK.

3. Click OK in the status dialog box.

Results

The volume size is increased and the alert (if present) is cancelled.


CHAPTER 7

Troubleshooting

This chapter contains the following topics:

Badges for monitoring resources ...................................................... 172
Navigating inventory trees ........................................................... 172
Symptoms, alerts, and recommendations for EMC Adapter instances ...................... 173
Event correlation .................................................................... 175
Launching Unisphere .................................................................. 200
Installation logs .................................................................... 200
Log Insight overview ................................................................. 201
Error handling and event logging ..................................................... 204
Log file sizes and rollover counts ................................................... 205
Editing the Collection Interval for a resource ....................................... 207
Configuring the thread count for an adapter instance ................................. 207
Connecting to vRealize Operations Manager by using SSH ............................... 208
Frequently asked questions ........................................................... 209


Badges for monitoring resources

This topic describes the use of vRealize Operations Manager badges to monitor EMC Storage Analytics resources.

vRealize Operations Manager enables you to analyze the capacity, workload, and stress of supported resource objects. vRealize Operations Manager badges are available for these EMC products: VNX Block, VNX File, VNXe, and VMAX.

The badges include:

Workload

The Workload badge defines the current workload of a monitored resource. It displays a breakdown of the workload based on supported metrics.

Stress

The Stress badge is similar to the Workload badge but defines the workload over a period of time. The Stress badge displays one-hour time slices over the period of a week. The color of each slice reflects the stress status of the resource.

Capacity

The Capacity badge displays the percentage of a resource that is currently consumed and the remaining capacity for the resource.

Note

Depending on the resource and supported metrics, full capacity is sometimes defined as 100% (for example, Busy %). Full capacity can also be defined by the maximum observed value (for example, Total Operations IO/s).

Time Remaining

This badge is calculated from the Capacity badge and estimates when the resource will reach full capacity.

The badges are based on a default policy that is defined in vRealize Operations Manager for each resource kind.
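The Time Remaining estimate can be illustrated with a simple linear extrapolation over recent capacity samples. This is only a minimal sketch of the idea; vRealize Operations Manager's actual capacity model is its own and is not reproduced here:

```python
def time_remaining_days(samples, interval_days=1.0):
    """Estimate days until 100% used from equally spaced
    capacity-used (%) samples, using a simple linear trend.
    Returns None when usage is flat or shrinking.
    Illustrative sketch only, not the vRealize algorithm."""
    if len(samples) < 2:
        return None
    # average growth in percentage points per sampling interval
    growth = (samples[-1] - samples[0]) / (len(samples) - 1)
    if growth <= 0:
        return None
    return (100.0 - samples[-1]) / growth * interval_days

# A pool growing 2% per day, currently at 84%, has about 8 days left:
print(time_remaining_days([80, 82, 84]))  # 8.0
```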

Navigating inventory trees

This topic describes how to navigate vRealize Operations Manager inventory trees for EMC resource objects.

Navigating inventory trees in vRealize Operations Manager can help you troubleshoot problems you encounter with EMC resources.

Note

vRealize Operations Manager inventory trees are available for these EMC products: VNX Block, VNX File, VNXe, VMAX, and VMAX3.

Procedure

1. Log into vRealize Operations Manager.

2. Open the Environment Overview.

3. Locate Inventory Trees.


4. Click the tree name to view its nodes. Click > to expand the list to view objects under the selected node.

Symptoms, alerts, and recommendations for EMC Adapter instances

This topic describes the symptoms, alerts, and recommendations that are displayed in vRealize Operations Manager for EMC Adapter instances.

Note

You can view symptoms, alerts, and recommendations in vRealize Operations Manager for these EMC products: VNX Block, VNX File, VNXe, VMAX, VPLEX, XtremIO, and RecoverPoint for Virtual Machines.

You can view symptoms, alerts, and recommendations for EMC Adapter instances through the vRealize Operations Manager GUI. EMC Storage Analytics generates the alerts, which appear with other alerts that VMware generates. EMC Storage Analytics defines the alerts, symptoms, and recommendations for resources that the EMC Adapter instance monitors. You can view the symptoms, alerts, and recommendations in these vRealize Operations Manager windows.

Home dashboard

The vRealize Operations Manager home page dashboard displays EMC Storage Analytics symptoms, alerts, and recommendations along with VMware-generated alerts. You can view health, risk, and efficiency alerts, listed in order of severity.

Alerts Overview

You can view EMC Storage Analytics alerts along with VMware-generated alerts in the Alerts Overview window. In this view, vRealize Operations Manager groups the alerts in health, risk, and efficiency categories.

Alert Details

This vRealize Operations Manager view displays detailed properties of a selected alert. Properties include title, description, related resources, type, subtype, status, impact, criticality, and alert start time. This view also shows the symptoms that triggered the alert as well as recommendations for responding to the alert.

Summary

In the Summary view for resource details, vRealize Operations Manager displays the alerts for the selected resource. It also displays alerts for the children of the selected resource, which affect the badge color of the selected resource.

Symptom definition

You can find symptom definitions for EMC Storage Analytics-generated alerts in the Definitions Overview (configuration page). Each definition includes the resource kind and metric key, and lists EMC Adapter as the Adapter Kind.

Recommendations

You can find the recommendation descriptions for EMC Storage Analytics-generated alerts in the Recommendations Overview (configuration page).


Alert definition

You can find alert definitions for EMC Storage Analytics-generated alerts in the Alert Definitions Overview (configuration page). Each definition includes the resource kind, type of alert, criticality, and impact (health, risk, or efficiency alert).


Event correlation

Event correlation enables users to correlate alerts with the resources that generate them.

Event correlation is available for:

- VNX Block
- VNX File

EMC Adapter instances registered with the vRealize Operations Manager monitor events on select resources. These events appear as alerts in vRealize Operations Manager. The events are associated with the resources that generate them and aid the user in troubleshooting problems that may occur.

vRealize Operations Manager manages the life cycle of an alert and will cancel an active alert based on its rules. For example, vRealize Operations Manager may cancel an alert if EMC Storage Analytics no longer reports it.

vRealize Operations Manager-generated events influence the health score calculation for select resources. For example, in the RESOURCE:DETAILS pane for a selected resource, vRealize Operations Manager-generated events that contribute to the health score appear as alerts.

vRealize Operations Manager only generates events and associates them with the resources that triggered them. vRealize Operations Manager determines how the alerts appear and how they affect the health scores of the related resources.

Note

When a resource is removed, vRealize Operations Manager automatically removes existing alerts associated with the resource, and the alerts no longer appear in the user interface.

Viewing all alerts

This procedure shows you how to view a list of all the alerts in the vRealize Operations Manager system.

Procedure

1. Log into the vRealize Operations Manager user interface.

2. From the vRealize Operations Manager menu, select ALERTS > ALERTS OVERVIEW.

A list of alerts appears in the ALERTS OVERVIEW window.

3. (Optional) To refine your search, use the tools in the menu bar. For example, select a start and end date or enter a search string.

4. (Optional) To view a summary of information about a specific alert, select the alert and double-click it.

The ALERT SUMMARY window appears and provides reason, impact, and root cause information for the alert.


Finding resource alerts

An alert generated by EMC Storage Analytics is associated with a resource. This procedure shows you how to find an alert for a specific resource.

Procedure

1. Log into the vRealize Operations Manager user interface.

2. Select the resource from one of the dashboard views.

The number that appears on the alert icon represents the number of alerts for this resource.

3. Click the Show Alerts icon on the menu bar to view the list of alerts for the resource.

Alert information for the resource appears in the popup window.

Locating alerts that affect the health score for a resource

This procedure shows how to locate an alert that affects the health score of a resource.

Different types of alerts can contribute to the health score of a resource, but a resource with an abnormal health score might not have triggered the alert. For example, the alert might be triggered by a parent resource. To locate an alert that affects the health score of a resource:

Procedure

1. Log into the vRealize Operations Manager user interface.

2. View the RESOURCE DETAIL window for a resource that shows an abnormal health score.

Events that contributed to the resource health score appear in the ROOT CAUSE RANKING pane.

3. Click an event to view the event details and investigate the underlying cause.

List of alerts and notifications

EMC Storage Analytics generates the listed events when the resources are queried.

This section provides the following information:

- ScaleIO alerts on page 176
- VMAX alerts on page 179
- VNX Block alerts on page 180
- VNX Block notifications on page 184
- VNX File alerts on page 186
- VNX File notifications on page 188
- VNXe alerts on page 192
- VPLEX alerts on page 194
- XtremIO alerts on page 197
- RecoverPoint alerts on page 198


ScaleIO alerts

Table 122 System alerts

Metric               Badge   Severity   Condition
Used Capacity        Risk    Critical   > 95
                             Warning    > 85
Thick Used Capacity  Risk    Critical   > 95
                             Warning    > 85
Thin Used Capacity   Risk    Critical   > 95
                             Warning    > 85
Snap Used Capacity   Risk    Critical   > 95
                             Warning    > 85
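The capacity rows above all share one banding: Critical above 95% used, Warning above 85%. That logic can be sketched as a small function (an illustration of the thresholds, not ESA code):

```python
def capacity_severity(used_pct):
    """Badge severity for the ScaleIO used-capacity alerts in the
    table above: Critical above 95%, Warning above 85%, else no alert.
    Illustrative only; not part of EMC Storage Analytics."""
    if used_pct > 95:
        return "Critical"
    if used_pct > 85:
        return "Warning"
    return None

print(capacity_severity(96))  # Critical
print(capacity_severity(90))  # Warning
```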

Table 123 Protection Domain alerts

Metric               Badge    Severity   Condition
Status               Health   Critical   No Active
Used Capacity        Risk     Critical   > 95
                              Warning    > 85
Thick Used Capacity  Risk     Critical   > 95
                              Warning    > 85
Thin Used Capacity   Risk     Critical   > 95
                              Warning    > 85
Snap Used Capacity   Risk     Critical   > 95
                              Warning    > 85

Table 124 Device/Disk alerts

Metric                    Badge    Severity   Condition
Status                    Health   Critical   Error
                                   Info       Remove, Pending
Used Capacity             Risk     Critical   > 95
                                   Warning    > 85
Spare Capacity Allocated  Risk     Critical   > 95
                                   Warning    > 85
Thick Used Capacity       Risk     Critical   > 95
                                   Warning    > 85
Thin Used Capacity        Risk     Critical   > 95
                                   Warning    > 85
Protected Capacity        Risk     Critical   > 95
                                   Warning    > 85


Table 125 SDS alerts

Metric                          Badge    Severity   Condition
Status                          Health   Critical   Disconnected
Used Capacity                   Risk     Critical   > 95
                                         Warning    > 85
Thick Used Capacity             Risk     Critical   > 95
                                         Warning    > 85
Thin Used Capacity              Risk     Critical   > 95
                                         Warning    > 85
Protected Capacity              Risk     Critical   > 95
(Not available from REST API)            Warning    > 85
Snap Used Capacity              Risk     Critical   > 95
                                         Warning    > 85

Table 126 Storage Pool alerts

Metric                          Badge    Severity   Condition
Status                          Health   Critical   Degraded capacity
(Not available from REST API)            Warning    Unreachable capacity
                                         Warning    Unavailable unused capacity
                                         Warning    Extremely unbalanced
                                         Warning    Unbalanced
Used Capacity                   Risk     Critical   > 95
                                         Warning    > 85
Thick Used Capacity             Risk     Critical   > 95
                                         Warning    > 85
Thin Used Capacity              Risk     Critical   > 95
                                         Warning    > 85
Protected Capacity              Risk     Critical   > 95
                                         Warning    > 85
Snap Used Capacity              Risk     Critical   > 95
                                         Warning    > 85

Table 127 SDC alerts

Metric Badge Severity Condition

State Health Critical Disconnected


Table 128 MDM Cluster alerts

Metric Badge Severity Condition

State Health Critical Not clustered

Clustered degraded

Clustered tie breaker down

Clustered degraded tie breaker down

VMAX alerts

The Wait Cycle is 1 for all these VMAX alerts.

Table 129 VMAX alerts

Resource kind    Symptom                                            Badge  Severity   Condition                      Message
Device           VmaxDevice_percent_full98.0                        Risk   Critical   > 98                           Device available capacity is low.
                 VmaxDevice_percent_full95.0                        Risk   Immediate  > 95                           Device available capacity is low.
SRP (VMAX3       VmaxSRPStoragePool_percent_full98.0                Risk   Critical   > 98                           Storage resource pool available capacity is low.
Storage          VmaxSRPStoragePool_percent_full95.0                Risk   Immediate  > 95                           Storage resource pool available capacity is low.
Resource Pool)
Thin Pool        VmaxThinPool_percent_full98.0                      Risk   Critical   > 98                           Thin pool available capacity is low.
(VMAX)           VmaxThinPool_percent_full95.0                      Risk   Immediate  > 95                           Thin pool available capacity is low.
Front-End Port   VmaxPort_Operational_Status_Predictive_failure     Risk   Warning    Contains predictive failure    Front-end port is having a problem.
                 VmaxPort_Operational_Status_Other                  Risk   Info       Contains Other                 Front-end port is having a problem.
                 VmaxPort_Operational_Status_Stressed               Risk   Warning    Contains Stressed              Front-end port is having a problem.
                 VmaxPort_Operational_Status_Degraded               Risk   Warning    Contains Degraded              Front-end port is having a problem.
                 VmaxPort_Operational_Status_Error                  Risk   Immediate  Contains Error                 Front-end port is having a problem.
                 VmaxPort_Operational_Status_Nonrecoverable_error   Risk   Immediate  Contains Nonrecoverable error  Front-end port is having a problem.
                 VmaxPort_Operational_Status_No_contact             Risk   Warning    Contains No contact            Front-end port is having a problem.
                 VmaxPort_Operational_Status_Stopping               Risk   Info       Contains Stopping              Front-end port is having a problem.
                 VmaxPort_Operational_Status_Stopped                Risk   Info       Contains Stopped               Front-end port is having a problem.
                 VmaxPort_Operational_Status_Lost_communication     Risk   Immediate  Contains Lost communications   Front-end port is having a problem.
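The VMAX capacity symptoms above follow a two-level banding, with the symptom name encoding the threshold. A small sketch of that mapping (for illustration; the symptom names come from the table, the function itself is not ESA code):

```python
def vmax_percent_full_symptom(kind, pct):
    """Return the (symptom, severity) pair that fires for a VMAX
    capacity metric per the thresholds above. `kind` is a prefix
    such as 'VmaxDevice' or 'VmaxThinPool'. Returns None at or
    below the 95% level. Illustrative sketch only."""
    if pct > 98:
        return (kind + "_percent_full98.0", "Critical")
    if pct > 95:
        return (kind + "_percent_full95.0", "Immediate")
    return None

print(vmax_percent_full_symptom("VmaxDevice", 96.5))
# ('VmaxDevice_percent_full95.0', 'Immediate')
```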

VNX Block alerts

Table 130 VNX Block alerts

Resource kind
Storage Pool
Metric
Full (%)
Badge Severity Condition

Risk

Efficiency

Subscribed (%) Risk

State Health

Critical

Immediate

Info

Info

Critical

Warning

Critical

Warning

Critical

Message summary

> 90 Capacity used in this storage pool is very high.

> 85

< 5

>100

Offline

Faulted

Capacity used in this storage pool is very high.

Capacity used in this storage pool is low.

This storage pool is oversubscribed.

This storage pool is offline.

This storage pool is faulted.

Expansion Failed

Verification Failed

This storage pool's expansion failed.

Cancel Expansion Failed The cancellation of this storage pool's expansion failed.

The verification of this storage pool failed.

Initialize Failed

Destroy Failed

The initialization of this storage pool failed.

The destruction of this storage pool failed.

Offline and Recovering This storage pool is offline and recovering.

Offline and Recovery

Failed

Offline and Verifying

The recovery of this offline storage pool failed.

This storage pool is offline and verifying.

Offline and Verification

Failed

This storage pool is offline and verification failed.

Faulted and Expanding This storage pool is faulted and expanding.

Faulted and Expansion

Failed

This expansion of this storage pool failed.


Resource kind

FAST

Cache

Metric

State

Table 130 VNX Block alerts (continued)

Badge

Health

Tier Subscribed (%) Risk

Storage

Processor

Busy (%) Risk

Read Cache Hit

Ratio (%)

Dirty Cache

Pages (%)

Efficiency

Efficiency

RAID

Group

Write Cache Hit

Ratio (%)

Efficiency

N/A

Full (%)

State

Health

Risk

Efficiency

Health

Severity

Warning

Info

Info

Critical

Info

Warning

Info

Critical

Info

Warning

Info

Warning

Critical

Critical

Info

Info

Info

Critical

Info

Condition Message summary

Faulted and Cancelling

Expansion

Faulted and Cancel

Expansion Failed

Faulted and Verifying

Faulted and Verification

Failed

Unknown

Enabling

Enabled_Degraded

Disabling

Disabled

Disabled_Faulted

Unknown

> 95

> 90

> 80

< 50

> 95

< 10

> 20

< 25

N/A

> 90

< 5

Invalid

Explicit_Remove

This storage pool is faulted and is cancelling an expansion.

This storage pool is faulted and the cancellation of the expansion failed.

This storage pool is faulted and verifying.

This storage pool is faulted and verification failed.

The status of this storage pool is unknown.

FAST Cache is enabling.

The status of this storage pool is unknown.

FAST Cache is disabling.

FAST Cache is created but disabled.

FAST Cache is faulted.

The state of FAST Cache is unknown.

Consumed capacity (%) of this tier is high.

Storage processor utilization is high.

Storage processor utilization is high.

Storage processor read cache hit ratio is low.

Storage processor dirty cache pages is high.

Storage processor dirty cache pages is high.

Storage processor write cache hit ratio is low.

Storage processor write cache hit ratio is low.

Storage processor could not be reached by CLI.

RAID group capacity used is high.

RAID group capacity used is low.

The status of this RAID group is invalid.

This RAID group is explicit remove.


Resource kind

Disk

Metric

Busy (%)

Table 130 VNX Block alerts (continued)

Badge Severity Condition

Risk

Hard Read Error

(count)

Health

Hard Write Error

(count)

Health

Response Time

(ms)

Risk

Info

Info

Critical

Info

Critical

Critical

Immediate

Warning

Info

Critical

Immediate

Warning

Critical

Immediate

Warning

Critical

State Health

Immediate

Warning

Critical

Info

Expanding

Defragmenting

Halted

Busy

> 75

> 10

> 5

> 0

Unknown

> 95

> 90

> 85

> 75

And

Total IO/s > 1

> 75

And

Total IO/s > 1

75 >= x > 50

And

Total IO/s > 1

50 >= x > 25

And

Total IO/s > 1

Removed

Faulted

Unsupported

Unknown

Powering up

Unbound


Message summary

This RAID group is expanding.

This RAID group is defragmenting.

This RAID group is halted.

This RAID group is busy.

This RAID group is unknown.

Disk utilization is high.

Disk utilization is high.

Disk has read error.

Disk has read error.

Disk has read error.

Disk has write error.

Disk has write error.

Disk has write error.

Disk average response time (ms) is in range.

N/A

Disk is not idle.

Disk average response time (ms) is in range.

N/A

Disk is not idle.

Disk average response time (ms) is in range.

N/A

Disk is not idle.

This disk is removed.

The disk is faulted.

The disk is unsupported.

The disk is unknown.

The disk is powering up.

The disk is unbound.


Resource kind

LUN

Metric

N/A

Service Time

(ms)

Risk

State

Table 130 VNX Block alerts (continued)

Badge Severity Condition

Latency (ms) Risk

Health

Warning

Info

Info

Warning

Info

Warning

Critical

Critical

Immediate

Warning

Critical

Immediate

Warning

Critical

Info

Warning

Rebuilding

Binding

Formatting

Equalizing

Unformatted

Probation

Copying to Hot Spare

N/A

> 25

And

Total IO/s > 1

> 25

And

Total IO/s > 1

> 25

And

Total IO/s > 1

75 >= x > 50

And

Total IO/s > 1

75 >= x > 50

And

Total IO/s > 1

50 >= x > 25

And

Total IO/s > 1

Device Map Corrupt

Faulted

Unsupported

Unknown

Binding

Degraded

Message summary

The disk is rebuilding.

The disk is binding.

The disk is formatting.

The disk is equalizing.

The disk is unformatted.

The disk is in probation.

The disk is copying to hot spare.

Disk failure occurred.

LUN service time (ms) is in range.

N/A

LUN is not idle.

LUN service time (ms) is in range.

N/A

LUN is not idle.

LUN service time (ms) is in range.

N/A

LUN is not idle.

LUN total latency (ms) is in range.

N/A

LUN is not idle.

LUN total latency (ms) is in range.

N/A

LUN is not idle.

LUN total latency (ms) is in range.

N/A

LUN is not idle.

This LUN's device map is corrupt.

This LUN is faulted.

This LUN is unsupported.

This LUN is unknown.

This LUN is binding.

This LUN is degraded.


Resource kind

Port

Metric

N/A

Fan and

Power

Supply

N/A

Array N/A

Table 130 VNX Block alerts (continued)

Badge

Health

Health

Health

Severity

Info

Info

Critical

Info

Warning

Info

Critical

Warning

Critical

Warning

Condition

N/A

N/A

N/A

N/A

Transitioning

Queued

Offline

N/A

N/A

N/A

N/A

N/A

N/A

N/A

Message summary

This LUN is transitioning.

This LUN is queued.

This LUN is offline.

Link down occurred.

The port is not in use.

Link down occurred.

The port is not in use.

Device (FAN or Power Supply) is having a problem. Device state is "empty."

Device (FAN or Power Supply) is having a problem. Device state is "unknown."

Device (FAN or Power Supply) is having a problem. Device state is "removed."

Device (FAN or Power Supply) is having a problem. Device state is "faulted."

Device (FAN or Power Supply) is having a problem. Device state is "missing."

Statistics logging is disabled.

Performance data won't be available until it is enabled.
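The disk Response Time rows in the table above use a compound condition: the time band sets the severity, but only when the disk is not idle (Total IO/s > 1). As an illustration of that guard (not ESA code; the same banding appears in the LUN service-time and latency rows):

```python
def disk_response_severity(response_ms, total_iops):
    """Severity of the VNX disk Response Time alert per the table
    above. Thresholds apply only when the disk is not idle
    (Total IO/s > 1). Illustrative sketch only."""
    if total_iops <= 1:
        return None  # idle disks do not alert
    if response_ms > 75:
        return "Critical"
    if response_ms > 50:
        return "Immediate"
    if response_ms > 25:
        return "Warning"
    return None

print(disk_response_severity(80, 10))   # Critical
print(disk_response_severity(80, 0.5))  # None
```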

VNX Block notifications

Table 131 VNX Block notifications

Category
Failures
Background Event
LUN

Resource kind

Disk

SP Front-end Port

Disk

Message

Disk failure occurred.

Link down occurred.

Disk rebuilding started.

Disk rebuilding completed.

Disk zeroing started. Note: This alert is not available for 1st generation models.

Disk zeroing completed. Note: This alert is not available for 1st generation models.

LUN migration queued.

LUN migration completed.

LUN migration halted.


Category

Configuration


FAST Cache

Storage Pool

LUN

Table 131 VNX Block notifications (continued)

Resource kind

EMC Adapter Instance

Storage Pool

Storage Processor

LUN

EMC Adapter Instance

LUN

Storage Pool

Message

LUN migration started.

Fast VP relocation resumed. Note: This alert is not available for 1st generation models.

Fast VP relocation paused. Note: This alert is not available for 1st generation models.

Fast VP relocation started.

Fast VP relocation stopped.

Fast VP relocation completed.

SP boot up.

SP is down. Note: This alert is not available for 1st generation models.

FAST Cache started.

Storage Pool background initialization started.

Storage Pool background initialization completed.

LUN creation started.

LUN creation completed.

Snapshot <snapshot name> creation completed.

SP Write Cache was disabled.

SP Write Cache was enabled. Note: This alert is not available for 1st generation models.

Non-Disruptive upgrading started.

Non-Disruptive upgrading completed.

Deduplication on LUN was disabled. Note: This alert is not available for 1st generation models.

Deduplication on LUN was enabled. Note: This alert is not available for 1st generation models.

Deduplication on Storage Pool paused. Note: This alert is not available for 1st generation models.

Deduplication on Storage Pool resumed. Note: This alert is not available for

1st generation models.

Compression on LUN started.

Compression on LUN completed.

Compression on LUN was turned off.

List of alerts and notifications

185

VNX File alerts

Table 132 VNX File alerts

File Pool
- Full (%): Risk badge. Critical > 90, Immediate > 85 (capacity consumed of the file pool is high); Efficiency badge, Info < 5 (capacity consumed of the file pool is low).
- Disk Volume Request Comp. Time (µs): Risk badge. Critical > 25,000, Immediate > 15,000, Warning > 10,000 (dVol's average request completion time is high).

File System
- Full (%): Risk badge. Critical > 90, Immediate > 85 (capacity consumed of this file system is high).

Data Mover
- Service Comp. Time (µs): Risk badge. Critical > 25,000, Immediate > 15,000, Warning > 10,000.
- NFS v2 Read Response (ms): Risk badge. Critical > 75, Immediate > 50, Warning > 25 (NFS v2 average read response time is high).
- NFS v2 Write Response (ms): Risk badge. Critical > 75, Immediate > 50, Warning > 25 (NFS v2 average write response time is high).
- NFS v3 Read Response (ms): Risk badge. Critical > 75, Immediate > 50, Warning > 25 (NFS v3 average read response time is high).
- NFS v3 Write Response (ms): Risk badge. Critical > 75, Immediate > 50, Warning > 25 (NFS v3 average write response time is high).
- NFS v4 Read Response (ms): Risk badge. Critical > 75, Immediate > 50, Warning > 25 (NFS v4 average read response time is high).
- NFS v4 Write Response (ms): Risk badge. Critical > 75, Immediate > 50, Warning > 25 (NFS v4 average write response time is high).
- CIFS SMBv1 Read Response (ms): Risk badge. Critical > 75, Immediate > 50, Warning > 25 (CIFS SMB v1 average read response time is high).
- CIFS SMBv1 Write Response (ms): Risk badge. Critical > 75, Immediate > 50, Warning > 25 (CIFS SMB v1 average write response time is high).
- CIFS SMBv2 Read Response (ms): Risk badge. Critical > 75, Immediate > 50, Warning > 25 (CIFS SMB v2 average read response time is high).
- CIFS SMBv2 Write Response (ms): Risk badge. Critical > 75, Immediate > 50, Warning > 25 (CIFS SMB v2 average write response time is high).
- State: Health badge; severity is Error, Warning, or Info depending on the state. Conditions: Offline, Disabled, Out_of_service, Boot_level=0, Fault/Panic, Online, Slot_empty, Unknown, Hardware misconfigured, Hardware error, Firmware error. Messages: Data Mover is powered off; Data Mover will not reboot; Data Mover cannot provide service (for example, taken over by its standby); Data Mover is powered up; Data Mover is booted to BIOS; Data Mover is booted to DOS; DART is loaded and initializing; DART is initialized; Data Mover is controlled by control station; Data Mover has faulted; Data Mover is inserted and has power, but is not active or ready; There is no Data Mover in the slot; Cannot determine the Data Mover state; Data Mover hardware is misconfigured; Data Mover hardware has an error; Data Mover firmware has an error; Data Mover firmware is updating.

VNX File notifications

Table 133 VNX File notifications

Categories: Control Station Events, Dart Events. Resource kinds: Array, Data Mover, File system, File pool, EMC Adapter instance.

Messages:
- The NAS Command Service daemon is shutting down abnormally. (MessageID:<ID>)
- The NAS Command Service daemon is shut down completely.
- The NAS Command Service daemon is forced to shut down. (MessageID:<ID>)
- Warm reboot is about to start on this Data Mover.
- Unable to warm reboot this Data Mover. Cold reboot has been performed.
- AC power has been lost. VNX storage system will be powered down in <value> seconds. (MessageID:<ID>) (timeout_wait)
- AC power is restored and back on.
- Automatic extension started.
- Automatic extension failed. Reason: Internal error. COMMAND:<value>, ERROR:<value>, STAMP:<value> (MessageID:<ID>) (COMMAND, DM_EVENT_STAMP, ERROR)
- Automatic extension failed. Reason: Filesystem has reached the maximum size. STAMP:<value> (MessageID:<ID>) (DM_EVENT_STAMP)
- Automatic extension failed. Reason: Percentage used could not be determined. STAMP:<value> (MessageID:<ID>) (DM_EVENT_STAMP)
- Automatic extension failed. Reason: Filesystem size could not be determined. STAMP:<value> (MessageID:<ID>) (DM_EVENT_STAMP)
- Automatic extension failed. Reason: Available space could not be determined. STAMP:<value> (MessageID:<ID>) (DM_EVENT_STAMP)
- Automatic extension failed. Reason: Filesystem is not RW mounted. STAMP:<value> (MessageID:<ID>) (DM_EVENT_STAMP)
- Automatic extension failed. Reason: Insufficient available space. STAMP:<value> (MessageID:<ID>) (DM_EVENT_STAMP)
- Automatic extension failed. Reason: Available pool size could not be determined. STAMP:<value> (MessageID:<ID>) (DM_EVENT_STAMP)
- Automatic extension failed. Reason: Slice flag could not be determined. STAMP:<value> (MessageID:<ID>) (DM_EVENT_STAMP)
- Automatic extension failed. Reason: Available space is not sufficient for minimum size extension. STAMP:<value> (MessageID:<ID>) (DM_EVENT_STAMP)
- Automatic extension failed. Reason: Maximum filesystem size could not be determined. STAMP:<value> (MessageID:<ID>) (DM_EVENT_STAMP)
- Automatic extension failed. Reason: High Water Mark (HWM) could not be determined. STAMP:<value> (MessageID:<ID>) (DM_EVENT_STAMP)
- Automatic extension failed. Reason: Volume ID could not be determined. STAMP:<value> (MessageID:<ID>) (DM_EVENT_STAMP)
- Automatic extension failed. Reason: Storage system ID could not be determined. STAMP:<value> (MessageID:<ID>) (DM_EVENT_STAMP)
- Automatic extension failed. Reason: Filesystem is spread across multiple storage systems. STAMP:<value> (MessageID:<ID>) (DM_EVENT_STAMP)
- Automatic extension failed. STAMP:<value> (MessageID:<ID>) (DM_EVENT_STAMP)
- Forced automatic extension started.
- Automatic extension ended.
- Automatic extension ended. The filesystem is now at its maximum size limit.
- Forced automatic extension is cancelled. The requested extension size is less than the high water mark (HWM) set for the filesystem. The filesystem's available storage pool size will be used as the extension size instead of the requested size.
- Automatic extension completed.
- Forced automatic extension completed. The file system is at the maximum size.
- The JServer is not able to start. VNX File System statistics will be impacted. (MessageID:<ID>)
- Filesystem is using <value> of its <value> <value> capacity. (condition, cap_setting, prop_name)
- Filesystem has <value> of its <value> <value> capacity available. (condition, cap_setting, prop_name)
- Storage pool is using <value> of its <value> <value> capacity. (condition, cap_setting)
- Storage pool has <value> of its <value> capacity available. (condition, cap_setting)
- Filesystem is using <value> of the maximum allowable file system size (16 TB). (condition)
- Filesystem has <value> of the maximum allowable file system size (16 TB). (condition)
- Filesystem is using <value> of the maximum storage pool capacity available. (condition)
- Filesystem has <value> of the maximum storage pool capacity available. (condition)
- Filesystem will fill its <value> <value> capacity on <value>. (cap_setting, prop_name, sdate)
- Storage pool will fill its <value> capacity on <value>. (cap_setting, sdate)
- Filesystem will reach the 16 TB file system size limit on <value>. (sdate)
- Filesystem will fill its storage pool's maximum capacity on <value>. (sdate)
- Data Mover is using <value> of its <value> capacity. (stat_value, stat_name)
- Storage usage has crossed threshold value <value> and has reached <value>. (threshold, pool_usage_percentage)
- Filesystem has filled its <value> <value> capacity. (cap_setting, prop_name)
- Storage pool has filled its <value> capacity. (cap_setting)
- Filesystem has almost filled its <value> <value> capacity. (cap_setting, prop_name)
- Storage pool has almost filled its <value> capacity. (cap_setting)
- Filesystem is using <value> of its current inode capacity. (condition)
- The SCSI HBA <value> is operating normally. (hbano)
- The SCSI HBA <value> has failed. (MessageID:<ID>) (hbano)
- The SCSI HBA <value> is inaccessible. (MessageID:<ID>) (hbano)
- Filesystem has encountered a critical fault and is being unmounted internally. (MessageID:<ID>)
- Filesystem has encountered corrupted metadata and filesystem operation is being fenced. (MessageID:<ID>)
- Filesystem usage rate <value>% crossed the high water mark threshold <value>%. Its size will be automatically extended. (currentUsage, usageHWM)
- Filesystem is full.
- Power Supply A in Data Mover Enclosure was installed.
- Power Supply A in Data Mover Enclosure was removed.
- Power Supply A in Data Mover Enclosure is OK.
- Power Supply A in Data Mover Enclosure failed: <value> (MessageID:<ID>) (details)
- Power Supply B in Data Mover Enclosure was installed.
- Power Supply B in Data Mover Enclosure was removed.
- Power Supply B in Data Mover Enclosure is OK.
- Power Supply B in Data Mover Enclosure failed: <value> (MessageID:<ID>) (details)
- One or more fans in Fan Module 1 in Data Mover Enclosure failed. (MessageID:<ID>)
- One or more fans in Fan Module 2 in Data Mover Enclosure failed. (MessageID:<ID>)
- One or more fans in Fan Module 3 in Data Mover Enclosure failed. (MessageID:<ID>)
- Multiple fans in Data Mover Enclosure failed. (MessageID:<ID>)
- All Fan Modules in Data Mover Enclosure are in OK status.
- Power Supply A in Data Mover Enclosure is going to shut down due to overheating. (MessageID:<ID>)
- Power Supply B in Data Mover Enclosure is going to shut down due to overheating. (MessageID:<ID>)
- Both Power Supplies in Data Mover Enclosure are going to shut down due to overheating. (MessageID:<ID>)
- DNS server <value> is not responding. Reason: <value> (MessageID:<ID>) (serverAddr, reason)
- Network device <value> is down. (MessageID:<ID>) (deviceName)
- Automatic fsck is started via Data Mover <value>. Filesystem may be corrupted. (MessageID:<ID>) (DATA_MOVER_NAME)
- Manual fsck is started via Data Mover <value>. (DATA_MOVER_NAME)
- Automatic fsck succeeded via Data Mover <value>. (DATA_MOVER_NAME)
- Manual fsck succeeded via Data Mover <value>. (DATA_MOVER_NAME)
- Automatic fsck failed via Data Mover <value>. (DATA_MOVER_NAME)
- Manual fsck failed via Data Mover <value>. (DATA_MOVER_NAME)

VNXe alerts

Table 134 VNXe alerts

Resource kinds and metrics: Disk (Total Latency (ms), State); Tier (Full (%)); Storage Pool (Full (%), State); SP (Storage Processor) (State); LUN, File System, and NAS Server (State); CIFS SMBv1 and SMBv2 Read and Write Response (ms); NFS v3 Read and Write Response (ms). Conditions listed for the latency and capacity alerts are > 95, > 90, and > 85 (high) and < 5 (low).

Alerts:
- Disk total latency (ms) is high. Risk badge; severities Critical, Immediate, Warning.
- This disk is reporting a problem. Health badge; condition includes "critical."
- Consumed capacity (%) of this tier is high. Risk badge.
- Consumed capacity (%) of this storage pool is high. Risk badge.
- Consumed capacity (%) of this storage pool is low. Efficiency badge; Info severity.
- This storage pool is reporting a problem. Health badge; condition includes "critical."
- CIFS SMBv1 average read response time (ms) is high. Risk badge; Critical > 75, Immediate > 50, Warning > 25.
- CIFS SMBv1 average write response time (ms) is high. Risk badge; Critical > 75, Immediate > 50, Warning > 25.
- CIFS SMBv2 average read response time (ms) is high. Risk badge; Critical > 75, Immediate > 50, Warning > 25.
- CIFS SMBv2 average write response time (ms) is high. Risk badge; Critical > 75, Immediate > 50, Warning > 25.
- NFS v3 average read response time (ms) is high. Risk badge; Critical > 75, Immediate > 50, Warning > 25.
- NFS v3 average write response time (ms) is high. Risk badge; Critical > 75, Immediate > 50, Warning > 25.
- This storage processor is reporting a problem. Health badge; condition includes "critical."
- This LUN is reporting a problem. Health badge; condition includes "critical."
- This file system is reporting a problem. Health badge; condition includes "critical."
- This NAS Server is reporting a problem. Health badge; condition includes "critical."

VPLEX alerts

Table 135 VPLEX alerts

All VPLEX alerts use the Health badge. Severities are Critical, Immediate, and Warning, except for Storage View (Critical and Warning).

- Cluster: VPLEX cluster is having a problem. Recommendation: Check the health state of your VPLEX cluster. Ignore this alert if the health state is expected. Conditions: VPLEX cluster health state is "major-failure," "critical-failure," "unknown," "minor-failure," or "degraded."
- FC Port: FC port is having a problem. Recommendation: Check the operational status of your FC port. Ignore this alert if the operational status is expected. Conditions: FC port operational status is "error," "lost-communication," "unknown," "degraded," or "stopped."
- Ethernet Port: Ethernet port is having a problem. Recommendation: Check the operational status of your Ethernet port. Ignore this alert if the operational status is expected. Conditions: Ethernet port operational status is "error," "lost-communication," "unknown," "degraded," or "stopped."
- Local Device: Local device is having a problem. Recommendation: Check the health state of your local device. Ignore this alert if the health state is expected. Conditions: local device health state is "major-failure," "critical-failure," "unknown," "minor-failure," or "degraded."
- Storage View: Storage view is having a problem. Recommendation: Check the operational status of your storage view. Ignore this alert if the operational status is expected. Conditions: storage view operational status is "error," "degraded," or "stopped."
- Storage Volume: Storage volume is having a problem. Recommendation: Check the health state of your storage volume. Ignore this alert if the health state is expected. Conditions: storage volume health state is "critical-failure," "unknown," "non-recoverable-error," or "degraded."
- Virtual Volume: Virtual volume is having a problem. Recommendation: Check the health state of your virtual volume. Ignore this alert if the health state is expected. Conditions: virtual volume health state is "critical-failure," "major-failure," "unknown," "minor-failure," or "degraded."
- VPLEX Metro: VPLEX Metro is having a problem. Recommendation: Check the health state of your VPLEX Metro. Ignore this alert if the health state is expected. Conditions: VPLEX Metro health state is "critical-failure," "major-failure," "unknown," "minor-failure," or "degraded."
- Distributed Device: Distributed device is having a problem. Recommendation: Check the health state of your distributed device. Ignore this alert if the health state is expected. Conditions: distributed device health state is "critical-failure," "major-failure," "unknown," "minor-failure," "non-recoverable-error," or "degraded."
- Engine: Engine is having a problem. Recommendation: Check the operational status of your engine. Ignore this alert if the health state is expected. Conditions: engine operational status is "error," "lost-communication," "unknown," or "degraded."
- Director: Director is having a problem. Recommendation: Check the operational status of your director. Ignore this alert if the health state is expected. Conditions: director operational status is "critical-failure," "major-failure," "unknown," "minor-failure," or "degraded."
- Extent: Extent is having a problem. Recommendation: Check the health state of your extent. Ignore this alert if the health state is expected. Conditions: extent health state is "critical-failure," "unknown," "non-recoverable-error," or "degraded."

XtremIO alerts

The Wait Cycle is 1 for all these XtremIO alerts.

Table 136 XtremIO alerts based on external events

- Cluster: XtremIO cluster is having a problem. Health badge; severities Critical and Warning. Recommendation: Check the state of your XtremIO cluster. Ignore this alert if the state is expected. Conditions: XtremIO cluster health state is "failed," "degraded," or "partial fault."
- Storage Controller: Storage controller is having a problem. Health badge; severities Critical and Warning. Recommendation: Check the state of your storage controller. Ignore this alert if the state is expected. Conditions: storage controller health state is "failed," "degraded," or "partial fault."

Table 137 XtremIO alerts based on metrics

All of these alerts have Warning severity.

Cluster:
- Consumed Capacity Ratio (%) is high. Health badge. Condition: Consumed Capacity Ratio (%) >= 60. Recommendation: 1. Free capacity from cluster. 2. Extend capacity of cluster.
- Subscription Ratio is high. Health badge. Condition: Subscription Ratio >= 5. Recommendation: 1. Unsubscribe capacity from cluster. 2. Extend capacity of cluster.
- Physical capacity used in the cluster is high. Risk badge. Condition: consumed capacity >= 90%. Recommendation: Migrate the volume to another cluster.
- Physical capacity used in the cluster is low. Efficiency badge. Condition: consumed capacity <= 5%. Recommendation: Cluster is not fully utilized; possible waste.

SSD:
- Endurance Remaining (%) is low. Health badge. Condition: Endurance Remaining (%) <= 10. Recommendation: Replace SSD.

Volume:
- Average Small Reads (IO/s) is out of normal range. Health badge. Condition: Average Small Read Ratio >= 20. Recommendation: Check the status of the volume.
- Average Small Writes (IO/s) is out of normal range. Health badge. Condition: Average Small Write Ratio >= 20. Recommendation: Check the status of the volume.
- Average Unaligned Reads (IO/s) is out of normal range. Health badge. Condition: Average Unaligned Read Ratio >= 20. Recommendation: Check the status of the volume.
- Average Unaligned Writes (IO/s) is out of normal range. Health badge. Condition: Average Unaligned Write Ratio >= 20. Recommendation: Check the status of the volume.
- Capacity used in the volume is high. Risk badge. Condition: consumed capacity >= 90%. Recommendation: Extend the capacity of the volume.
- Capacity used in the volume is low. Efficiency badge. Condition: consumed capacity <= 5%. Recommendation: Volume is not fully utilized; possible waste.

Snapshot:
- Average Small Reads (IO/s) is out of normal range. Health badge. Condition: Average Small Read Ratio >= 20. Recommendation: Check the status of the snapshot.
- Average Small Writes (IO/s) is out of normal range. Health badge. Condition: Average Small Write Ratio >= 20. Recommendation: Check the status of the snapshot.
- Average Unaligned Reads (IO/s) is out of normal range. Health badge. Condition: Average Unaligned Read Ratio >= 20. Recommendation: Check the status of the snapshot.
- Average Unaligned Writes (IO/s) is out of normal range. Health badge. Condition: Average Unaligned Write Ratio >= 20. Recommendation: Check the status of the snapshot.

RecoverPoint alerts

Cancel cycle and Wait cycle for these alerts is 1.

Table 138 RecoverPoint for Virtual Machines alerts based on message event symptoms

- Consistency group: Problem with RecoverPoint consistency group. Health badge. Critical: RecoverPoint consistency group state is unknown. Warning: RecoverPoint consistency group is disabled. Recommendation: Check the status of the consistency group.
- Copy: Problem with RecoverPoint copy. Health badge. Critical: RecoverPoint copy state is unknown. Warning: RecoverPoint copy state is disabled. Recommendation: Check the status of the copy.
- vRPA: Problem with vRPA. Health badge. Critical: vRPA status is down. Warning: vRPA status is removed for maintenance. Immediate: vRPA status is unknown. Recommendation: Check the status of the vRPA.

Table 139 RecoverPoint for Virtual Machines alerts based on metrics

Severities for these alerts are Warning and Information.

- vRPA: Problem with vRPA. Metric and criteria: vRPA | CPU Utilization (%) > 95. Health badge. Recommendation: Check the status of the vRPA.
- Consistency group: Consistency group protection window limit has been exceeded. Metric and criteria: consistency group protection window ratio < 1. Risk badge. Recommendation: Protection window limit has been exceeded.
- Consistency group: Lag limit has been exceeded. Metric and criteria: Link | Lag (%) > 95. Risk badge. Recommendation: Lag limit has been exceeded.
- RecoverPoint for Virtual Machines system: Number of splitters is reaching the upper limit. Metric and criteria: RecoverPoint System | number of splitters > 30. Recommendation: Consider adding another RecoverPoint for Virtual Machines system.
- Cluster: Number of consistency groups per cluster is reaching the upper limit. Metric and criteria: RecoverPoint cluster | number of consistency groups > 122. Recommendation: Consider adding another RecoverPoint cluster.
- Cluster: Number of vRPAs per cluster is reaching the upper limit. Metric and criteria: RecoverPoint cluster | number of vRPAs > 8. Recommendation: Consider adding another RecoverPoint cluster.
- Cluster: Number of protected virtual machines per cluster is reaching the upper limit. Metric and criteria: RecoverPoint cluster | number of protected virtual machines > 285. Recommendation: Consider adding another RecoverPoint cluster.
- Cluster: Number of protected volumes per cluster is reaching the upper limit. Metric and criteria: RecoverPoint cluster | number of protected VMDKs > 1946. Recommendation: The maximum number of protected volumes per vRPA cluster is 2K.
- Splitter: Number of attached volumes per splitter is reaching the upper limit. Metric and criteria: Splitter | number of volumes attached > 3890. Recommendation: The maximum number of attached volumes per splitter is 4K.

Troubleshooting

Launching Unisphere

EMC Storage Analytics provides metrics that enable you to assess the health of monitored resources. If the resource metrics indicate that you need to troubleshoot those resources, EMC Storage Analytics provides a way to launch Unisphere on the array.

The capability to launch Unisphere on the array is available for:
- VNX Block
- VNX File
- VNXe

To launch Unisphere on the array, select the resource and click the Link and Launch icon.

The Link and Launch icon is available on most widgets (hovering over an icon displays a tooltip that describes its function).

Note

This feature requires a fresh installation of the EMC Adapter (not an upgrade). You must select the object to launch Unisphere. Unisphere launch capability does not exist for VMAX or VPLEX objects.

Installation logs

This topic lists the log files to which errors in the EMC Storage Analytics installation are written.

Errors in the EMC Storage Analytics installation are written to log files in the following directory in vRealize Operations Manager:

/var/log/emc

Log files in this directory follow this naming convention, for example: install-2012-12-11-10:54:19.log.

Use a text editor to view the installation log files.
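Because the timestamp is embedded in the filename, plain sort order finds the most recent log. The following sketch demonstrates this; a temporary directory stands in for /var/log/emc so the commands are safe to try anywhere.

```shell
# Demonstration of selecting the newest installation log by name.
# On the vRealize Operations Manager appliance, the directory is
# /var/log/emc; a temporary directory stands in for it here.
LOG_DIR=$(mktemp -d)
touch "$LOG_DIR/install-2012-12-10-09:00:00.log" \
      "$LOG_DIR/install-2012-12-11-10:54:19.log"
# The date and time in the filename sort lexicographically, so plain
# sort order puts the most recent log last.
newest=$(ls "$LOG_DIR"/install-*.log | sort | tail -n 1)
basename "$newest"
```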


Log Insight overview

This topic provides an overview of Log Insight and its use with EMC Storage Analytics.

VMware vRealize Log Insight provides log management for VMware environments. Log

Insight includes dashboards for visual display of log information. Content packs extend this capability by providing dashboard views, alerts, and saved queries.

For information on working with Log Insight, refer to the Log Insight documentation: http://pubs.vmware.com/log-insight-25/index.jsp.

Log Insight configuration

This topic describes important background information about the integration of Log Insight with EMC Storage Analytics.

You can send the EMC Storage Analytics logs stored on the vRealize Operations Manager virtual machine to the Log Insight instance to facilitate performance analysis and root cause analysis of problems.

The adapter logs in vRealize Operations Manager are stored in a subdirectory of the /storage/vcops/log/adapters/EmcAdapter directory. The directory name and the log file name are created by concatenating the adapter instance name with the adapter instance ID.

An example of the contents of EmcAdapter follows. Notice that the adapter name parsing changes dots and spaces into underscores. For example, the adapter instance named ESA3.0 Adapter VNX File is converted to ESA3_0_Adapter_VNX_File.

The adapter instance ID of 455633441 is concatenated to create the subdirectory name as well as the log file name.
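The renaming rule described above (dots and spaces become underscores, with the instance ID appended to both the directory and the file name) can be sketched as a small shell function; esa_log_path is an illustrative helper, not an EMC tool.

```shell
# Build the adapter log path from an instance name and ID, mimicking
# the renaming described above: tr maps '.' and ' ' to '_', and the
# instance ID is appended to the directory name and the file name.
esa_log_path() {
  name=$(printf '%s' "$1" | tr '. ' '__')
  printf '%s/%s-%s/%s-%s.log\n' \
    "/storage/vcops/log/adapters/EmcAdapter" "$name" "$2" "$name" "$2"
}

esa_log_path "ESA3.0 Adapter VNX File" 455633441
```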

-rw-r--r-- 1 admin admin   27812 Sep 26 10:37 ./ESA3_0_Adapter_VNX_File-455633441/ESA3_0_Adapter_VNX_File-455633441.log
-rw-r--r-- 1 admin admin 1057782 Sep 26 15:51 ./ESA3_0_VNX_Adapter-1624/ESA3_0_VNX_Adapter-1624.log
-rw-r--r-- 1 admin admin   40712 Sep 23 11:58 ./ESA3_0_VNX_Adapter-616398625/ESA3_0_VNX_Adapter-616398625.log
-rw-r--r-- 1 admin admin   40712 Sep 23 11:58 ./ESA3_0_VNX_Adapter-725881978/ESA3_0_VNX_Adapter-725881978.log
-rw-r--r-- 1 admin admin   31268 Sep 10 11:33 ./ESA_3_0_Adapter-1324885475/ESA_3_0_Adapter-1324885475.log
-rw-r--r-- 1 admin admin  193195 Sep 26 10:48 ./EmcAdapter.log
-rw-r--r-- 1 admin admin   25251 Sep 26 10:48 ./My_VNXe-1024590653/My_VNXe-1024590653.log
-rw-r--r-- 1 admin admin   25251 Sep 26 10:48 ./My_VNXe-1557931636/My_VNXe-1557931636.log
-rw-r--r-- 1 admin admin    4853 Sep 26 10:48 ./My_VNXe-1679/My_VNXe-1679.log

In the vRealize Operations Manager Solution Details view, the corresponding adapter instance names appear as follows:
- ESA3.0 Adapter VMAX
- My VNXe
- ESA3.0 VNX Adapter
- ESA3.0 Adapter VNX File

As seen in the example, multiple instances of each of the adapter types appear because EMC Storage Analytics creates a new directory and log file for the Test Connection part of discovery as well as for the analytics log file.


My_VNXe-1557931636 and My_VNXe-1024590653 are the Test Connection log locations, and My_VNXe-1679 is the analytics log file.

The Test Connection logs have a null name associated with the adapter ID, for example: id=adapterId[id='1557931636',name='null']'

The same entry type from the analytics log shows: id=adapterId[id='1679',name='My VNXe']'

You can forward any logs of interest to Log Insight, remembering that forwarding logs consumes bandwidth.

Sending logs to Log Insight

This topic lists the steps to set up syslog-ng to send EMC Storage Analytics logs to Log Insight.

Before you begin

Import the vRealize Operations Manager content pack into Log Insight. This context-aware content pack includes content for supported EMC Adapter instances.

VMware uses syslog-ng for sending logs to Log Insight. Documentation for syslog-ng is available online. The steps that follow represent an example of sending VNX and VMAX logs to Log Insight. Refer to the EMC Storage Analytics Release Notes for the EMC products that support Log Insight.

Procedure

1. Change to the directory that contains syslog-ng.conf:

cd /etc/syslog-ng

2. Save a copy of the file:

cp syslog-ng.conf syslog-ng.conf.noli

3. Save another copy to modify:

cp syslog-ng.conf syslog-ng.conf.tmp

4. Edit the temporary (.tmp) file by adding the following to the end of the file:

# Log Insight log forwarding for ESA
source esa_logs {
    # internal syslog-ng events -- required
    internal();
    # path to a log file to monitor and forward; check it every second
    # (follow_freq) and do no processing on it (no-parse);
    # repeat one file() entry per log file to forward
    file("/storage/vcops/log/adapters/EmcAdapter/ESA3_0_VNX_Adapter-1624/ESA3_0_VNX_Adapter-1624.log"
        follow_freq(1)
        flags(no-parse));
    file("/storage/vcops/log/adapters/EmcAdapter/ESA3_0_Adapter_VMAX-1134065754/ESA3_0_Adapter_VMAX-1134065754.log"
        follow_freq(1)
        flags(no-parse));
    file("/storage/vcops/log/adapters/EmcAdapter/ESA3_0_Adapter_VMAX-1001/ESA3_0_Adapter_VMAX-1001.log"
        follow_freq(1)
        flags(no-parse));
};

# protocol, destination IP address, and port
destination loginsight { udp("10.110.44.18" port(514)); };

# connect the source and destination to start logging
log {
    source(esa_logs);
    destination(loginsight);
};

5. Copy the .tmp file over the .conf file:

cp syslog-ng.conf.tmp syslog-ng.conf

6. Stop and restart logging:

Note

Use syslog, not syslog-ng, in this command.

service syslog restart

Results

Log in to Log Insight to verify that the logs are being sent.


Error handling and event logging

Errors in the EMC Storage Analytics operation are written to log files available through vRealize Operations Manager.

Error logs are available in the /data/vcops/log directory. This directory contains the vRealize Operations Manager logs.

Adapter logs (including adapters other than the EMC Adapter) are in /data/vcops/log/adapters.

View logs relating to EMC Storage Analytics operation in the vRealize Operations Manager GUI. Create and download a support bundle for use in troubleshooting.

Viewing error logs

EMC Storage Analytics enables you to view error log files for each adapter instance.

Procedure

1. Start the vRealize Operations Manager custom user interface and log in as administrator.

For example, in a web browser, type:

http://<vROPs_ip_address>/vcops-web-ent

2. Select Admin > Support, and then select the Logs tab.

3. Expand the vCenter Operations Collector folder, then the adapter folder, then the EmcAdapter folder. Log files appear under the EmcAdapter folder.

Double-click a log entry in the log tree.

Entries appear in the Log Content pane.

Creating and downloading a support bundle

Procedure

1. On the Logs tab, click the Create Support Bundle icon.

The bundle encapsulates all necessary logs.

2. Select the bundle name and click the Download Bundle icon.


Log file sizes and rollover counts

This topic describes the default log file size and rollover count for EMC Adapter instances.

Logs for each EMC Adapter instance are in folders under /data/vcops/log/adapters/EmcAdapter, one folder for each adapter instance. For example, if you have five EMC Adapter instances, a directory (folder) appears for each of them.

Log files in this directory follow this naming convention:

<EMC_adapter_name>-<adapter_instance_ID>.log.<rollover_count>

For example: VNX_File-131.log.9

The log filename begins with the name of the EMC Adapter instance. Filenames beginning with EmcAdapter are common to all connectors.

The number that follows the EMC Adapter instance name is the adapter instance ID, which corresponds to a VMware internal ID.

The last number in the filename indicates the rollover increment. When the default log file size is reached, the system starts a new log file with a new increment. The lowest-numbered increment represents the most recent log. Each rollover file is 10 MB (the default and recommended value). Ten rollovers are allowed by default; the system deletes the oldest log files.
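The naming convention above can be taken apart with basic shell parameter expansion. This is an illustrative sketch of the pattern, not an EMC-provided tool:

```shell
#!/bin/sh
# Split an ESA adapter log filename into adapter name, instance ID, and rollover.
f="VNX_File-131.log.9"   # example filename from the text

name=${f%%-*}            # text before the first "-"  -> VNX_File
rest=${f#*-}             # text after the first "-"   -> 131.log.9
id=${rest%%.log*}        # instance ID before ".log"  -> 131
roll=${f##*.}            # rollover increment          -> 9

echo "$name $id $roll"
```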

Finding adapter instance IDs

This topic describes how to find the ID for an EMC Adapter instance.

Procedure

1. In vRealize Operations Manager, select Administration > Environment > Adapter Types > EMC Adapter.

2. In the Internal ID column, you can view the IDs for adapter instances.

Configuring log file sizes and rollover counts

This topic describes how to change the default values for all adapter instances or for a specific adapter instance.

Before you begin

CAUTION

EMC recommends that you not increase the 10 MB default value for the log file size.

Increasing this value makes the log file more difficult to load and process as it grows in size. If more retention is necessary, increase the rollover count instead.

Procedure

1. On the vRealize Operations Manager virtual machine, find and edit the adapter.properties file:

   /usr/lib/vmware-vcops/user/plugins/inbound/emc-vcops-adapter/conf/adapter.properties

2. Locate these EMC Adapter instance properties:

   com.emc.vcops.adapter.log.size=10MB
   com.emc.vcops.adapter.log.count=10


3. To change the properties for all EMC Adapter instances, edit only the log size or log count values. For example:

   com.emc.vcops.adapter.log.size=12MB
   com.emc.vcops.adapter.log.count=15

4. To change the properties for a specific EMC Adapter instance, insert the EMC Adapter instance ID as shown in this example:

   com.emc.vcops.adapter.356.log.size=8MB
   com.emc.vcops.adapter.356.log.count=15
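As a sketch of the kind of edit steps 3 and 4 describe, the following raises the rollover count in a throwaway copy of the properties file. The mktemp/sed usage is illustrative and not part of the product; the caution above still applies to the 10 MB size default:

```shell
#!/bin/sh
# Work on a temporary copy of adapter.properties, not the live file.
props=$(mktemp)
cat > "$props" <<'EOF'
com.emc.vcops.adapter.log.size=10MB
com.emc.vcops.adapter.log.count=10
EOF

# Raise the rollover count for all instances; the size stays at the
# recommended 10MB default.
sed -i 's/^com\.emc\.vcops\.adapter\.log\.count=.*/com.emc.vcops.adapter.log.count=15/' "$props"

grep 'log.count' "$props"   # -> com.emc.vcops.adapter.log.count=15
```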

Activating configuration changes

This topic describes how to activate changes you made to the log file size or rollover count for an EMC Adapter instance.

Procedure

1. In vRealize Operations Manager, select Environment > Environment Overview.

2. In the navigation pane, expand Adapter Kinds, then select EMC Adapter.

3. In the List tab, select a resource from the list and click the Edit Resource icon.

The Resource Management window for the EMC Adapter opens.

4. Click the OK button. No other changes are required.

This step activates the changes you made to the log file size or rollover count for the EMC Adapter instance.

Verifying configuration changes

This topic describes how to verify the changes you made to the log file size or rollover counts of an EMC Adapter instance.

Procedure

1. Log into vRealize Operations Manager.

2. Change directories to /data/vcops/log/adapters/EmcAdapter.

3. Verify the changes you made to the size of the log files or the number of saved rollover backups.

If you changed:

- Only the default properties for log file size and rollover count, all adapter instance logs will reflect the changes.

- Properties for a specific adapter instance, only the logs for that adapter instance will reflect the changes.

- Log file size or rollover count to higher values, you will not notice the resulting changes until those thresholds are crossed.


Editing the Collection Interval for a resource

From the vRealize Operations Manager user interface, you can edit the Collection Interval for a resource.

The interval time is five minutes by default. Changing this time will affect the frequency of collection times for metrics, but the EMC Adapter will only recognize the change if the resource is the EMC Adapter instance. This is normal vRealize Operations Manager behavior.

Note

For VNXe, the maximum collection interval is 5 minutes.

Instructions on configuring Resource Management settings are provided in the vRealize Operations Manager online help.

Configuring the thread count for an adapter instance

This topic describes two ways to configure the thread count for an adapter instance.

Only administrative personnel should perform this procedure. Use this procedure to change the thread count for best performance. If the thread count is not specified in adapter.properties, the thread count is the vCPU count + 2. The maximum allowed thread count is 20.
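The default formula just stated (vCPU count + 2, capped at 20) can be expressed as a small shell calculation. This is a sketch of the stated rule, not code shipped with the adapter:

```shell
#!/bin/sh
# Compute the default adapter thread count: vCPU count + 2, capped at 20.
vcpus=$(getconf _NPROCESSORS_ONLN)   # online processor count on Linux
threads=$((vcpus + 2))
if [ "$threads" -gt 20 ]; then
    threads=20                       # never exceed the documented maximum
fi
echo "$threads"
```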

Procedure

1. Access the adapter.properties file. You can find this file at:

   /usr/lib/vmware-vcops/user/plugins/inbound/emc-vcops-adapter/conf/adapter.properties

2. Open the file and edit the thread count property for all adapter instances or for a specific adapter instance:

- To edit the thread count property for all adapter instances, change the com.emc.vcops.adapter.threadcount property.

- To edit the thread count property for a specific adapter instance, insert the adapter instance ID after adapter (for example, com.emc.vcops.adapter.7472.threadcount) and change the property value.

Note

To find an adapter instance ID, refer to Finding adapter instance IDs on page 205.

3. To activate the property change, restart the adapter instance in vRealize Operations Manager.


Connecting to vRealize Operations Manager by using SSH

This topic describes how to use SSH to log in to vRealize Operations Manager as root.

Procedure

1. Open the VM console for the vRealize Operations Manager.

2. Press Alt-F1 to open the command prompt.

3. Enter root for the login and leave the password field blank.

   You are prompted for a password.

4. Set the root password.

   You are now logged in.

5. Use this command to enable SSH:

   service sshd start

You will be able to log in as root by using SSH.


Frequently asked questions

How many nodes are supported per vRealize Operations Manager cluster?

vRealize Operations Manager clusters consist of a master node and data nodes. A total of eight nodes is supported: the master node (required) and up to seven data nodes.

How many resources and metrics are supported per node in vRealize Operations Manager?

- Small node (4 vCPU, 16 GB memory): supports 2,000 objects and 1,000,000 metrics

- Medium node (8 vCPU, 32 GB memory): supports 6,000 objects and 3,000,000 metrics

- Large node (16 vCPU, 64 GB memory): supports 10,000 objects and 5,000,000 metrics

How does a trial license work?

A 90-day trial license is provided for each platform that EMC Storage Analytics supports.

The 90-day trial license provides the same features as a permanent license, but after 90 days, the adapter stops collecting data. You can add a permanent license at any time during or after the trial period.

How do health scores work?

Health scores measure how normal a resource is, grading it on a scale of 0-100. A health score of 100 indicates normal behavior, while a lower health score indicates that the resource is acting abnormally. The resource may not be in an unhealthy state, but there is an abnormality. Health scores are calculated by a proprietary algorithm that accounts for several factors, including thresholds and historical statistics. vRealize Operations Manager may take up to 30 days to gather enough information to determine what is considered normal in your environment. Until then, you may not see any changes in your health scores.

I deleted a resource. Why does it still appear in the vRealize Operations Manager?

vRealize Operations Manager will not delete any resources automatically because it retains historical statistics and topology information that may be important to the user. The resource enters an unknown state (blue). To remove the resource, delete it on the Environment Overview page.

What does the blue question mark in the health score indicate?

The blue question mark indicates that vRealize Operations Manager was unable to poll that resource. It will retry during the next polling interval.

What does it mean when a resource has a health score of 0?

This indicates that the resource is either down or not available.

Why are my EMC Adapter instances marked down after upgrading to the latest version of the EMC Adapter?

EMC Adapter instances require a license to operate. Edit your EMC Adapter instances to add license keys obtained from EMC. Select Environment Overview > Configuration > Adapter Instances.

I have multiple EMC Adapter instances for my storage systems, and I have added license keys for each of them. Why are they still marked down?

License keys are specific to the model for which the license was purchased. Verify that you are using the correct license key for the adapter instance. After adding a license, click the Test button to test the configuration and validate the license key. If you saved the configuration without performing a test and the license is invalid, the adapter instance will be marked Resource down. To verify that a valid license exists, select Environment Overview. The list that appears shows the license status.

How is the detailed view of vCenter resources affected in EMC Storage Analytics?

Any changes in the disk system affect the health of vCenter resources such as virtual machines, but EMC Storage Analytics does not show changes in other subsystems. Metrics for other subsystems will either show No Data or ?.

Can I see relationships between my vCenter and EMC storage resources?

Yes. Relationships between resources are not affected and you can see a top to bottom view of the virtual and storage infrastructures if the two are connected.

How do I uninstall EMC Storage Analytics?

No uninstall utility exists. However, to remove EMC Storage Analytics objects, remove the adapter instances for which the Adapter Kind is EMC Adapter (Environment > Configuration > Adapter Instances). Then delete objects in the Environment Overview for which the Data Source is EMC (Environment > Environment Overview).

If I test a connection and it fails, how do I know which field is wrong?

Unfortunately, the only field that produces a unique message when it is wrong is the license number field. If any other field is wrong, the only message is that the connection was not successful. To resolve the issue, verify that all the other fields are correct. Remove any trailing white space from the values.

Can I modify or delete a dashboard?

Yes, the environment can be customized to suit the needs of the user. Rename the dashboard so that it is not overwritten during an upgrade.

Why do some of the boxes appear white in the Overview dashboard?

While the metrics are being gathered for an adapter instance, some of the heat maps in the dashboard may be white. This is normal. Another reason the boxes may appear white is that the adapter itself or an individual resource has been deleted, but the resources remain until they are removed from the Environment Overview page.

Which arrays does EMC Storage Analytics support?

A complete list of the supported models for EMC storage arrays is available in the EMC Storage Analytics Release Notes.

Will EMC Storage Analytics continue to collect VNX statistics if the primary SP or CS goes down?

Storage Analytics will continue to collect statistics through the secondary Storage Processor if the primary Storage Processor goes down. EMC Storage Analytics will automatically collect metrics from the secondary Control Station in the event of a Control Station failover. Note that the credentials on the secondary Control Station must match the credentials on the primary Control Station.

Does the Unisphere Analyzer for VNX need to be running to collect metrics?

No. VNX Block metrics are gathered through naviseccli commands and VNX File metrics are gathered through CLI commands. However, statistics logging must be enabled on each storage processor (SP) on VNX Block, and statistics logging will have a performance impact on the array. No additional services are required for VNX File.

How does the FAST Cache heat map work?

The FAST Cache heat maps are based on the FAST Cache read and write hit ratios. A heat map turns red when these ratios are low, because that indicates FAST Cache is not being utilized efficiently. These heat maps turn green when FAST Cache is servicing a high percentage of I/O.


I purchased a license for the model of the VNX array that I plan to monitor. When I configure the adapter instance for VNX File, why does an "invalid license" error message appear?

Control Station may not be reporting the correct model of the array. Log in to Control Station and check the array model with the command /nas/sbin/model. Verify that the array model returned matches the model on the Right to Use certificate.

After a Control Station failover, why is the VNX File adapter instance marked down and why does metric collection stop?

The failover may have been successful, but the new Control Station may not be reporting the correct model of the array. This results in a failure to validate the license, and all data collection stops. Log in to Control Station and check the array model with the command /nas/sbin/model. If the model returned does not match the actual model of the array, Primus case emc261291 in the EMC Knowledgebase provides possible solutions.

The disk utilization metric is not visible for my VNX Block array. Why not?

The disk utilization metric is not supported on VNX arrays running a VNX Block OE earlier than Release 32. Upgrade to VNX Block OE Release 32 or later to see this metric in vRealize Operations Manager.

I am unable to successfully configure an EMC Adapter instance for VNX File when using a user with read-only privileges. Why does this happen?

A user with administrative privileges is required when configuring an EMC Adapter instance for VNX File arrays running an OE earlier than 7.1.56.2. Upgrade to VNX File OE 7.1.56.2 or later to be able to configure an adapter instance using a user with read-only privileges.

The user LUNs on my VNX Block vault drives are not reporting performance metrics. Why not?

Performance metrics are not supported for user LUNs on vault drives. Place user LUNs on drives other than vault drives.

Why is my VMAX array showing as red rather than blue (unknown) just after adding it?

If the array is red, the SMI-S Provider may be down or there may be an incorrect parameter in the adapter configuration. Return to the adapter instance configuration and test the configured instance. If the test fails, verify that all fields are correct. Determine whether the SMI-S server is running and can recognize the VMAX array that you are configuring. If the license and user credentials are correct but connection issues remain, see the EMC Storage Analytics Release Notes.

All my configured VMAX adapter instances changed to red. Why?

This usually happens when the EMC SMI-S Provider is unavailable or there is a clock synchronization problem. This might be network-related or application-related. The EMC SMI-S Provider may have reached the connection limit if the network IP is available. Look for errors in the cimom.log file on the SMI-S box similar to:

26-Mar-2013 05:47:59.817 -900-E- WebServer: The webserver hits its connection limit, closing connection.

Follow the steps in the SMI-S Provider Release Notes to increase the count.

A message next to the array will indicate a clock synchronization problem if a problem exists.

Frequently asked questions

211

Troubleshooting

I received the following error when I attempted to modify the VNX Overview dashboard although I have only VMAX arrays. Is this a problem?

Error occurred

An error occurred on the page; please contact support.

Error Message: org.hibernate.exception.SQLGrammerException: could not execute query

No, this is a generic error that VMware produces when you attempt to modify a component you do not have.
