Hitachi NAS Platform Best Practices Guide for NFS with VMware vSphere
By Global Services Engineering
MK-92HNAS028-0
© 2011-2013 Hitachi, Ltd. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any
means, electronic or mechanical, including photocopying and recording, or stored in
a database or retrieval system for any purpose without the express written permission
of Hitachi, Ltd.
Hitachi, Ltd., reserves the right to make changes to this document at any time without
notice and assumes no responsibility for its use. This document contains the most
current information available at the time of publication. When new or revised
information becomes available, this entire document will be updated and distributed
to all registered users.
Some of the features described in this document might not be currently available.
Refer to the most recent product announcement for information about feature and
product availability, or contact Hitachi Data Systems Corporation at https://
portal.hds.com.
Notice: Hitachi, Ltd., products and services can be ordered only under the terms and
conditions of the applicable Hitachi Data Systems Corporation agreements. The use of
Hitachi, Ltd., products is governed by the terms of your agreements with Hitachi
Data Systems Corporation.
Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other
countries. Hitachi Data Systems is a registered trademark and service mark of
Hitachi, Ltd., in the United States and other countries.
Archivas, BlueArc, Dynamic Provisioning, Essential NAS Platform, HiCommand, HiTrack, ShadowImage, Tagmaserve, Tagmasoft, Tagmasolve, Tagmastore, TrueCopy,
Universal Star Network, and Universal Storage Platform are registered trademarks of
Hitachi Data Systems Corporation.
AIX, AS/400, DB2, Domino, DS8000, Enterprise Storage Server, ESCON, FICON,
FlashCopy, IBM, Lotus, OS/390, RS6000, S/390, System z9, System z10, Tivoli, VM/
ESA, z/OS, z9, zSeries, z/VM, z/VSE are registered trademarks and DS6000, MVS, and
z10 are trademarks of International Business Machines Corporation.
All other trademarks, service marks, and company names in this document or website
are properties of their respective owners.
Microsoft product screen shots are reprinted with permission from Microsoft
Corporation.
Notice
Hitachi Data Systems products and services can be ordered only under the terms and conditions of
Hitachi Data Systems’ applicable agreements. The use of Hitachi Data Systems products is governed
by the terms of your agreements with Hitachi Data Systems.
This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit
(http://www.openssl.org/). Some parts of ADC use open source code from Network Appliance, Inc.
and Traakan, Inc.
Part of the software embedded in this product is gSOAP software. Portions created by gSOAP are
copyright 2001-2009 Robert A. Van Engelen, Genivia Inc. All rights reserved. The software in this
product was in part provided by Genivia Inc. and any express or implied warranties, including, but
not limited to, the implied warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall the author be liable for any direct, indirect, incidental, special,
exemplary, or consequential damages (including, but not limited to, procurement of substitute
goods or services; loss of use, data, or profits; or business interruption) however caused and on any
theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise)
arising in any way out of the use of this software, even if advised of the possibility of such damage.
The product described in this guide may be protected by one or more U.S. patents, foreign patents,
or pending applications.
Notices and Disclaimer
The performance data contained herein was obtained in a controlled isolated environment. Actual
results that may be obtained in other operating environments may vary significantly. While Hitachi
Data Systems Corporation has reviewed each item for accuracy in a specific situation, there is no
guarantee that the same results can be obtained elsewhere.
All designs, specifications, statements, information and recommendations (collectively, "designs") in
this manual are presented "AS IS," with all faults. Hitachi Data Systems Corporation and its
suppliers disclaim all warranties, including without limitation, the warranty of merchantability,
fitness for a particular purpose and non-infringement or arising from a course of dealing, usage or
trade practice. In no event shall Hitachi Data Systems Corporation or its suppliers be liable for any
indirect, special, consequential or incidental damages, including without limitation, lost profit or
loss or damage to data arising out of the use or inability to use the designs, even if Hitachi Data
Systems Corporation or its suppliers have been advised of the possibility of such damages.
This document has been reviewed for accuracy as of the date of initial publication. Hitachi Data
Systems Corporation may make improvements and/or changes in product and/or programs at any
time without notice. No part of this document may be reproduced or transmitted without written
approval from Hitachi Data Systems Corporation.
Notice of Export Controls
Export of technical data contained in this document may require an export license from the United
States government and/or the government of Japan. Contact the Hitachi Data Systems Legal
Department for any export compliance questions.
Document Revision Level

Revision          Date            Description
MK-92HNAS028-00   March 2013      First publication
MK-92HNAS028-01   December 2013   Revision 1, replaces and supersedes MK-92HNAS028-00
Contact
Hitachi Data Systems
2845 Lafayette Street
Santa Clara, California 95050-2627
https://portal.hds.com
North America: 1-800-446-0744
Contributors
Global Services Engineering would like to recognize and sincerely thank the following contributors
to this document for their expertise, feedback, and suggestions:
• Francisco Salinas
• Paul Morrissey
• Technical Marketing Group
References
• Hitachi NAS Platform File Services System Administration Guide (available from the HNAS System Management Unit (SMU) GUI under Documentation)
• ESX Configuration Guide, ESX 4.0, vCenter Server 4.0: http://www.vmware.com/pdf/vsphere4/r40/vsp_40_esx_server_config.pdf
• VMware Server Configuration Guide, ESX Server 3.0.1 and VirtualCenter 2.0.1: http://www.vmware.com/pdf/vi3_server_config.pdf
• VMware vSphere 4.0 Configuration Maximums: http://www.vmware.com/pdf/vsphere4/r40/vsp_40_config_max.pdf
• Best Practices for Running VMware vSphere on Network Attached Storage: http://vmware.com/files/pdf/VMware_NFS_BestPractices_WP_EN.pdf
• VMware Knowledge Base Article 1013413, Configuring Flow Control on ESX and ESXi: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1013413
• VMware Knowledge Base Article 1007909, Definition of the advanced NFS options: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1007909
• Best Practices for Running VMware vSphere® on Network-Attached Storage (NAS): http://www.vmware.com/files/pdf/techpaper/VMware-NFS-Best-Practices-WP-EN-New.pdf
Table of Contents
Intended audience .................................................................................................................................................................. 5
About this document ............................................................................................................................................................... 5
Overview ................................................................................................................................................................................ 5
VMware vSphere on NFS ....................................................................................................................................................... 5
NFS and virtual machines ..................................................................................................................................... 5
HDS storage system terminology ........................................................................................................................................... 6
System drives ........................................................................................................................................................ 6
Storage pools ........................................................................................................................................................ 6
File systems .......................................................................................................................................................... 6
HNAS Virtual Volumes (ViVols) ............................................................................................................................ 6
Enterprise virtual server ........................................................................................................................................ 6
Virtual Infrastructure Integrator (V2I) ...................................................................................................... 7
Storage provisioning ............................................................................................................................................................... 8
NFS volume provisioning ...................................................................................................................................... 8
VMware ESXi NFS configuration .......................................................................................................................... 8
Network connectivity ............................................................................................................................................. 9
Enabling network connectivity and configuring the VMkernel................................................................................... 9
Creating a NAS datastore ...................................................................................................................................... 10
Dynamically growing NFS file systems ................................................................................................................................. 11
Using NAS with vSphere ...................................................................................................................................................... 12
Backup and recovery ............................................................................................................................................................ 12
High availability and replication ............................................................................................................................................ 14
VAAI support ........................................................................................................................................................................ 15
Migrating Virtual Machines to HNAS Datastores using Storage vMotion.............................................................................. 18
vSphere Storage APIs for Storage Awareness (VASA) support ........................................................................................... 19
HNAS best practices ............................................................................................................................................................ 20
General Recommendations.................................................................................................................................... 20
Storage recommendations ..................................................................................................................................... 21
Networking recommendations ................................................................................................................................ 22
Creation of large VMDK files .................................................................................................................................. 23
EVS failover timeout on guest OSes ...................................................................................................................... 24
VMware VMDK thin provisioning ............................................................................................................................ 24
HNAS Deduplication ............................................................................................................................................................. 24
VMware network optimization in vSphere 4.x ....................................................................................................... 25
Summary .............................................................................................................................................................................. 26
Intended audience
This document is intended for customers, authorized service providers, and Hitachi Data Systems
(HDS) personnel.
About this document
This document covers VMware best practices specific to HDS HNAS storage. It supplements the
VMware NFS Best Practices document that VMware published in 2013, on which HDS, VMware,
and two other vendors collaborated. That document is available at
http://www.vmware.com/resources/techresources/10096
Please check the HDS support portal or Community (http://community.hds.com/) site for the latest
version of this document.
Overview
HDS has taken an evolutionary step in network storage by architecting the file system and
operating system into a hardware-based appliance that is capable of serving data at high
throughput and IOPS rates. This approach, combined with an advanced feature set and storage
virtualization, makes Hitachi network-attached storage (HNAS) file controllers an ideal storage
solution for VMware environments, serving vSphere and vCloud workloads.
Although the hardware architecture is unique, the HNAS 3000 series and 4000 series platforms
provide standards-based iSCSI, CIFS, NFS, FTP, and NDMP protocols for access and backups.
The platform’s advanced feature set includes file and file system snapshots, replication, virtual
servers, virtual volumes, thin provisioning, dynamic volume expansion, transparent data
migration, and a global namespace. These features help to provide the flexibility to meet the
requirements of VMware software products’ functionality. That functionality includes virtual
machine (VM) diskless booting, backup, restore, live migrations with vMotion software, cloning,
and disaster recovery (DR).
VMware vSphere on NFS
Using NAS with the NFS protocol is a fully supported storage option. You can use all the vSphere
software’s supported features and related VMware products, including:
• vMotion
• Storage vMotion
• VMware Consolidated Backup (VCB)
• Storage I/O Control support
• VAAI support
• VASA support
• Site Recovery Manager (SRM)
• VMware Distributed Resource Scheduling (DRS)
• Fault tolerance
The NFS protocol is a mature, stable protocol, and HNAS storage devices provide excellent NFS
performance through a unique, hardware-accelerated platform.
NFS and virtual machines
Virtual machines (VMs) exist as files that are referred to as virtual machine disks (VMDKs). This
document discusses how to place VMDK files on HNAS datastores. NAS datastores appear in
the vSphere client similar to the way datastores on virtual machine file systems (VMFS) do. Guest
operating systems of all types (Windows, Linux, and so on) can be stored on NAS datastores.
HDS storage system terminology
Some specific terms related to the HDS storage solution are briefly defined here. If you are
already familiar with the configuration of HDS systems, you can skip this section.
System drives
The HNAS system uses RAID technology as a foundation for data protection. The backend
storage subsystem is configured using an appropriate data protection scheme, and the resulting
volumes are presented as logical units for the HNAS system to use. These logical units are
referred to as system drives (SDs) and are combined to create storage pools.
Storage pools
The HNAS system combines SDs to create a virtualized pool of storage (which is also known as
a span in the system CLI). Storage pools contain one or more file systems that an administrator
can share through the NFS protocol. You can dynamically extend the storage pool by adding
additional SDs at any time.
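As a hedged illustration of how a storage pool might be extended from the system CLI: the commands below are a sketch only. Exact command names and syntax vary by HNAS firmware release, and the span label and system drive IDs shown are placeholders.

```shell
# Hypothetical sketch: extending a storage pool (span) from the HNAS CLI.
# "vm_pool" and the SD IDs are placeholders; syntax varies by release.
span-list                 # list existing spans and their capacity
span-expand vm_pool 8 9   # add system drives 8 and 9 to span "vm_pool"
```

See the Hitachi NAS Platform File Services System Administration Guide for the authoritative syntax.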
File systems
File systems are provisioned within storage pools, and they can grow, independently of one
another, according to guidelines set by the administrator. File systems contain files, directories,
and virtual volumes.
HNAS Virtual Volumes (ViVols)
HNAS virtual volumes (ViVols) are directories in a file system. You can apply a quota to each
virtual volume. Virtual volumes may also be used as self-contained units for replication or data
migration. Virtual volumes make it easy to track space used by a directory hierarchy. When
quotas are set, they can be used to display only the amount of space the administrator wants the
vSphere software to recognize. Virtual volumes and quotas provide an administrator tremendous
provisioning flexibility, as well as simple, zero-impact space accounting. HDS highly recommends
that you use virtual volumes.
Enterprise virtual server
An enterprise virtual server (EVS) encapsulates one or more IP addresses, one or more file
systems, and one or more shares. Today, each HNAS system or cluster can present up to 64
virtual servers. Virtual servers can migrate between physical cluster nodes, similar to how
VMware VMs migrate between ESXi hosts. An EVS appears on the network as a standard NFS
file server.
Figure 1 - Relationship between HNAS storage components
Virtual Infrastructure Integrator (V2I)
Hitachi NAS Virtual Infrastructure Integrator is a management console plug-in and associated
software for VMware vCenter, accessed through the vSphere client, that lets VM administrators
manage their virtual machines' data management services effectively. With NAS Virtual
Infrastructure Integrator, VMware administrators simplify the management of virtual machine
backup, restore, cloning, and NFS datastore management. NAS Virtual Infrastructure Integrator
provides the following key features for VM administrators:
• Efficient, scalable, consistent, and managed virtual machine (VM) backup
• Fast, space-efficient clones of VMs in 95% less time and space
• Visibility into the NFS storage services serving VMware for effective management
• Logical, reliable protection of hundreds of VMs with an unlimited single low-cost license
Specific to backup capabilities, V2I:
• Leverages HNAS storage-based snapshot technology to deliver VM-level snapshots
• Lets the VM admin apply schedule and retention policies
• Provides application-consistent backups for assured recovery
• Empowers VM admins with direct access to a scalable VM backup and recovery process
• Augments traditional backup and provides better VM RTO/RPO
Storage provisioning
The VMware Virtual Infrastructure product supports iSCSI (with hardware and software initiators),
Fibre Channel, and NAS storage systems. These storage options give you flexibility in how you set
up the disk requirements based on your business needs, cost, performance, and so on.
NFS volume provisioning
There are two methods for creating NFS exports on HNAS servers: through the web-based SMU
GUI, or through the HNAS CLI. This document does
not cover file system creation. See the Hitachi NAS Platform File Services System Administration
Guide for information about file system and export creation.
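Although file system creation itself is out of scope, the export step can be sketched. The commands below are hypothetical: the export name, file system label, and client list are placeholders, and the exact syntax varies by HNAS release.

```shell
# Hypothetical sketch: creating an NFS export for ESXi hosts from the HNAS
# CLI. Export name, file system label, and allowed clients are placeholders.
nfs-export add -c "192.168.10.0/24" /vmware_ds1 vm_fs1
nfs-export list            # verify the new export
```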
VMware ESXi NFS configuration
The ESXi host supports the NFSv3 protocol to enable communication between an NFS client and
NFS server. The client issues requests for information from the server and the server replies with
the result.
The NFS client that is built into the ESXi host enables you to access the NFS server and use NFS
volumes to store virtual machine disks (VMDKs). See the example in Figure 2 - vSphere with
NFS datastores.
Figure 2 - vSphere with NFS datastores
Network connectivity
Enabling network connectivity and configuring the VMkernel
IP-based storage uses the TCP/IP stack as its foundation for communication; for ESXi hosts, this
includes iSCSI and NAS. A VMkernel port uses the TCP/IP protocol stack to handle the
transport of data. See Figure 3 - VMkernel creation in the Add Network Wizard window.
To create a VMkernel
1. In the VMware Virtual Infrastructure client, select an ESXi host.
2. Select the Configuration tab, and then click the Networking link to add networking.
3. Click Next.
4. For the connection type, select VMkernel, and then select one of the physical network adapters.
5. Click Next.
6. Enter VMkernel in the Network Label text box.
7. Click Next.
8. Enter the IP address and the subnet mask.
9. To provide the VMkernel default gateway, click Edit, and enter the gateway address.
10. Click OK, Next, and then Finish.
Note: HDS recommends that the VMkernel network be set up in a private network or with a
unique VLAN ID that provides network isolation.
Figure 3 - VMkernel creation
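The same VMkernel setup can also be sketched from the ESXi shell on vSphere 5.x. This is a hypothetical example: the vSwitch, port group, interface name, and addresses are placeholders for your environment.

```shell
# Hypothetical sketch: creating a VMkernel port for NFS traffic from the
# ESXi 5.x shell. Names and addresses are placeholders.
esxcli network vswitch standard portgroup add -p VMkernel-NFS -v vSwitch1
esxcli network ip interface add -i vmk1 -p VMkernel-NFS
esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.10.10 -N 255.255.255.0
```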
Creating a NAS datastore
To create NFS storage
1. In the VMware Virtual Infrastructure client, select an ESXi host.
2. Select Configuration > Storage (SCSI, SAN, and NFS) > Add Storage.
3. In the Storage Type dialog box, select the Network File System storage type, and
then click Next.
Figure 4 - Storage Type dialog box
4. In the Locate Network File System dialog box, enter the NAS server name, folder,
and datastore name, and then click Next.
Figure 5 - Add Storage/Locate Network File System dialog box
Figure 6 - Display NFS volume setup
Important: Ensure that you mount datastores with the same volume label on all vSphere
ESXi hosts within VMware high availability (HA) environments.
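The same datastore can also be mounted from the ESXi shell on vSphere 5.x. This is a hypothetical example: the server name, export path, and datastore label are placeholders.

```shell
# Hypothetical sketch: mounting an HNAS NFS export as a datastore from the
# ESXi 5.x shell. Server, share, and label are placeholders.
esxcli storage nfs add --host=evs1.example.com --share=/vmware_ds1 --volume-name=HNAS_DS1
esxcli storage nfs list    # confirm the datastore is mounted
```

Using identical labels in commands like this on every host helps satisfy the HA requirement noted above.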
Dynamically growing NFS file systems
To increase the storage on the NAS server in a VMware vSphere environment, you can expand
the NFS file system on the HNAS system.
To expand an NFS file system
1. In the SMU GUI, navigate to Home > Storage Management > File System.
2. Select a file system and click details to display the File System Details window.
3. Click expand to display the Expand File System window.
Figure 7 - Expanding the file system
4. In the Expand File System window, enter the new size of the file system, and then click OK.
5. Refresh the storage on each ESXi host.
6. From the ESXi host, navigate to the Configuration tab.
7. Under Hardware, select Storage, and then click Refresh to display the new size of the NFS datastore.
The expanded storage is ready for use.
Figure 8 - Displaying expanded NFS file system on an ESXi host
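The refresh in steps 5 through 7 can also be performed from the ESXi shell. This is a hypothetical example; the datastore label is a placeholder.

```shell
# Hypothetical sketch: refreshing storage on an ESXi host after the file
# system has been expanded on the HNAS side. "HNAS_DS1" is a placeholder.
vim-cmd hostsvc/storage/refresh
df -h | grep HNAS_DS1      # verify the new datastore capacity
```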
Using NAS with vSphere
The NFS protocol is a mature and well-known protocol. It is widely installed in all sizes of
enterprises, and typically does not require any additional investment. The NFS protocol supports
most of the functionality and VMware products that vSphere software currently supports. That
support includes:
• Creating VMs
• Booting a VM from an NFS share
• Live migration of vSphere VMs with the VMware vMotion product
• Live simultaneous migration of vSphere VMs and their storage with the VMware Storage vMotion product
• VMware high availability
• Site Recovery Manager (SRM)
Additionally, you can create a Microsoft Windows OS image on a NAS device using NFS
protocols to access the data on it. Another popular implementation is to load all VM and
application ISO images on a NAS datastore using NFS. These ISO images can then be
presented as CD-ROMs to virtual machines.
Backup and recovery
Most software and system vendors also provide backup technology as part of the solution they
offer to their customers. VMware offers the VMware Consolidated Backup (VCB) product, and
also provides snapshot and restore functionality. However, SAN-based snapshot and recovery
can be complicated. When recovering block I/O at the LUN level, some recovery processes may
require a restore of the entire LUN.
The HNAS system makes it easier to handle the recovery process because the storage system is
file-based, which makes it easy to identify and recover images. The NFS protocol is not
proprietary, so you can also present a HNAS file to another type of server, such as VMware
Workstation, and power up the VM in the event of a serious host outage.
V2I can leverage standard HNAS file system-level snapshots or HNAS file-level cloning, known as
Hitachi NAS File Clone. Leveraging V2I reduces complexity and significantly decreases
restore times when using Hitachi NAS File Clone, which also allows for nearly instant cloning
and deployment of new VMs from templates.
Figure 9 – V2I
As you can see in Figure 9, it is intuitive to recover a VM, a VMDK, or individual files when using
the V2I plug-in.
For environments that do not have NAS Virtual Infrastructure Integrator (V2I), or that run older
vSphere 4.x releases, the following instructions detail an alternative recovery procedure.
To restore a VM from an NFS datastore using HDS snapshots:
1. Log on to the vSphere host.
2. Power off the VM:
a. In the VMware Infrastructure client, right-click the VM, and then select Power Off.
Note: If this step does not work in the GUI, use the command line method.
b. From the service console of the vSphere host, issue the following commands:
vmware-cmd cfg stop
vmware-cmd cfg stop hard
where cfg is the complete path to the configuration file. You can determine the path by
issuing the vmware-cmd -l command.
c. To check the state of the VM, enter the vmware-cmd cfg getstate command.
3. On the recovery VM, rename the VMDK file:
a. Navigate to the directory containing the VMDK by issuing the following command:
cd /vmfs/volumes/NFS_datastore/VM_directory
b. Rename the VMDK file by issuing the mv oldname newname command.
4. From the snapshot directory, copy the VMDK file back to the source VM directory:
a. Change to the snapshot directory containing the VMDK by issuing the following command:
cd /vmfs/volumes/NFS_datastore/.snapshot/VM_directory
b. Copy the snapshot VMDK file to the source directory:
/vmfs/volumes/NFS_datastore/VM_directory
Note: To access the snapshot directory, select the show snapshots attribute when you
create the NFS export on the HDS storage.
5. Power on the VM:
a. In the VMware Infrastructure client, right-click the VM, and then select Power On, or
b. issue the vmware-cmd cfg start command.
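Where the service console is available, the procedure above can be sketched as a single command sequence. This is a hypothetical example: the configuration file path, datastore name, and VM/VMDK names are placeholders.

```shell
# Hypothetical sketch of the restore flow, run from the ESX 4.x service
# console. CFG, the datastore name, and VM/VMDK names are placeholders.
CFG=/vmfs/volumes/NFS_datastore/VM_directory/myvm.vmx
vmware-cmd "$CFG" stop hard                    # power off the VM
cd /vmfs/volumes/NFS_datastore/VM_directory
mv myvm.vmdk myvm.vmdk.bad                     # set the damaged disk aside
cp /vmfs/volumes/NFS_datastore/.snapshot/VM_directory/myvm.vmdk .
vmware-cmd "$CFG" start                        # power the VM back on
```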
The HNAS storage provides capabilities for snapshots, recovery, replication, and NDMP backup.
For data recovery, NAS is different from SAN and iSCSI block-based devices, which may require
restoring an entire datastore. The NDMP protocol is a standard data backup protocol for NAS
servers. NDMP backs up NFS files to a local tape device without consuming ESXi host
resources or network bandwidth.
High availability and replication
High availability and replication technology are important to every organization. HNAS offers high
availability and replication solutions to support business continuity, data protection, and disaster
recovery.
HNAS servers can provide both local and metro clustering. Local clustering protects against NAS
head failure, but not site or storage failure. HNAS Synchronous Disaster Recovery (SyncDR)
Cluster provides storage-based synchronous replication within a data center, campus, or metro
area to provide additional business protection.
HNAS offers replication at both the file and object level to address data protection and business
continuity. HNAS integrates with VMware vCenter Site Recovery Manager™ (SRM) by providing
a Storage Replication Adapter (SRA). The HNAS SRA is available for vSphere 5.0 and for
vSphere 5.1. The SRA Deployment Guide and the SRA adapter are available from the download
section at my.vmware.com.
VAAI support
VMware API for Array Integration (VAAI) for NAS is an adapter plug-in that storage vendors create
to provide storage offload services for VMware vSphere/vCenter environments using NFS
datastores. This is similar to VAAI for FC block storage; however, VAAI-NAS supports additional
primitives. Hitachi Data Systems' VAAI for NAS adapter takes advantage of capabilities inherent
in the HDS enterprise NFS/NAS architecture, such as the hardware file system and VM-level
hardware snapshot and clone operations with File Clone technology.
The VAAI-NAS adapter is leveraged by VMware vCenter for provisioning and operational tasks,
such as Clone VM and Deploy VM from Template operations, dramatically speeding up those
important operations by offloading them to the HNAS platform. VAAI-NAS is also used by VMware
Horizon View to offload and speed up desktop provisioning and recompose operations. Finally,
vCloud Director takes advantage of VAAI-NAS, similar to vCenter, to dramatically speed up
provisioning and VM deployments such as deploying VM/vApp instances from templates. (That is,
operations mentioned in this section now take seconds, compared to minutes without VAAI-NAS.)
Beginning with HNAS system release 11.1, HNAS supports calls from a VAAI for HNAS adapter.
It supports all the primitives:
• Full File Clone
• Fast File Clone
• Reserve Space
• Extended Stats
The VAAI plug-in is available to customers for download from either VMware or the HDS support
portal. The VAAI for HNAS plug-in must be installed on each ESXi host that needs to leverage the
VAAI-NAS primitives. You can leverage VMware vSphere Update Manager to update multiple
hosts automatically.
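As a hedged sketch of the host-side installation (the VIB bundle name and datastore path are placeholders; check the plug-in's release notes for the actual package name and reboot requirements):

```shell
# Hypothetical sketch: installing the VAAI-NAS plug-in on an ESXi host and
# verifying acceleration. Bundle name and path are placeholders.
esxcli software vib install -d /vmfs/volumes/datastore1/hnas-vaai-nas-plugin.zip
# After a host reboot, NFS datastores should report hardware acceleration:
esxcli storage nfs list    # check the Hardware Acceleration column
```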
Reminder: HDS recommends thin-provisioned or lazy-zeroed thick VMDK creation when using VAAI
for HNAS, for maximum performance and space efficiency benefits. See the HNAS-specific
recommendations section, listed next, for more details.
Full File Clone
The Full File Clone API enables the ESXi host to offload a cold clone operation or template
deployment to the storage array. Full File Clone is used when the source and target are on
different datastores. One of the important benefits of using Full File Clone on HNAS is that
HNAS recognizes sparseness (blocks allocated but not used) in VMDKs, which reduces the
time to create Full File Clones. One important point to note is that this primitive does not
support Storage vMotion; this is true for any NAS vendor's VAAI for NAS on vSphere 5.1.
Storage vMotion on NFS datastores continues to use the VMkernel software data mover, which
internal HNAS Engineering tests have shown to be quite efficient. The primitive can be used
only when the virtual machine is powered off.
Fast File Clone
With the Fast File Clone API, it is possible to offload provisioning and cloning of VMs using
linked clone technology within the HNAS platform. That is, when the Fast File Clone API is
called by VMware vCenter, Horizon View, or vCloud Director, this feature takes advantage of
HNAS File Clone to provide fast, space-efficient clones of a source VM or source template
(operations that previously took minutes without VAAI-NAS now take seconds).
VMware vCenter: When a VMware administrator performs provisioning tasks within vCenter, such
as Clone VM or Deploy VM from Template operations, vCenter uses VAAI-HNAS to dramatically
speed up those important operations by offloading them to the HNAS platform.
Specific to VMware® Horizon View™ 5.2, with unique HDS support for the VCAI (View Composer
Array Integration) primitive, desktop provisioning also calls VAAI-HNAS to offload the
operation to the HNAS platform.
Finally, with the release of vSphere 5.1 and
with VMware vCloud® Director™ 5.1, this
primitive is fully supported for VMware vCloud
vApps when VAAI-NAS is enabled on the
datastore and Fast Provisioning Using Linked
Clones is selected to speed up provisioning
operations.
Reserve Space
Reserve Space is another VAAI for NAS primitive. Without VAAI for NAS, you cannot preallocate
or zero out space for Virtual Machine Disk formats (VMDKs) on NFS without using the CLI.
Historically, the only option available was to build thin VMDKs on NFS or manually create zeroed
VMDKs using the CLI. With the introduction of Reserve Space, you can create thick VMDKs on
NFS datastores. However, VAAI for NAS Reserve Space is not like write same for block. It does
not get the array to do the zeroing on its behalf. When creating a VMDK on a VAAI for NAS array,
selecting Flat sends a Space Reserve NAS VAAI command to the array that guarantees that the
space will be available. This is equivalent to VMware vSphere VMFS (Virtual Machine File
System) lazyzeroedthick, and the blocks are zeroed on first write. Selecting Flat
pre-initialized also sends a Space Reserve NAS VAAI command, but ESXi itself then performs
the zero writing to the VMDK. This is equivalent to a VMFS Thick Provision Eager Zeroed
(eagerzeroedthick).
Note: VAAI NAS Reserve Space enables you to create virtual disks in lazyzeroedthick or
eagerzeroedthick formats on NFS datastores on arrays that support Reserve Space. However,
when you check the disk type on the Virtual Machine Properties dialog box, the Disk Provisioning
section always shows eagerzeroedthick as the disk format no matter which format you selected
during the disk creation. ESXi does not distinguish between lazy zeroed and eager zeroed virtual
disks on NFS datastores.
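The same Reserve Space behavior can also be exercised from the command line; with the plug-in installed, vmkfstools accepts thick disk formats on an NFS datastore (a sketch; the datastore path, disk name, and size are placeholders):

```shell
# Create a 10 GB lazy-zeroed thick disk on an NFS datastore (placeholder path);
# use -d eagerzeroedthick instead to zero the disk up front, or -d thin for thin.
vmkfstools -c 10g -d zeroedthick /vmfs/volumes/hnas_nfs_ds/vm01/vm01.vmdk
```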
Figure 10 - Provisioning options
HDS recommends Thin Provisioned VMDK or Thick Lazy VMDK creation on HNAS. With the HNAS
hardware-accelerated file system, primary deduplication, and the HNAS allocation mechanism,
these formats provide optimum provisioning performance, security, and space efficiency, and
mitigate the need for Thick Eager VMDKs where they were previously required. Thick Eager
VMDKs can still be used if necessary. Note that Thick Lazy VMDKs report as Thin Provisioned
VMDKs in vCenter because of the space efficiency features in the HNAS VAAI implementation;
other vendors' implementations might report Thick Lazy VMDKs as Thick Eager VMDKs.
Extended Stats (NAS)
This enables you to query how much space a VMDK actually consumed on an NFS datastore.
For example, you might create a 100 GB thin VMDK, but actually consume only 25 GB of space
on the array. This was an issue vSphere previously did not address. It was not a necessary
feature for VMFS, because vSphere understands VMFS, but it was needed for NFS. Thin
provisioning primitives were introduced with vSphere 5.0, including raising an alarm when a
thin-provisioned volume reaches 75 percent of capacity at the back end, Thin Provisioning
Stun, and the UNMAP primitive. However, these thin provisioning primitives are for SCSI
only. The VAAI space-threshold alarm is supported only on SCSI datastores. Similarly, VAAI
Thin Provisioning Stun was introduced to detect out-of-space conditions on SCSI LUNs.
However, for NAS datastores, NFS servers can already return an out-of-space error that should
be propagated up the stack. This should induce a virtual machine stun similar to VAAI thin
provisioning. This operation does not need the VAAI NAS plug-in, and should work on all NFS
datastores, whether or not the hosts have VAAI enabled. vSphere Storage DRS also leverages
this event. After the alarm is triggered, vSphere Storage DRS no longer considers those
datastores as destinations for initial placement or ongoing load-balancing of virtual machines.
Finally, the UNMAP primitive is also for SCSI. The reclaiming of dead space is not an issue on
NAS arrays.
The next screenshot shows how to verify that an HNAS NFS datastore is configured for hardware
acceleration. The hardware acceleration is provided by the VAAI adapter for HNAS. It also shows
the storage capabilities that are recognized for that particular datastore.
Figure 11 - Hardware acceleration
Migrating Virtual Machines to HNAS Datastores using
Storage vMotion
When using HNAS NFSv3 storage for VMware datastores, there are several virtual disk format
options to select from. These options are only available when the HNAS VAAI plug-in is
installed on the ESXi server. The best practice for HNAS is to use Thin Provisioned VMDKs or
Thick Lazy VMDKs instead of Thick Eager VMDKs. Using one of these two options reduces
capacity utilization after migration to HNAS.
During the Storage vMotion workflow, ensure Thin Provision is selected as the virtual disk format. See
Figure 12.
Figure 12 - Post Storage vMotion Confirmation
To confirm that the VM was migrated using Thin Provisioned format, use the stat command
and/or vmkfstools from an ESXi host for the VMDK(s) in question. From the vSphere CLI,
navigate to the /vmfs/volumes/… directory and run either command. See below for some
examples:
VMDK after Storage vMotion with Thin Provisioning:

/vmfs/volumes/46a2299b-2f7b7351/vm-eagerzeroblock # stat vm-eagerzeroblock-flat.vmdk
  File: vm-eagerzeroblock-flat.vmdk
  Size: 34359738368   Blocks: 42055168   IO Block: 131072   regular file ...

/vmfs/volumes/46a2299b-2f7b7351/vm-eagerzeroblock # vmkfstools --extendedstat vm-eagerzeroblock.vmdk
Capacity bytes: 34359738368
Used bytes: 21532311552
Unshared bytes: 21532311552
VMDK after Storage vMotion with Thick Eager Zero:

/vmfs/volumes/d1396478-da139ba1/vm-eagerzerothick2 # stat vm-eagerzerothick2-flat.vmdk
  File: vm-eagerzerothick2-flat.vmdk
  Size: 34359738368   Blocks: 67108864   IO Block: 131072   regular file ...

/vmfs/volumes/d1396478-da139ba1/vm-eagerzerothick2 # vmkfstools --extendedstat vm-eagerzerothick2.vmdk
Capacity bytes: 34359738368
Used bytes: 34359738368
Unshared bytes: 34359738368
With Thin Provisioning, in the example above, the system consumes 42055168 blocks (512 bytes
each), reporting about 21 GB used compared to the 34 GB capacity. With Thick Eager, 67108864
blocks are consumed and the system reports the full 34 GB as used (all those zeros).
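The block counts that stat reports are 512-byte units, so the used space can be derived directly (a sketch using the figures from the output above):

```shell
# Convert stat's 512-byte block counts into decimal gigabytes
thin_blocks=42055168
eager_blocks=67108864

thin_used_gb=$((thin_blocks * 512 / 1000000000))     # about 21 GB actually used
eager_used_gb=$((eager_blocks * 512 / 1000000000))   # full 34 GB consumed
echo "$thin_used_gb $eager_used_gb"
```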
vSphere Storage APIs for Storage Awareness (VASA)
support
VMware vSphere 5.0 introduced an API for storage vendors to provide vCenter with management
information about the underlying storage capabilities to aid with administration.
In the next screenshot, you can see how a storage profile “Hitachi High-Performance Storage”
can be assigned per VM to check for compliance. The storage profile consists of HNAS
capabilities exposed by the VASA provider for HNAS.
Figure 13 - VASA support
The next screenshot shows assigning a VM storage profile to the storage capabilities exposed by
the HNAS VASA provider.
Figure 14 - VM Storage Profile
HNAS best practices
You can configure HNAS file systems and their underlying storage in a variety of different ways.
To achieve the best performance, follow these recommendations for configuring HNAS in a
VMware vSphere environment.
General Recommendations
- File system configuration
  - In general, a 4 KB file system block size is recommended. 32 KB can be used in
    instances where all VMs on a specific HNAS file system perform large block requests.
  - Set cache-bias to large (cache-bias --large-files). This requires a reboot and
    optimizes the HNAS metadata cache for large VMDK files.
  - Disable shortname generation and access time maintenance (shortname -g off,
    fs-accessed-time --file-system <file_system> off).
  - Disable the quick start option for HNAS read ahead when VM IO profiles are primarily
    random on an HNAS file system. If the IO profile is sequential, leave the default read
    ahead options enabled.
    - Random: read-ahead --file-system <file_system> --quickstart disable
    - Sequential: read-ahead --file-system <file_system> --default
- NFS exports
  - Do not export the root of the file system; instead, create a directory to be exported.
- File system utilization
  - Maintain at least 10% free space in each file system utilized by ESXi hosts.
- Storage pools
  - Do not mix disk types in the same storage pool.
  - Limit ownership of all file systems that are created on a storage pool to one EVS.
  - Configure a minimum of four (4) System Drives (SD) in a storage pool.
  - Configure one (1) LU/LDEV per RAID group consuming all space (if possible).
- OS alignment
  - Windows 2008 should align automatically.
  - Linux: set partitions to start at block 104 using fdisk.
- HNAS Tiered File System (TFS)
  - This feature enhances heavy concurrent random IO performance.
  - Solid-State Drives (SSD) are strongly recommended for Tier 0.
  - Tier 0 can also leverage SAS drives; however, when using SAS, RAID10 is required.
  - On average, Tier 0 should represent 5% of the capacity of the file system.
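For reference, the file system tuning commands above can be collected into a single HNAS console session (a sketch; <file_system> is a placeholder and the option syntax is as shown earlier in this section; cache-bias requires a reboot to take effect):

```shell
# HNAS console - file system tuning for vSphere (sketch; names are placeholders)
cache-bias --large-files                                      # optimize metadata cache; requires reboot
shortname -g off                                              # disable shortname generation
fs-accessed-time --file-system <file_system> off              # disable access time maintenance
read-ahead --file-system <file_system> --quickstart disable   # only for primarily random VM IO
```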
Storage recommendations
HDS recommends the following storage configuration for VMware vSphere environments:
- Set the RAID stripe chunk size to 64 KB for Hitachi Unified Storage (HUS) systems and
  Hitachi Adaptable Modular Storage (AMS) Logical Units (LU).
- For recommended RAID levels, see the following table:

  Workload                    | HUS 1x0             | HUS VM and VSP
  ----------------------------|---------------------|--------------------
  Random with heavy Read      | RAID5 4+1           | RAID5 3+1
  Sequential with heavy Read  | RAID5 8+1           | RAID5 7+1
  Heavy Write                 | RAID10 2+2 or 4+4   | RAID10 2+2 or 4+4

  Note: Use RAID6 with Hitachi Dynamic Provisioning (HDP) or Hitachi Dynamic Tiering (HDT).
When using HDP or HDT, HDS recommends that you:
- Dedicate an HDP pool solely to the HNAS system for best performance.
- Maintain a ratio of 1 Hitachi Dynamic Provisioning volume for each RAID group (1-to-1)
  that is used in the Dynamic Provisioning pool.
  - When using SAS 10K drives, it is possible to have a ratio of 2-to-1.
  - When using SAS 7.2K drives, keep the ratio at 1-to-1.
- Use Hitachi Dynamic Provisioning in full capacity mode with accelerated wide striping
  enabled (HUS 1x0 Series).
Networking recommendations
Hitachi recommends the following network configurations for VMware vSphere environments:
- 10 Gigabit Ethernet
  - 10 GbE dedicated connections for the Ethernet Storage Network between HNAS, the
    hypervisors, and the Layer 2 switches.
- Session sharing / multipathing with IP addresses and HNAS Enterprise Virtual Server (EVS)
  To achieve an equivalent to multipathing in a VMware/NFS environment (that is, effective
  load sharing and multiple sessions across multiple available physical
  connections/adapters), HDS recommends the following:
  - When HNAS is deployed in a clustered scenario, dedicated EVSs should be deployed on each
    node, and multiple IP addresses should be assigned to each EVS. HNAS NFS datastores
    should be provisioned in vCenter in a round-robin fashion against the EVSs' multiple IP
    addresses to ensure parallelism on the HNAS and increased IP connectivity from the
    hypervisor, resulting in higher overall throughput on the Storage Network. See
    Figure 15.
  - Configure IP hashing on the ESXi NFS network adapters to spread the IP load across the
    multiple 10 GbE links. (IP hashing is used within the VMware vSwitch for active/active
    interface utilization on the hypervisor.)
  - A separate option in addition to IP hashing is load based teaming (LBT). See the VMware
    NFS paper referenced in the introductory section for more details.
Figure 15 - Multiple IP per EVS
- Jumbo frames
  - Each infrastructure device throughout the Storage Network (end-to-end) is configured to
    pass 9000 MTU jumbo frames.
    1. On the HNAS server, log in to the console. To update the configuration to set the
       MTU for TCP and non-TCP packets to 9000, issue the command:
       ipadv -p <agg_number> -m 9000 -n 9000
    2. To check that jumbo frames are being sent and received, and to see the traffic flow
       in the console, issue the command:
       nim nim-mib | grep JUMBO
- Minimizing latency
  - For vSphere 5.0, all Ethernet Storage Network components should reside on the same
    subnet in HDS HNAS configurations (hypervisor vmkernels, HNAS EVS, and so forth).
  - vSphere 5.1 and routed NFS are still being evaluated for best practice consideration.
- Link aggregation
  - HDS HNAS interfaces are configured as vLAGs to the upstream switches (for example,
    Brocade VDX or Cisco Nexus) to ensure connection load-balancing as well as
    availability.
  - Do not use the HNAS round robin port-level load-balancing; instead, use the Normal
    default setting.
- Miscellaneous network features
  - Flow Control: recommended when vSphere servers are utilizing 1 GbE to connect to
    HNAS 10 GbE datastores.
    Note: Flow control needs to be enabled end-to-end.
  - Spanning Tree: unnecessary when utilizing VCS fabric with Brocade VDX switches.
Creation of large VMDK files
In some instances, the ESXi timeout may require changes when creating large VMDKs on HNAS
system releases lower than 11.2. When creating a large VMDK, even though the file is sparse,
HNAS still creates the necessary file system metadata for the VMDK, and in some cases ESXi
may time out. The recommendation is to change the timeout to 30 seconds, which allows
creation of VMDKs up to 2 TB in size. To increase the timeout, use the following commands:
ESXi 4.1 - Use the CLI command: esxcfg-advcfg -s 30 /NFS/SetattrRPCTimeout
ESXi 5.0 - Use the CLI command: esxcli system settings advanced set -o
/NFS/SetAttrRPCTimeout -i 30
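On ESXi 5.x, the same esxcli namespace can read the option back to confirm the change took effect (a sketch; output abbreviated):

```shell
# Read back the current NFS SetAttr RPC timeout value on an ESXi 5.x host
esxcli system settings advanced list -o /NFS/SetAttrRPCTimeout
```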
Starting in HNAS system release 11.2, a new setting is available to speed up creation of
large VMDKs. To enable this option, run the following from the HNAS CLI:
set allow-sparse-metadata-creation true
Note: A defect was discovered with this option. It is recommended that you not enable this option
until further notice.
EVS failover timeout on guest OSes
An EVS on an HNAS node fails over to the other HNAS node in the cluster. This operation does
not normally cause the ESXi NFS datastore to time out; however, the guest OS timeouts should
be set to match the default ESXi timeouts.
To handle the NFS timeout, set the operating system timeout for Windows servers to match the
125-second maximum that is set for the datastore by default. You must set the timeout for
all VMs.
Note: The VMware Tools for Linux 2.6 automatically adjusts the Linux timeouts. See
(http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&exter
nalId=1009465)
To set the timeout:
1. Back up your Windows registry.
2. In the registry, go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk.
3. Right-click and select Edit > Add Value.
4. Set the value name to TimeOutValue.
5. Set the data type to REG_DWORD.
6. Set the data to 120 decimal.
7. Reboot the VM.
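The same registry change can be scripted from an elevated Windows command prompt using reg.exe (a sketch of steps 2 through 6; a reboot is still required afterwards):

```shell
:: Set the SCSI disk timeout to 120 seconds (decimal); run elevated, then reboot the VM
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 120 /f
```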
VMware VMDK thin provisioning
When a VMDK is created on NFS storage, the file is thin provisioned by default. On the HNAS file
system, a thin file, also called a sparse file, is created. To prevent over-allocation, HNAS reports
the full size of the file, even though it is only using a fraction of the space. HNAS also prevents
the creation of sparse files that are larger than the file system size. To disable this behavior and
have HNAS report the thin size of VMDKs, issue the following HNAS command:
true-sparse-files --enable
Note: As of HNAS system release 11.1, true sparse files are set to on by default. Also, when
replicating VMs, make sure to enable true-sparse-files on the destination HNAS system.
HNAS Deduplication
The HNAS system supports primary deduplication of data on the HNAS file systems. VM type
files, such as VMware VMDK files, are ideal for deduplication. As stated earlier, the HNAS file
system supports VMDK thin provisioning. HNAS does not deduplicate the sparse portion of the
VMDK because it is not using any space.
For example:
- A file system contains 100 thin VMDKs of 10 GB, each deployed from the same template.
  - Total space utilization reported by the HNAS system (by default): 1000 GB.
  - Each VMDK contains 1 GB of actual data; the rest is sparse.
  - Each VMDK contains identical data.
- After deduplication, the HNAS system would report the following:
  - Deduplicated: 99 GB.
  - Total utilization: 901 GB.
- With sparse file support enabled, the HNAS system would report the following total space
  utilization:
  - Before deduplication: 100 GB.
  - After deduplication: 1 GB.
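The figures in this example follow from simple arithmetic (a sketch restating the premises above: 100 clones, each 10 GB logical with 1 GB of identical actual data):

```shell
clones=100
logical_gb=10         # provisioned size of each thin VMDK
actual_gb=1           # identical data actually written per VMDK

default_reported=$((clones * logical_gb))          # full sizes reported by default
dedup_saved=$((clones * actual_gb - actual_gb))    # 99 duplicate 1 GB copies removed
after_dedup=$((default_reported - dedup_saved))
sparse_before=$((clones * actual_gb))              # true-sparse reporting, before dedup
sparse_after=$actual_gb                            # one unique copy remains
echo "$default_reported $dedup_saved $after_dedup $sparse_before $sparse_after"
```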
The HNAS system uses block-level deduplication. Identical data may exist, but if the data is
not aligned on the same block boundaries, the HNAS system cannot deduplicate the data.
For example, in the following diagram, blocks 1 and 4 would be deduplicated, but block 3 would
not be deduplicated, even though block 3 contains similar data.
The HNAS system’s file clone feature adds another consideration for deduplication of VMware
VMDKs. File clones allow for more efficient space utilization. Deduplication treats file clones
differently from regular user data. Deduplication only processes diverged blocks. As illustrated in
the following diagram, when a file clone of FileA is created, FileA and Copy of FileA would contain
diverged data.
Note: HNAS file systems which have syslock enabled cannot be deduplicated unless syslock is
disabled.
For more information on deduplication, see the HNAS Deduplication Best Practices Guide.
VMware network optimization in vSphere 4.x
Advanced setting to handle NFS protocol timeout in 4.x (not applicable to vSphere 5.x)
The NFS protocol heartbeats are used to determine whether or not an NFS volume is still
available. You can use the ESXi advanced setting to manage NFS protocol timeouts. When an
NFS failover occurs, HDS storage may take longer to timeout than the VMware default timeout
setting. HDS recommends increasing the default value to 120 seconds to prevent VMs from being
disconnected.
The following variables are tied to the 120-second NFS timeout that HDS recommends:
- NFS.HeartbeatFrequency = 12 seconds
- NFS.HeartbeatTimeout = 5 seconds
- NFS.HeartbeatMaxFailures = 10
The NFS protocol heartbeat feature functions in this manner:
- Every NFS.HeartbeatFrequency (12 seconds), the ESXi host checks to see that the NFS
  datastore is reachable.
- The heartbeats expire after NFS.HeartbeatTimeout (5 seconds), after which another
  heartbeat is sent.
- If NFS.HeartbeatMaxFailures (10) heartbeats fail in a row, the datastore is marked as
  unavailable and the VMs become unresponsive. This means that the NFS datastore can be
  unavailable for a maximum of 125 seconds before being marked unavailable.
- When an NFS timeout occurs, the VM recognizes a non-responsive SCSI disk on the vSCSI
  adapter. The disk timeout is the length of time that the guest OS will be affected due to
  the disk becoming non-responsive.
- See the section titled "EVS failover timeout on guest OSes" for details on adjusting
  guest OS settings.
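The 125-second figure follows directly from those three settings (a sketch of the arithmetic: ten 12-second heartbeat intervals plus the 5-second timeout of the final heartbeat):

```shell
frequency=12      # NFS.HeartbeatFrequency, seconds between heartbeats
timeout=5         # NFS.HeartbeatTimeout, seconds before a heartbeat expires
max_failures=10   # NFS.HeartbeatMaxFailures, consecutive failures allowed

max_unavailable=$((frequency * max_failures + timeout))
echo "$max_unavailable"   # prints 125
```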
Summary
The NFS protocol is mature, simple, and well understood. You can use the existing data
network without additional capital expenditure. The network can be either an existing
network with VLANs or a private network with VLANs. Using a private network or VLANs
provides more secure data access and better performance.
The VMware vSphere software is capable of using a NAS/NFS datastore to create VMs. The
VMDK files are stored in the NFS datastore that is exported from the NAS server.
The NAS system and NFS protocol also take advantage of key VMware features and products,
including VMware HA, DR, vMotion, and Storage vMotion. Using vMotion, you can perform live
migration between servers, as well as hardware maintenance, without scheduling any downtime.
All of these features are supported by HDS storage systems. The combination of HNAS NFS
simplicity and HDS resilient SAN storage is powerful for Ethernet/NFS-based VMware
environment use cases.
Hitachi Data Systems
Corporate Headquarters
2845 Lafayette Street
Santa Clara, California 95050-2639
U.S.A.
www.hds.com
Regional Contact Information
Americas
+1 408 970 1000
info@hds.com
Europe, Middle East, and Africa
+44 (0)1753 618000
info.emea@hds.com
Asia Pacific
+852 3189 7900
hds.marketing.apac@hds.com
MK-92HNAS028-01