Front cover

VersaStack Solution for File Storage Using IBM Storwize V5030 and Windows Server 2016
Warren Hawkins
Adam Reid
Redpaper
International Technical Support Organization
VersaStack Solution for File Storage Using IBM
Storwize V5030 and Windows Server 2016
August 2017
REDP-5442-00
Note: Before using this information and the product it supports, read the information in “Notices” on page v.
First Edition (August 2017)
This edition applies to the components listed in Table 1-1, “Components and software versions” on page 2.
This document was created or updated on August 7, 2017.
© Copyright International Business Machines Corporation 2017. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .v
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Chapter 1. Implementing Windows File Services on VersaStack . . . . . . . . . . . . . . . . . . 1
1.1 Hardware. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 VersaStack overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3.1 Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3.2 Compute options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Windows Server 2016 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4.1 High availability provided by failover clustering. . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4.2 File sharing methods. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4.3 Feature comparison and suitability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4.4 Comparison of Storwize V7000 Unified to Windows Server 2016 . . . . . . . . . . . . . 7
1.4.5 Storage features of Windows Server 2016 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.5 IBM Storwize V5000 Gen2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.1 IBM Spectrum Virtualize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5.2 IBM Storwize V5000 Gen2 features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.3 Storage scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.5.4 Drive compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Chapter 2. Configuring your VersaStack for Windows File Services . . . . . . . . . . . . . . 23
2.1 Configuring IBM Storwize V5030 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.1.1 System overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.1.2 Allocating storage capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.1.3 Microsoft Offloaded Data Transfer (ODX) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.1.4 Creating host clusters and hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.1.5 Creating SAN boot volumes and mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.1.6 I/O throttling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.1.7 Adding a storage expansion enclosure for scalability . . . . . . . . . . . . . . . . . . . . . . 41
2.2 Configuring Cisco UCS compute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.2.1 Changes and additions to the VersaStack V5030 deployment guide . . . . . . . . . . 43
2.3 Configuring Windows Server 2016 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.3.1 Checking that the VIC drivers are installed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.3.2 Adding roles and features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.3.3 Installing the IBM Subsystem Device Driver Device Specific Module for
Windows Server 2016 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.3.4 Configuring failover cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.3.5 Creating shared volumes for File Services on Storwize V5000 . . . . . . . . . . . . . . 60
2.3.6 Preparing volumes for use in file shares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.3.7 Creating a File Server for general use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION “AS IS”
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in
certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation, registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at “Copyright
and trademark information” at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
Easy Tier®
FlashCopy®
HyperSwap®
IBM®
IBM FlashSystem®
IBM Spectrum™
IBM Spectrum Scale™
IBM Spectrum Virtualize™
Real-time Compression™
Redbooks®
Redpaper™
Redbooks (logo)®
Storwize®
System Storage®
The following terms are trademarks of other companies:
Intel, Intel Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM® Redpaper™ publication can help you understand how to implement VersaStack
with IBM Storwize® V5030 and Microsoft Windows Server 2016 for file storage. This is one
of two solutions, powered by IBM Spectrum™ Virtualize, that will replace the Storwize V7000
Unified.
Note: To prevent the duplication of effort, be sure to read and become familiar with the
VersaStack deployment guide (VersaStack with Cisco UCS Mini and IBM Storwize V5000
Gen2, Direct Attached SAN Storage), which is available from the Cisco website:
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/versastack_
n5k_2ndgen_mini.html
Authors
This paper was produced by a team of specialists from around the world working at the
International Technical Support Organization, San Jose Center.
Warren Hawkins has a background in infrastructure support in
predominantly Windows and VMware environments. He also
has 15 years of experience working in second-line and
third-line support in both public and private sector
organizations. Since joining IBM in 2013, Warren has played a
crucial part in customer engagements and, using his field
experience, has established himself as the Test Lead for the
IBM Spectrum Virtualize™ product family, focusing on
clustered host environments.
Adam Reid has more than 17 years of computer engineering
experience. Focused more recently on IBM Spectrum
Virtualize, he’s been deeply involved with VMware and the
testing and configuration of virtualized environments pivotal to
the future of software-defined storage. Adam has designed,
tested, and validated systems to meet the demands of a wide
range of mid-range and enterprise environments.
Since writing this paper, Adam has left IBM.
Jon Tate is a Project Manager for IBM System Storage® SAN
Solutions at the International Technical Support Organization
(ITSO), San Jose Center. Before joining the ITSO in 1999, he
worked in the IBM Technical Support Center, providing Level
2/3 support for IBM mainframe storage products. Jon has over
30 years of experience in storage software and management,
services, and support, and he is an IBM Certified IT Specialist, an IBM SAN
Certified Specialist, and a certified Project Management Professional (PMP).
He is also the UK Chairman of the Storage Networking Industry Association.
This project was led by Jon, IBM Redbooks Project Leader.
Thanks to the following people for their contributions to this project:
John Clifton
Drew Sens
IBM Systems
Now you can become a published author, too!
Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time! Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks® publications in one of the following ways:
򐂰 Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
򐂰 Send your comments in an email to:
redbooks@us.ibm.com
򐂰 Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
򐂰 Find us on Facebook:
http://www.facebook.com/IBMRedbooks
򐂰 Follow us on Twitter:
http://twitter.com/ibmredbooks
򐂰 Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
򐂰 Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
򐂰 Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Chapter 1. Implementing Windows File Services on VersaStack
Over the past few years, several trends have emerged: All-flash array (AFA) vendors are
expanding to the midrange; software-defined storage is becoming mainstream; and storage
vendors are embracing hybrid cloud.
IBM is continually optimizing its comprehensive storage portfolio by investing in the workloads
of tomorrow. This means that IBM is focusing on the key growth initiatives of all-flash,
software-defined storage, object storage, converged infrastructure, and cognitive computing.
With these focus areas, IBM is moving from the IBM Storwize V7000 Unified toward two
solutions that are tailored to the use cases of IBM clients. These solutions, powered by IBM
Spectrum Virtualize, will replace the IBM Storwize V7000 Unified. The power of the
VersaStack integrated infrastructure provides increased scalability over the Storwize V7000
Unified, plus all-flash array options.
These new solutions are solid bases from which to tailor and grow depending on specific
needs of clients. One solution is focused on interaction with the IBM file and object storage
platform (IBM Spectrum Scale™). Another solution is focused on the strengths of the IBM
Storwize controller and the ease of using Windows Server 2016. This second solution is the
focus of this Redpaper publication.
Figure 1-1 on page 2 provides a high-level overview of the key components that enable the
file services on the VersaStack file offering.
Figure 1-1 High-level overview of the services on the VersaStack file offering
1.1 Hardware
Figure 1-1 illustrates the use case of directly attaching an IBM Storwize V5030 storage
controller to the Cisco UCS Mini, removing the requirement to have a dedicated Fibre
Channel (FC) switching environment. The Cisco UCS Mini Chassis (5108) can house up to
eight half-width Cisco UCS B-Series B200 M4 Blade Servers.
Scalability can be achieved by adding storage capacity (Storwize V5000 disk expansion
enclosures) to the existing Storwize V5030 storage controller, by adding a second Storwize
V5030 storage controller, or by adding a second UCS Chassis and/or UCS C-Series servers.
Table 1-1 details the components and software versions used in the VersaStack system
documented in this paper.
Table 1-1 Components and software versions

Layer     Device                          Version    Details
Compute   Cisco UCS fabric interconnect   3.1(2e)    Embedded management
Compute   Cisco UCS B200 M4               3.1(22a)   Software bundle release
Compute   Cisco fNIC                      3.0.0.7    FCoE driver for Cisco VIC
Compute   Cisco eNIC                      4.0.0.2    Ethernet driver for Cisco VIC
Storage   IBM Storwize V5030              7.8.1.0    Spectrum Virtualize GUI
1.2 Software
In addition to the embedded software listed in Table 1-1 on page 2, you also install Microsoft
Windows Server 2016 onto two Cisco UCS B200 M4 blades to create a Windows failover
cluster, providing a highly available, high-performing, and scalable file services solution.
Installing Windows Server 2016 is outside the scope of this paper.
1.3 VersaStack overview
The VersaStack solution combines the innovation of Cisco UCS Integrated Infrastructure with
the award winning, feature rich storage systems powered by IBM Spectrum Virtualize. The
VersaStack solution is thoroughly validated and verified by the VersaStack solution engineers,
who comprehensively test and document the validated design, culminating in the Cisco
Validated Design (CVD).
CVDs provide the foundation for systems design, based on common use cases or current
engineering system priorities. They incorporate a broad set of technologies, features, and
applications to address customer needs.
Information: For more information about VersaStack Cisco Validated Designs, see the
VersaStack Solution web page of the Cisco website.
This paper is based on the VersaStack with Cisco UCS Mini and IBM V5000 2nd Generation
CVD. The assumption is that you are familiar with the VersaStack solution design. The
configuration topics in this paper use the VersaStack V5000 CVD as the reference design. As
such, this paper does not discuss detailed VersaStack configurations that might be covered in
the VersaStack V5000 CVD.
Information: You can download the VersaStack with Cisco UCS Mini and IBM V5000 2nd
Generation CVD from the CVDs tab section of the VersaStack Solution web page.
1.3.1 Scalability
VersaStack is designed to be a highly scalable solution. With Cisco UCS Mini and
direct-attached IBM Storwize V5000 2nd Generation, a maximum of 16 half-width blade
servers within two Cisco UCS 5108 chassis are supported.
Figure 1-2 on page 4 shows the maximum compute and storage scale-up possible with the
VersaStack V5000, using a direct-attached Storwize V5030 controller. The preferred practice
is to provision two ports, one from each node of the Storwize V5030 controller, to two
of the four ports on each Cisco UCS 6324 (Mini) Fabric Interconnect. The remaining two
ports on the Fabric Interconnects are provisioned for Ethernet traffic. To continue to scale out
further, use the 40GbE Quad SFP (QSFP+) ports on the Fabric Interconnects to connect a
second UCS Chassis and up to two Cisco UCS C-series rack-mount servers, as shown in
Figure 1-2 on page 4.
Figure 1-2 Maximum scale-up configuration for the VersaStack V5030
If you require compute and storage beyond the scale-up maximums of the VersaStack V5030
configuration, reference the VersaStack CVDs based on the UCS Classic configuration.
Consider the recent VersaStack with IBM SAN Volume Controller when looking to unlock the
full potential of UCS compute and IBM storage; this architecture enables a single Cisco UCS
domain to scale up to 20 chassis (up to 160 blades) with minimal complexity.
1.3.2 Compute options
The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco
Unified Computing System, delivering a scalable and flexible blade server chassis. At the time
of writing, the UCS Mini chassis supports the B22 M3, B200 M3, B420 M3, and B200 M4
blade servers. On the Cisco UCS Mini chassis, the I/O bays to the rear are used to
accommodate the UCS 6324 Fabric Interconnect modules. A passive mid-plane provides up
to 80 Gbps of I/O bandwidth per server slot, providing up to 160 Gbps of I/O bandwidth
across two slots. The chassis is capable of supporting future 40 Gigabit Ethernet standards.
The configuration for the validation in this paper uses the compute power of Cisco UCS B200
M4 Blade Servers. The B200 M4 Blades use the power of the latest Intel Xeon E5-2600 v4
Series processor family CPUs with up to 1.5 TB of RAM (using 64 GB DIMMs), and optionally
two solid-state drives (SSDs) or hard disk drives (HDDs). Note that for this paper the B200
M4 Blades were configured to SAN boot from the Storwize V5030 storage controller.
Information: For more information about the compute configuration used in this paper, see
the VersaStack with Cisco UCS Mini and IBM V5000 2nd Generation Design Guide.
In addition to the Cisco UCS B-Series Blade Servers, several other options are available for
adding compute power externally to the UCS Blade Server Chassis, depending on your
budget. The Cisco UCS C-Series M4 Rack Servers are available in three models: C220 M4,
C240 M4, and C440 M4.
Information: See the Cisco UCS C-Series Rack Servers web page.
1.4 Windows Server 2016
Windows File Services have played an integral role in the data center for decades. Built on
the mature, established Server Message Block (SMB) protocol, approximately 20% of all
Windows Servers have the File Services role installed. With Windows Server 2016,
enhancements to SMB provide significant performance and security benefits over previous
Windows Server versions. Windows Server 2016 delivers dependable SMB, NFS, and
scale-out file storage services easily and efficiently.
1.4.1 High availability provided by failover clustering
Failover clustering in Windows Server is a method of grouping multiple Windows Server
instances, referred to as nodes, to perform one or more functions or roles. Typically, a failover
cluster will provide high availability to applications, functions, or infrastructure services either
by sharing the role between multiple nodes in an active-active model, or by allowing an
active-passive role to be migrated between nodes at any given time.
In this way, the administrator can temporarily remove a particular Windows Server instance
from active service, for example for maintenance, repair, or upgrade, without compromising
the availability of the role that it was providing or impacting the wider user base or
infrastructure.
The IBM Storwize V7000 Unified file solution supported a maximum of two filer modules,
whereas the VersaStack implementation detailed in this document offers scalability of up to
16 Windows Server nodes in a single failover cluster. This scalability provides an increased
expansion opportunity for both the storage capacity and the processing power to cope with
modern workloads in today’s data center.
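The clustering workflow described above can be sketched in PowerShell. This is a minimal illustration only; the node names, cluster name, and IP address are placeholders, not values from this paper's lab configuration.

```powershell
# Install the failover clustering feature on each Windows Server 2016 node
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# Validate the candidate nodes before forming the cluster
Test-Cluster -Node "FSNODE1", "FSNODE2"

# Form a two-node cluster; further nodes can be joined later with Add-ClusterNode
New-Cluster -Name "FSCLUSTER" -Node "FSNODE1", "FSNODE2" -StaticAddress "192.168.10.50"
```

Running Test-Cluster first produces a validation report that Microsoft support expects for supported configurations.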
1.4.2 File sharing methods
Windows Server 2016 has two types of file sharing services, each with unique capabilities
that are best suited to specific use cases. Each use case has different operational storage
requirements that can adversely impact one another. Depending on the use case, choosing
the right method of file sharing is important to ensure that the solution is fit for purpose.
The types of file sharing options are as follows:
򐂰 Scale-Out File Server
򐂰 File Services for General Use
With each file server option, you have the choice of creating either Network File System
(NFS) or Server Message Block (SMB) file shares. NFS is best suited to predominantly
Linux-based environments, or to environments where a mixture of client devices running
different operating systems (for example, both Windows and Linux) accesses the file shares.
However, in predominantly Windows environments, SMB is the preferred file sharing method.
Scale-Out File Server for application data is ideal for server applications that keep files open
for a sustained period of time, doing mostly data operations with infrequent metadata access
on the file system. All file shares are simultaneously available on all Windows Server nodes
and therefore considered active-active. Because they are concurrently accessible through
multiple Windows instances, they are the preferred file server type when deploying either
Hyper-V over SMB or Microsoft SQL Server over SMB.
File Services for General Use incorporates the traditional, established file sharing service that
has been present in Windows Server since the introduction of failover clustering. It is
designed for a typical file sharing scenario in which individual files are mostly read and are
written only when they are saved. This type of clustered file server, and therefore all the
associated file shares, is available to only one failover cluster node at any given time, and is
therefore sometimes referred to as active-passive. This is the preferred file server type for
user home directories, team folders, and, as the name suggests, general use.
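The distinction between the two file server types carries through to how their shares are created. As an illustrative sketch (the share names, paths, and security groups are placeholders):

```powershell
# Share on a File Server for general use (active-passive)
New-SmbShare -Name "TeamData" -Path "E:\Shares\TeamData" -FullAccess "DOMAIN\FileUsers"

# Continuously available share on a Scale-Out File Server (active-active),
# placed on a Cluster Shared Volume and suited to Hyper-V or SQL Server over SMB
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\VMStore" `
    -ContinuouslyAvailable $true -FullAccess "DOMAIN\HyperVHosts"
```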
1.4.3 Feature comparison and suitability
Figure 1-3 compares features of the two types of file sharing services offered by Windows
Server 2016.
Figure 1-3 Windows Server 2016 feature comparison
Feature notes: Folder Redirection, Offline Files, Roaming User Profiles, or Home
Directories generate a large number of writes that must be immediately written to disk
(without buffering) when using continuously available file shares, reducing performance as
compared to general purpose file shares. Continuously available file shares are also
incompatible with File Server Resource Manager and PCs running Windows XP.
Additionally, Offline Files might not transition to offline mode for 3 - 6 minutes after a user
loses access to a share, which might frustrate users who are not yet using the Always
Offline mode of Offline Files.
1.4.4 Comparison of Storwize V7000 Unified to Windows Server 2016
Figure 1-4 compares Storwize V7000 Unified and Windows Server 2016 features.
Figure 1-4 Feature comparison
1.4.5 Storage features of Windows Server 2016
This section describes some of the storage features of the scale-out file servers.
Storage Replica
Storage Replica enables storage-agnostic, block-level, synchronous replication between
clusters or servers for disaster preparedness and recovery, and also stretching of a failover
cluster across sites for high availability. Synchronous replication enables mirroring of data in
physical sites with crash-consistent volumes, ensuring zero data loss at the file system level.
Asynchronous replication allows site extension beyond metropolitan ranges.
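As a sketch of configuring synchronous replication between two servers (the computer, replication group, and volume names are hypothetical examples, not values from this paper):

```powershell
# Replicate data volume E: (with log volume F:) from a source server to a
# destination server; synchronous replication is the default mode
New-SRPartnership -SourceComputerName "FSNODE1" -SourceRGName "RG01" `
    -SourceVolumeName "E:" -SourceLogVolumeName "F:" `
    -DestinationComputerName "DRNODE1" -DestinationRGName "RG02" `
    -DestinationVolumeName "E:" -DestinationLogVolumeName "F:"
```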
Storage Quality of Service (QoS)
Storage QoS provides a way to centrally monitor and manage storage performance for virtual
machines using Hyper-V and the Scale-Out File Server roles. The feature automatically
improves storage resource fairness between multiple virtual machines using the same file
server cluster and allows specific minimum and maximum performance goals to be
configured in units of normalized I/O operations per second (IOPS).
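For example, a minimum/maximum IOPS policy can be created on the file server cluster and attached to a virtual machine's disks from the Hyper-V host (the policy and VM names below are illustrative):

```powershell
# Create a policy with a floor of 100 and a cap of 1000 normalized IOPS
$policy = New-StorageQosPolicy -Name "SilverTier" -MinimumIops 100 -MaximumIops 1000

# Attach the policy to every virtual hard disk of a VM (run on the Hyper-V host)
Get-VMHardDiskDrive -VMName "AppVM01" |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
```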
Data Deduplication
Data Deduplication is a feature of Windows Server 2016 that can help reduce the impact of
redundant data on storage costs. When enabled, Data Deduplication optimizes free space on
a volume by examining the data on the volume for duplication. When identified, duplicated
portions of the volume’s data set are stored once and are (optionally) compressed for
additional savings. Data Deduplication optimizes redundancies without compromising data
fidelity or integrity.
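Enabling the feature on a volume is a short task; a minimal sketch follows, assuming an E: data volume:

```powershell
# Install the deduplication role service, then enable it on the target volume
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "E:" -UsageType Default

# After the first optimization job runs, report the space savings
Get-DedupStatus -Volume "E:" | Select-Object SavedSpace, SavingsRate
```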
General-purpose file servers
This section describes some of the storage features of the general-purpose file servers.
Work Folders
With Work Folders users can store and access work files on personal computers and devices,
often referred to as bring-your-own device (BYOD), in addition to corporate PCs. Users gain a
convenient location to store work files, and users can access the files from anywhere.
Organizations maintain control over corporate data by storing the files on centrally managed
file servers, and optionally specifying user device policies such as encryption and lock-screen
passwords.
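A sync share that enforces the device policies mentioned above might be published as follows (the path, share name, and group are placeholders):

```powershell
# Install the Work Folders role service and publish a sync share that
# requires device encryption and a lock-screen password
Install-WindowsFeature -Name FS-SyncShareService
New-SyncShare -Name "WorkFolders" -Path "E:\WorkFolders" -User "DOMAIN\WFUsers" `
    -RequireEncryption $true -RequirePasswordAutoLock $true
```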
Offline Files, Folder Redirection, and Roaming User Profiles
Folder Redirection and Offline Files are used together to redirect the path of local folders
(such as the Documents folder) to a network location, while caching the contents locally for
increased speed and availability. Roaming User Profiles is used to redirect a user profile to a
network location.
DFS Replication
With DFS Replication, you can efficiently replicate folders (including those referred to by a
DFS namespace path) across multiple servers and sites. DFS Replication uses a
compression algorithm known as remote differential compression (RDC). RDC detects
changes to the data in a file, and it enables DFS Replication to replicate only the changed file
blocks instead of the entire file.
DFS Namespaces
With DFS Namespaces, you can group shared folders that are located on separate servers
into one or more logically structured namespaces. Each namespace appears to users as a
single shared folder with a series of subfolders. However, the underlying structure of the
namespace can consist of numerous file shares that are located on separate servers and in
multiple sites.
File Classification
File Classification, also known as File Classification Infrastructure (FCI), provides insight
into your data by automating classification processes so that you can manage your data more
effectively. You can classify files and apply policies based on this classification. Example
policies include dynamic access control for restricting access to files, file encryption, and file
expiration. Files can be classified automatically by using file classification rules or manually
by modifying the properties of a selected file or folder.
File Screens
File Screens help you control the types of files that users can store on a file server. You can
limit the file name extensions that can be stored on your file shares. For example, you can create a file
screen that does not allow files with an MP3 extension to be stored in personal shared folders
on a file server.
File Management Tasks
File Management Tasks enable you to apply a conditional policy or action to files based on
their classification. The conditions of a file management task include the file location, the
classification properties, the date the file was created, the last modified date of the file, or the
last time the file was accessed. The actions that a file management task can take include the
ability to expire files, encrypt files, or run a custom command.
Quotas
With quotas, you can limit the space that is allowed for a volume or folder, and they can be
automatically applied to new folders that are created on a volume. You can also define quota
templates that can be applied to new volumes or folders.
Storage reports
Storage reports are used to help you identify trends in disk usage and how your data is
classified. You can also monitor a selected group of users for attempts to save unauthorized
files.
File systems, protocols, and miscellaneous features
This section describes file systems, protocols, and miscellaneous other features.
Resilient File System (ReFS)
ReFS is a resilient file system that maximizes data availability, scales efficiently to very large
data sets across diverse workloads, and provides data integrity by means of resiliency to
corruption (regardless of software or hardware failures).
Server Message Block (SMB)
SMB is a network file sharing protocol that allows applications on a computer to read and
write to files and to request services from server programs in a computer network. The SMB
protocol can be used on top of the TCP/IP protocol or other network protocols. Using the SMB
protocol, an application (or the user of an application) can access files or other resources at a
remote server. In this way, applications can read, create, and update files on the remote
server. An application can also communicate with any server program that is set up to receive an SMB
client request.
Storage-class memory
Storage-class memory, such as NVDIMM-N devices, provides performance similar to
computer memory (really fast), but with the data persistence of normal storage drives.
Windows treats storage-class memory similarly to normal drives (just faster), but some
differences exist in the way device health is managed.
BitLocker Drive Encryption
BitLocker Drive Encryption stores data on volumes in an encrypted format, so the data
remains protected even if the computer is tampered with or the operating system is not
running. This feature helps protect against offline attacks, which are attacks made by
disabling or circumventing the installed operating system or by physically removing the hard
drive to attack the data independently of the system.
New Technology File System (NTFS)
NTFS, the primary file system for recent versions of Windows and Windows Server, provides
a full set of features including security descriptors, encryption, disk quotas, and rich
metadata, and can be used with Cluster Shared Volumes (CSV) to provide continuously
available volumes that can be accessed simultaneously from multiple nodes of a failover
cluster.
Network File System (NFS)
NFS provides a file sharing solution for enterprises that have heterogeneous environments
that consist of both Windows and non-Windows computers.
1.5 IBM Storwize V5000 Gen2
The IBM Storwize V5000 Gen2 (second generation) solution is a modular entry-level and
midrange storage solution that includes the capability to virtualize both its own internal
Redundant Array of Independent Disks (RAID) storage and existing external storage area
network (SAN) attached storage. It is a highly flexible, scalable, easy to use, virtualized hybrid
storage system that is designed to enable midsize organizations to overcome their storage
challenges with advanced functionality.
IBM Storwize V5000 Gen2 offers improved performance and efficiency, enhanced security,
and increased scalability with a set of three models to deliver a range of performance,
scalability, and functional capabilities:
򐂰 IBM Storwize V5010
򐂰 IBM Storwize V5020
򐂰 IBM Storwize V5030
The Storwize V5000 Gen2 provides the flexibility to start small and keep growing while using
existing storage investments. Plus, to enable organizations with midsize workloads to achieve
advanced performance, IBM Storwize V5030F provides an affordably priced all-flash solution.
The three IBM Storwize V5000 Gen2 models offer a range of performance, scalability, and
functional capabilities. Figure 1-5 on page 11 summarizes features of the Storwize V5000
Gen2 models.
Figure 1-5 Summary of V5000 Gen2 features
IBM Storwize V5000 is designed for software-defined environments and is built with IBM
Spectrum Virtualize software. The IBM Storwize family is an industry-leading solution for
storage virtualization that includes technologies to complement and enhance virtual
environments, delivering a simpler, more scalable, and cost-efficient IT infrastructure.
Capacity scalability is improved with the release of the IBM 2077 Storwize V5000 HD LFF
Expansion Enclosure Model 92F, which enables you to increase the density and capacity of
your Storwize V5000 system. It contains ninety-two 2.5-inch or 3.5-inch SAS drive slots in a
5U, 19-inch rack-mount enclosure.
Figure 1-6 shows the IBM Storwize V5030 controller, the storage model used for this
VersaStack solution.
Figure 1-6 IBM Storwize V5030 controller
IBM Storwize V5030 control enclosure models offer the highest levels of performance,
scalability, and functionality:
򐂰 Two 6-core processors and up to 64 GB of cache
򐂰 Support for 504 drives per system with the attachment of 20 Storwize V5000 expansion
enclosures and 1008 drives with a two-way clustered configuration
򐂰 External virtualization to consolidate and provide Storwize V5000 capabilities to existing
storage infrastructures
򐂰 IBM Real-time Compression™ for improved storage efficiency
򐂰 Encryption of data at rest stored within the Storwize V5000 system and externally
virtualized storage systems
All Storwize V5000 Gen2 control enclosures include these components:
򐂰 Dual-active, intelligent node canisters with mirrored cache
򐂰 Ethernet ports for iSCSI connectivity
򐂰 Support for 16 Gb FC, 12 Gb SAS, 10 Gb iSCSI/FCoE, and 1 Gb iSCSI for additional I/O
connectivity
򐂰 Twelve 3.5-inch (LFF) drive slots or twenty-four 2.5-inch (SFF) drive slots within each 2U,
19-inch rack mount enclosure
򐂰 Support for the attachment of Gen2 Storwize V5000 LFF and SFF 12 Gb SAS expansion
enclosures, including the 2077-92F 5U High Density expansion enclosure
򐂰 Support for a rich set of IBM Spectrum Virtualize functions including thin provisioning, IBM
Easy Tier, IBM FlashCopy®, and remote mirroring
򐂰 Power supplies: 100 - 240V AC or -48V DC
򐂰 Warranty: one or three years with customer replaceable units (CRU) and on-site service
1.5.1 IBM Spectrum Virtualize
All Storwize V5000 functional capabilities are provided through IBM Spectrum Virtualize
Software for Storwize V5000. IBM Spectrum Virtualize is a proven offering that has been
available for many years in IBM SAN Volume Controller, the IBM Storwize family of storage
solutions, IBM FlashSystem® V9000, and VersaStack. Considering these established
storage offerings, more than 130,000 systems worldwide are running IBM Spectrum
Virtualize and are delivering better than “five nines” of availability while managing more
than 5.6 exabytes (EB) of data.
Key functions of IBM Spectrum Virtualize include these items:
򐂰 Virtualization of Fibre Channel attached external storage arrays with support for over 400
storage controllers (from IBM and other vendors)
򐂰 Management of virtualized storage as a single storage pool, integrating “islands of storage” and
easing the management and optimization of the overall pool
򐂰 Real-time Compression for in-line, real-time compression to improve capacity utilization
򐂰 Virtual Disk Mirroring for two redundant copies of a LUN and higher data availability
򐂰 Stretched Cluster and IBM HyperSwap® for high availability among physically separated
data centers
򐂰 IBM Easy Tier® for automatic and dynamic data tiering
򐂰 Distributed RAID for better availability and faster rebuild times
򐂰 Encryption to help improve security of internal and externally virtualized capacities
򐂰 FlashCopy snapshots for local data protection
򐂰 Remote Mirror for synchronous or asynchronous remote data replication and disaster
recovery through both Fibre Channel and IP ports with offerings that use IBM Spectrum
Virtualize software
򐂰 Clustering for performance and capacity scalability
򐂰 Online, transparent data migration to move data among virtualized storage systems
without disruptions
򐂰 Common “look and feel” with other offerings that use IBM Spectrum Virtualize software
򐂰 IP Link compression to improve usage of IP networks for remote-copy data transmission.
Information: See the IBM Spectrum Virtualize software web page.
1.5.2 IBM Storwize V5000 Gen2 features
This section describes the features of the IBM Storwize V5000 Gen2. Different models offer a
different range of features. Figure 1-5 on page 11 compares features at a high level.
Mirrored volumes
IBM Storwize V5000 Gen2 provides a storage volume mirroring function, which enables a
volume to have two physical copies. Each volume copy can belong to a different storage pool
and be on a separate physical storage system to provide a high-availability (HA) solution.
Each mirrored copy can be a generic, thin-provisioned, or compressed volume copy.
When a host system issues a write operation to a mirrored volume, IBM Storwize V5000
Gen2 writes the data to both copies. When a host system issues a read to a mirrored volume,
IBM Storwize V5000 Gen2 requests it from the primary copy. If one of the mirrored volume
copies is temporarily unavailable, IBM Storwize V5000 Gen2 automatically uses the
alternative copy without any outage for the host system. When the mirrored volume copy is
repaired, IBM Storwize V5000 Gen2 synchronizes the data again. A mirrored volume can be
converted to a non-mirrored volume by deleting one copy or by splitting away one copy to
create a non-mirrored volume.
The use of mirrored volumes can also assist with migrating volumes between storage pools
that have different extent sizes. Mirrored volumes can also provide a mechanism to migrate
fully allocated volumes to thin-provisioned or compressed volumes without any host outages.
The Volume Mirroring feature is included as part of the base software, and no license is
required.
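The write-to-both, read-from-primary, and resynchronization behavior described above can be sketched as follows. The class and method names are invented for illustration and are not part of any IBM interface:

```python
# Minimal model of a mirrored volume: writes land on every online copy,
# reads come from the primary (or the alternate if the primary is
# unavailable), and a repaired copy is resynchronized from the survivor.
class MirroredVolume:
    def __init__(self):
        self.copies = [dict(), dict()]   # two physical copies
        self.online = [True, True]
        self.primary = 0

    def write(self, lba, data):
        for i, copy in enumerate(self.copies):
            if self.online[i]:
                copy[lba] = data

    def read(self, lba):
        src = self.primary if self.online[self.primary] else 1 - self.primary
        return self.copies[src].get(lba)

    def repair(self, i):
        """Bring copy i back online and resync it from the other copy."""
        self.copies[i] = dict(self.copies[1 - i])
        self.online[i] = True

vol = MirroredVolume()
vol.write(0, b"data1")
vol.online[0] = False           # primary copy temporarily unavailable
vol.write(1, b"data2")          # write lands only on the surviving copy
assert vol.read(1) == b"data2"  # read transparently uses the alternate copy
vol.repair(0)                   # resynchronize the repaired copy
assert vol.copies[0] == vol.copies[1]
```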
Thin Provisioning
Volumes can be configured to be thin-provisioned or fully allocated. A thin-provisioned volume
behaves as though it were a fully allocated volume in terms of read/write I/O. However, when
a volume is created, the user specifies two capacities: the real capacity of the volume and its
virtual capacity.
The real capacity determines the quantity of MDisk extents that are allocated for the volume.
The virtual capacity is the capacity of the volume that is reported to IBM Storwize V5000
Gen2 and to the host servers. The real capacity is used to store the user data and the
metadata for the thin-provisioned volume. The real capacity can be specified as an absolute
value or a percentage of the virtual capacity.
The Thin Provisioning feature can be used on its own to create over-allocated volumes, or it
can be used with FlashCopy. Thin-provisioned volumes can be used with the mirrored volume
feature, also.
A thin-provisioned volume can be configured to auto-expand, which causes the IBM Storwize
V5000 Gen2 to automatically expand the real capacity of a thin-provisioned volume as it
gets used. This feature prevents the volume from going offline. Auto-expand attempts to
maintain a fixed amount of unused real capacity on the volume. This amount is known as
the contingency capacity. When the thin-provisioned volume is created, the IBM
Storwize V5000 initially allocates only 2% of the virtual capacity in real physical storage.
The contingency capacity and auto-expand features seek to preserve this 2% of free space
as the volume grows.
If the user modifies the real capacity, the contingency capacity is reset to be the difference
between the used capacity and real capacity. In this way, the autoexpand feature does not
cause the real capacity to grow much beyond the virtual capacity.
A volume that is created with a zero-contingency capacity goes offline when it must expand. A
volume with a non-zero contingency capacity stays online until the contingency capacity is used up. To support the
autoexpansion of thin-provisioned volumes, the volumes themselves have a configurable
warning capacity. When the used capacity of the volume exceeds the warning capacity, a
warning is logged. For example, if a warning of 80% is specified, the warning is logged when
20% of the free capacity remains. This approach is similar to the capacity warning that is
available on storage pools.
A thin-provisioned volume can be converted to either a fully allocated volume or compressed
volume by using volume mirroring (and vice versa).
The Thin Provisioning feature is included as part of the base software, and no license is
required.
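The capacities described above can be worked through numerically. This sketch assumes a 100 GiB virtual capacity and follows the 2% initial allocation and 80% warning figures from the text; the function name is illustrative, not an IBM API:

```python
# Hedged numeric sketch of thin-provisioning capacities: 2% of the
# virtual capacity is allocated initially, auto-expand preserves that
# amount as contingency capacity, and a warning fires when used capacity
# crosses the warning threshold.
GiB = 1024 ** 3

virtual_capacity = 100 * GiB
real_capacity = int(virtual_capacity * 0.02)   # initial 2% allocation
contingency = real_capacity                    # kept free as the volume grows

def autoexpand(used, real, contingency):
    """Grow real capacity so `contingency` bytes stay unused."""
    return max(real, used + contingency)

used = 10 * GiB
real_capacity = autoexpand(used, real_capacity, contingency)
assert real_capacity - used == contingency     # 2 GiB still held free

warning_pct = 80
warn = used / virtual_capacity * 100 >= warning_pct
assert warn is False                           # only 10% used so far
```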
Real-time Compression
The Storwize V5030 model can create compressed volumes, allowing more data to be stored
in the same physical space. IBM Real-time Compression can be used for primary active
volumes and with mirroring and replication (FlashCopy, Remote Copy).
Existing volumes can take advantage of Real-time Compression to achieve an immediate
capacity saving. An existing volume can be converted to a compressed volume by creating
a compressed volume copy of the original volume and then deleting the original volume.
No changes to the existing environment are required to take advantage of Real-time
Compression. It is transparent to hosts while the compression occurs within the IBM Storwize
V5000 Gen2 system.
Real-time Compression is available only on the Storwize V5030 model with the additional
memory upgrade (32 GB for each node canister). The feature is
licensed per enclosure. Real-time Compression is not available on the Storwize V5010 or
Storwize V5020 models.
Easy Tier
An integral feature of the IBM Spectrum Virtualize software stack is its ability to take
advantage of multiple storage tiers and dynamically and automatically relocate storage
extents to the appropriate level. For example, frequently accessed extents are identified as
hot and will be promoted to the faster performing tier to reduce I/O latency and improve
performance on subsequent access, whereas extents that have been accessed infrequently
are demoted to a lower tier of storage designed for capacity rather than performance.
Enhancements to the Easy Tier feature in recent software releases have introduced support
for three tiers of storage: Flash, Enterprise, and Nearline. Although Easy Tier can support
three tiers, a common practice is to employ a selection of any two storage tiers, for example
creating an Easy Tier enabled storage pool consisting of both Flash and Nearline managed
disks (MDisks).
Given the wide array of available hard disk drive options, as detailed in 1.5.4, “Drive
compatibility” on page 21, choosing the correct mix of drives and the correct data placement
to achieve optimal performance at low cost is critical. Typically, the maximum value can be
derived by placing “hot” data with high I/O density and low response time requirements on
Flash. Enterprise class disks are targeted for “warm” and Nearline for “cold” data that is
accessed sequentially and at lower rates.
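The placement policy described above can be sketched as a simple classification by I/O count. The thresholds and names here are invented for the example; the real Easy Tier algorithm measures I/O density over time windows and migrates extents gradually:

```python
# Illustrative Easy Tier placement idea: the hottest extents are promoted
# to Flash, infrequently accessed extents are demoted to Nearline, and
# the rest sit on Enterprise disk. Thresholds are invented for clarity.
def place_extents(io_counts, hot_threshold=100, cold_threshold=10):
    placement = {}
    for extent, ios in io_counts.items():
        if ios >= hot_threshold:
            placement[extent] = "Flash"       # "hot" data, promoted
        elif ios <= cold_threshold:
            placement[extent] = "Nearline"    # "cold" data, demoted
        else:
            placement[extent] = "Enterprise"  # "warm" data
    return placement

counts = {"ext0": 500, "ext1": 50, "ext2": 3}
assert place_extents(counts) == {
    "ext0": "Flash", "ext1": "Enterprise", "ext2": "Nearline"}
```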
Storage Migration
By using the IBM Storwize V5000 Gen2 Storage Migration feature, you can easily move data
from other existing Fibre Channel attached external storage to the internal capacity of IBM
Storwize V5000 Gen2. You can migrate data from other storage to the IBM Storwize V5000
Gen2 storage system to realize the benefits of IBM Storwize V5000 features, such as the
easy-to-use GUI, internal virtualization, thin provisioning, and copy services.
The Storage Migration feature is included in the base software, and no license is required.
FlashCopy
The FlashCopy feature copies a source volume to a target volume. The original contents of
the target volume are lost; after the copy operation starts, the target volume has the
contents of the source volume as it existed at a single point in time. Although the copy
operation completes in the background, the resulting data at the target appears as though the
copy was made instantaneously. FlashCopy is sometimes described as an instance of a
time-zero (T0) copy or point-in-time (PiT) copy technology.
FlashCopy can be performed on multiple source and target volumes. FlashCopy permits the
management operations to be coordinated so that a common single point in time is chosen
for copying target volumes from their respective source volumes.
IBM Storwize V5000 Gen2 also permits multiple target volumes to be flash copied from the
same source volume. This capability can be used to create images from separate points in
time for the source volume, and to create multiple images from a source volume at a common
point in time. Source and target volumes can be any volume type (generic, thin-provisioned,
or compressed).
Reverse FlashCopy enables target volumes to become restore points for the source volume
without breaking the FlashCopy relationship and without waiting for the original copy
operation to complete. IBM Storwize V5000 Gen2 supports multiple targets and multiple
rollback points.
The FlashCopy feature is licensed per enclosure.
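The point-in-time semantics can be modeled with a toy copy-on-write map: the target is logically complete at time zero, and source blocks are preserved just before they are overwritten. Names are illustrative and this is not the actual FlashCopy implementation:

```python
# Toy copy-on-write sketch: the target reflects the source as it was at
# T0, even though blocks are only physically copied when needed.
class FlashCopyMap:
    def __init__(self, source):
        self.source = source                 # live, mutable volume
        self.target = {}                     # physically copied T0 blocks

    def write_source(self, lba, data):
        # Copy-on-write: preserve the T0 block before overwriting it.
        if lba not in self.target:
            self.target[lba] = self.source[lba]
        self.source[lba] = data

    def read_target(self, lba):
        # Blocks not yet copied are read through from the source.
        return self.target.get(lba, self.source[lba])

src = {0: b"old0", 1: b"old1"}
fc = FlashCopyMap(src)
fc.write_source(0, b"new0")                  # source changes after T0
assert fc.read_target(0) == b"old0"          # target still shows T0 data
assert fc.read_target(1) == b"old1"          # read-through, not yet copied
```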
Remote Copy
Remote Copy can be implemented in one of two modes: synchronous or asynchronous. With
IBM Storwize V5000 Gen2, Metro Mirror and Global Mirror are the IBM branded terms for
synchronous Remote Copy and asynchronous Remote Copy, respectively.
By using the Metro Mirror and Global Mirror copy services features, you can set up a
relationship between two volumes so that updates that are made by an application to one
volume are mirrored on the other volume. The volumes can be in the same system or on
two different systems.
For both Metro Mirror and Global Mirror copy types, one volume is designated as the primary
and the other volume is designated as the secondary. Host applications write data to the
primary volume, and updates to the primary volume are copied to the secondary volume.
Normally, host applications do not perform I/O operations to the secondary volume. The
Metro Mirror feature provides a synchronous copy process. When a host writes to the primary
volume, it does not receive confirmation of I/O completion until the write operation completes
for the copy on the primary and secondary volumes. This design ensures that the secondary
volume is always up-to-date with the primary volume if a failover operation must be
performed.
The Global Mirror feature provides an asynchronous copy process. When a host writes to the
primary volume, confirmation of I/O completion is received before the write operation
completes for the copy on the secondary volume. If a failover operation is performed, the
application must recover and apply any updates that were not committed to the secondary
volume. If I/O operations on the primary volume are paused for a brief time, the secondary
volume can become an exact match of the primary volume.
Global Mirror can operate with or without cycling. When it is operating without cycling, write
operations are applied to the secondary volume as soon as possible after they are applied to
the primary volume. The secondary volume is less than 1 second behind the primary volume,
which minimizes the amount of data that must be recovered in a failover. However, this
approach requires that a high-bandwidth link is provisioned between the two sites. When
Global Mirror operates with cycling mode, changes are tracked and, where needed, copied to
intermediate change volumes. Changes are transmitted to the secondary site periodically.
The secondary volumes are much further behind the primary volume, and more data must be
recovered in a failover. Because the data transfer can be smoothed over a longer time period,
lower bandwidth is required to provide an effective solution.
The IBM Remote Copy feature is licensed for each enclosure.
IP replication
IP replication enables the use of lower-cost Ethernet connections for remote mirroring. The
capability is available as a chargeable option on all Storwize family systems. The function is
transparent to servers and applications in the same way that traditional Fibre Channel-based
mirroring is transparent. All remote mirroring modes (Metro Mirror, Global Mirror, and Global
Mirror with Change Volumes) are supported.
Configuration of the system is straightforward. The Storwize family systems normally find
each other in the network, and they can be selected from the GUI.
IP replication includes Bridgeworks SANSlide network optimization technology, and it is
available at no additional charge. Remember, Remote Mirror is a chargeable option but the
price does not change with IP replication. Existing Remote Mirror users have access to the
function at no additional charge.
IP connections that are used for replication can have long latency (the time to transmit a
signal from one end to the other), which can be caused by distance or by many “hops”
between switches and other appliances in the network. Traditional replication solutions
transmit data, wait for a response, and then transmit more data, which can result in network
utilization as low as 20% (based on IBM measurements). This scenario worsens as the
latency increases.
Bridgeworks SANSlide technology that is integrated with the IBM Storwize family requires
no separate appliances, no additional cost, and no configuration steps. It uses artificial
intelligence (AI) technology to transmit multiple data streams in parallel, adjusting
automatically to changing network environments and workloads. SANSlide improves network
bandwidth utilization up to 3x so clients can deploy a less costly network infrastructure or take
advantage of faster data transfer to speed up replication cycles, improve remote data
currency, and enjoy faster recovery.
IP replication can be configured to use any of the available 1 GbE or 10 GbE Ethernet ports
(apart from the technician port) on IBM Storwize V5000 Gen2.
External virtualization
By using this feature, you can consolidate FC SAN-attached disk controllers from various
vendors into pools of storage. In this way, the storage administrator can manage and
provision storage to applications from a single user interface and use a common set of
advanced functions across all of the storage systems under the control of the IBM Storwize
V5000 Gen2.
The External Virtualization feature is licensed per disk enclosure.
Encryption
IBM Storwize V5000 Gen2 provides optional encryption of data-at-rest functionality, which
protects against the potential exposure of sensitive user data and user metadata that is
stored on discarded, lost, or stolen storage devices. Encryption can be enabled and
configured only on the Storwize V5020 and Storwize V5030 enclosures that support
encryption. The Storwize V5010 does not offer encryption functionality.
Encryption is a licensed feature that requires a license key to enable it before it can be used.
Distributed RAID
Distributed RAID was introduced in IBM Spectrum Virtualize 7.6.0 and allows a RAID 5 or
RAID 6 array to be distributed over a larger set of drives than was previously possible. Given
the increased capacity of both traditional spinning and solid-state drives, the rebuild times
associated with drive failures are increasing, which exposes a RAID array to an added risk of
failure if a further drive fails. Distributed RAID can alleviate this risk.
Consider the example shown in Figure 1-7.
Figure 1-7 A simplified RAID 6 array
This traditional RAID 6 example shows five active drives and two spare drives. From a
performance perspective, in an array consisting of spinning disks, I/O is being serviced by five
spindles while two spindles remain unused. Similarly, in a solid-state drive (SSD) array, the
financial cost and performance attributes associated to the spare drives are wasted until a
rebuild operation is required.
In a failure scenario, the rebuild process reads from the four surviving drives and writes to a
single spare drive, as shown in Figure 1-8.
Figure 1-8 A failure scenario in a traditional RAID 6 Array
Therefore, the rebuild process is limited by the write performance of the single spare drive.
While the rebuild is processing, the array is vulnerable to an outage if any of the remaining
drive members fail. Figure 1-9 shows the estimated rebuild times of a traditional RAID array.
Figure 1-9 Estimated array rebuild times for various spinning drive configurations
Distributed RAID arrays benefit from distributed spare striping where an allocation of spare
capacity is reserved on each distributed RAID array drive member instead of having unused
spare drives remain idle. In the case of a drive failure, the rebuild process reads from multiple
surviving drives and writes to multiple drives, therefore distributing the workload across
multiple targets in parallel (Figure 1-10).
Figure 1-10 Distributed RAID 6 array
In Figure 1-10, notice how each data stripe is distributed across multiple drives but not all
drives in the array. If a drive fails, the data that is not impacted by the failed drive does not
need to be rebuilt, and so, to optimize the rebuild time of the array, only the data that needs to
be rebuilt will be processed.
The V7.7.1 distributed RAID 6 solution has been tested to show that the I/O performance of
distributed RAID 6 is faster than traditional RAID 5 in almost all workloads. This enhanced
performance, coupled with the improved rebuild times over traditional RAID arrays, explains
why distributed RAID is the preferred method of storage provisioning in modern Storwize
installations.
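A back-of-envelope model shows why spreading rebuild writes across many members helps. The drive size, write speed, and 20-way striping figures below are illustrative assumptions, not measured Storwize values:

```python
# Why distributed RAID rebuilds faster: a traditional rebuild is
# bottlenecked by the write speed of one spare drive, while a distributed
# rebuild spreads writes across spare capacity on many surviving members.
def rebuild_hours(drive_tb, write_mbps, parallel_targets):
    bytes_to_rebuild = drive_tb * 1e12
    throughput = write_mbps * 1e6 * parallel_targets
    return bytes_to_rebuild / throughput / 3600

traditional = rebuild_hours(4, 100, parallel_targets=1)   # one spare drive
distributed = rebuild_hours(4, 100, parallel_targets=20)  # 20-way striping
assert distributed < traditional
print(f"{traditional:.1f} h vs {distributed:.1f} h")
```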
Microsoft Offloaded Data Transfer (ODX)
Modern Microsoft Windows operating systems can use the power of the embedded support of
ODX in all storage systems powered by IBM Spectrum Virtualize. This feature introduces
efficiencies in the handling of storage-intensive tasks that would otherwise introduce latency,
saturate the fabric, and degrade the performance of the overall storage environment.
Essentially, ODX is a method of offloading the processing of storage-intensive tasks from the
host operating system to the underlying storage system. Typically, ODX is beneficial in
Hyper-V environments where virtual machine (VM) hard drives (VHD or VHDX files) reside on
Cluster Shared Volumes (CSVs), and where data copy operations such as VM clones,
relocations, and template deployments can be processed directly by the storage system.
Similarly, when creating new “fixed sized” VHD or VHDX files, ODX is used to proactively
“zero” the underlying blocks for data security before presenting the storage to a VM.
Although Hyper-V is the primary use case for this feature, the same benefit applies to
standard file copy operations on any file system backed by storage presented by IBM
Spectrum Virtualize, for example, large ISO image files, compressed file archives or data
backup files. Testing has shown a 36% performance increase of a file copy workload with
ODX enabled.
In both cases, without ODX, the storage operation can consume resources and bandwidth on
the SAN fabric and occupy memory and CPU cycles on the Windows operating system. With
ODX enabled, a request for the storage operation is constructed by the Windows operating
system and sent to the storage system via an extension of the T10 standard XCOPY SCSI
command.
Information: More information about ODX is in the white paper, IBM Spectrum Virtualize
and IBM Storwize 7.5 with Microsoft Windows Server Offloaded Data Transfer (ODX).
Host clusters
IBM Spectrum Virtualize V7.7.1 introduced the capability to group multiple host objects as a
consolidated object to simplify the administration and maintenance of clustered operating
systems that require shared storage. Initially, this feature was only available through the
command-line interface (CLI) of the IBM Spectrum Virtualize management interface, but GUI
support for host clusters was added in the V7.8.1 software release.
The following CLI commands were added to handle host clusters:
򐂰 mkhostcluster: Creating a host cluster
򐂰 lshostcluster: Listing host clusters
򐂰 rmhostcluster: Removing the host cluster
򐂰 addhostclustermember: Adding a member to the host cluster
򐂰 lshostclustermember: Listing a host cluster member
򐂰 rmhostclustermember: Removing a host cluster member
򐂰 mkvolumehostclustermap: Assigning a volume to the host cluster
򐂰 lshostclustervolumemap: Listing a host cluster for mapped volumes
򐂰 rmvolumehostclustermap: Unmapping a volume from the host cluster
IO Throttling
Although an ODX workload frees up resources on the Windows hosts, the servers have no
visibility of the resource limitations or consumption on the storage system. Depending on the
environment and workload, the amount of offloaded I/O generated can impact the
performance of the underlying arrays. Therefore, limiting the amount of offloaded I/O that
each Storwize node canister can process will alleviate potential performance degradations on
the system, while still freeing up resources on the Windows Servers.
In IBM Spectrum Virtualize V7.7.1, the throttling feature was enhanced to include support for
I/O throttling on more objects.
The full list of objects that can be throttled is as follows:
򐂰 Hosts
򐂰 Host clusters
򐂰 vdisk
򐂰 Offload I/O (for example, VAAI XCOPY and ODX)
򐂰 mdiskgrp
Throttles can be implemented to limit I/O by either bandwidth (in MBps) or IOPS.
Information: Read more about I/O throttling in IBM System Storage SAN Volume
Controller and Storwize V7000 Best Practices and Performance Guidelines, SG24-7521.
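Conceptually, an IOPS throttle admits only a configured number of I/Os per second and delays the rest. The sketch below models that idea only; on the system itself, throttles are configured through the CLI, not in application code, and the class name is invented:

```python
# Illustrative per-second IOPS throttle: requests beyond the configured
# limit within a one-second window are rejected (i.e., must be delayed).
class IopsThrottle:
    def __init__(self, iops_limit):
        self.limit = iops_limit
        self.window = -1   # current one-second window
        self.count = 0     # I/Os admitted in the current window

    def admit(self, timestamp):
        """Return True if an I/O at `timestamp` (seconds) may proceed."""
        second = int(timestamp)
        if second != self.window:
            self.window, self.count = second, 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False  # over the per-second limit: I/O must be delayed

throttle = IopsThrottle(iops_limit=3)
results = [throttle.admit(0.1 * i) for i in range(6)]  # 6 I/Os in 0.6 s
assert results == [True, True, True, False, False, False]
```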
1.5.3 Storage scalability
The VersaStack design provides flexible storage scaling with the Storwize V5030 storage
controller. The Storwize V5030 storage controller provides 12 Gb SAS internal dual-port drive
connectivity, and each control enclosure also has four 12 Gb SAS x 4 (Mini-SAS HD
SFF-8644) ports (2 ports per node canister) for 12 Gb SAS expansion enclosure connectivity.
The Storwize V5030 storage controller supports the attachment of up to 20 Large Form
Factor (LFF), 20 Small Form Factor (SFF) or 8 High Density (HD) expansion enclosures with
a maximum of 760 drives per storage controller, providing more than 11 petabytes (PB) of
storage per system. An intermix of LFF and SFF enclosures is supported. The expansion
enclosures can be added to the system nondisruptively.
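As a quick sanity check of that headline figure, multiply the maximum drive count by the largest supported drive capacity (15.36 TB, from the flash drive list in 1.5.4), using decimal units:

```shell
# Raw-capacity sanity check: 760 drives x 15.36 TB per drive (decimal units).
awk 'BEGIN {
    total_tb = 760 * 15.36
    printf "%.1f TB raw, about %.2f PB\n", total_tb, total_tb / 1000
}'
```

This prints 11673.6 TB raw, about 11.67 PB, consistent with the "more than 11 PB" figure.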
Each Storwize V5000 Gen2 expansion unit includes two expansion canisters. Each canister
provides 12Gb SAS connectivity to the internal drives and two external 12 Gb SAS x 4 ports
(MiniSAS HD SFF-8644 connectors labelled Port 1 and Port 2) that are used to connect the
IBM Storwize V5030 node canisters and also for connecting the expansion enclosures
between each other. One of the expansion ports (port 1 or port 2) on the left and right node
canisters is connected to the port 1 on the left and right expansion canisters, respectively. The
port 2 on the left and right expansion canisters is connected to the port 1 on the left and right
expansion canisters in the adjacent enclosure, respectively, and so on.
Information: See the Connecting SAS cables to Storwize V5000 Gen2 expansion
enclosures topic in IBM Knowledge Center.
1.5.4 Drive compatibility
Storwize V5000 supports the complete range of data storage requirements, from highly
utilized applications to high-capacity, low usage applications with hybrid or all-flash solutions.
The following high-performance Enterprise class disk drives are supported:
򐂰 300 GB 15,000 rpm
򐂰 600 GB 15,000 rpm
򐂰 900 GB 10,000 rpm
򐂰 1.2 TB 10,000 rpm
򐂰 1.8 TB 10,000 rpm
The following high-capacity, archival-class Nearline drives are supported:
򐂰 2 TB 7,200 rpm
򐂰 4 TB 7,200 rpm
򐂰 6 TB 7,200 rpm
򐂰 8 TB 7,200 rpm
򐂰 10 TB 7,200 rpm
The following Flash drive capacities are supported:
򐂰 400 GB
򐂰 800 GB
򐂰 1.6 TB
򐂰 1.92 TB
򐂰 3.2 TB
򐂰 3.84 TB
򐂰 7.68 TB
򐂰 15.36 TB
All drives are dual-port and hot-swappable, and all drive types can be intermixed within the
enclosure they are designed for, or distributed across multiple enclosures. Also, both LFF and
SFF expansion enclosures can be intermixed behind the SFF control enclosure.
Using the various supported expansion enclosures and the large selection of drive types and
capacities, the Storwize V5000 can truly offer scalability and performance to cater for all
application and workload requirements.
Chapter 2. Configuring your VersaStack for Windows File Services
This chapter describes configuration of the following components:
򐂰 Configuring IBM Storwize V5030
򐂰 Configuring Cisco UCS compute
򐂰 Configuring Windows Server 2016
Information: To prevent the duplication of effort, this chapter references the VersaStack
deployment guide (VersaStack with Cisco UCS Mini and IBM Storwize V5000 Gen2, Direct
Attached SAN Storage) for configuring the Cisco UCS for this VersaStack environment.
2.1 Configuring IBM Storwize V5030
As previously stated, this paper uses the “VersaStack with Cisco UCS Mini and IBM Storwize
V5000 Gen2, Direct Attached SAN Storage” CVD as a basis for the VersaStack Windows File
Services configuration. Because that CVD details the initial setup and configuration of the
Storwize V5000 storage system, this paper covers only the additional steps needed to create
and facilitate the Windows File Services environment.
2.1.1 System overview
Figure 2-1 shows the Fibre Channel and Ethernet cable connectivity diagram for the
converged VersaStack system.
Figure 2-1 Connectivity for VersaStack and the IBM Storwize V5030 Storage Controller
2.1.2 Allocating storage capacity
Be sure to follow the steps outlined in section “IBM Storwize V5000 Initial Configuration” (of
the “VersaStack with Cisco UCS Mini and IBM Storwize V5000 Gen2, Direct Attached SAN
Storage” CVD) before continuing with this paper. That section guides you through
configuration of the storage environment in preparation for the Windows File Services
solution.
Storage pools
Before you create volumes to present to the Windows file servers, first create storage pools,
define RAID configuration, and allocate storage capacity.
Create a storage pool:
1. Access the Pools menu in the navigation dock, and select Pools (Figure 2-2).
Figure 2-2 The Pools menu in the navigation dock
2. Click Create (Figure 2-3).
Figure 2-3 Click Create to create a new storage pool
3. To aid with future storage administration, give your pool a distinguishable name based on
the storage tier (Figure 2-4).
Figure 2-4 Use a clear and concise naming convention
4. If required, click View more details to monitor the progress of the task (Figure 2-5).
Figure 2-5 Task completed
5. Finally, click Close to complete the process.
Create the distributed RAID array
With the pool created, create a distributed RAID array to add capacity to the storage pool:
1. Right-click the pool name you created in the previous step, and select Add Storage
(Figure 2-6).
Figure 2-6 Creating arrays to add to the storage pools
2. Do the following steps, as shown in Figure 2-7:
a. In the Assign Storage to Pool pane, select Internal Custom.
b. In the Drive Assignment pane, the available drive classes and the number of available
drives are listed. Click the RAID selection box and select Distributed RAID-6.
The default profiles are designed to offer optimum performance characteristics for the
distributed RAID configuration.
c. Click Assign to complete the process.
Figure 2-7 Creating a new distributed RAID array
3. Click View more details to monitor the task status (Figure 2-8).
Figure 2-8 Task completed
4. Click Close to complete the task.
2.1.3 Microsoft Offloaded Data Transfer (ODX)
Configure the Storwize V5000 system and Windows Server 2016 for ODX.
Configure the Storwize V5000 system for ODX
On the Storwize systems, ODX is disabled by default. To enable ODX on the Storwize
system, connect to the management command-line interface (CLI) and use the chsystem
command. These are the commands you can run:
򐂰 Enable ODX:
svctask chsystem -odx on
򐂰 Disable ODX:
svctask chsystem -odx off
򐂰 View the current system and verify the status of ODX:
svcinfo lssystem | grep odx
Configure Windows Server 2016 for ODX
On Windows Server 2016, ODX is enabled by default. However, if you are in doubt, you can
check the status or enable ODX at the server by opening a Windows PowerShell session as
an administrator and using the following commands:
򐂰 Verify that ODX is enabled at the server; use the following command in an elevated
PowerShell window and check for a value of 0:
Get-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name
"FilterSupportedFeaturesMode"
򐂰 Enable ODX if it is not currently enabled:
Set-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name
"FilterSupportedFeaturesMode" -Value 0
򐂰 Display the status of an ODX-enabled storage system; use the following PowerShell
command:
Get-StorageSubSystem | Get-OffloadDataTransferSetting
Note: If volumes were already created and mapped to the Windows hosts before enabling
ODX on the storage system, these volumes must be unmapped and remapped to the
Windows hosts after enabling ODX on the Storwize system. The reason is a current
limitation within Windows in acting upon unit attentions sent by the storage: Windows
recognizes the ODX status of volumes only during storage enumeration.
Apply any Microsoft failover cluster-specific hotfixes or patches, some of which have ODX
implications. Most of the patches can be applied with the Windows cluster-aware updating
feature. See the Microsoft hotfixes and updates.
Volumes should be formatted with NTFS and use a 32 KB allocation unit size for best
performance.
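For example, a mapped volume can be formatted from an elevated PowerShell session; the drive letter and label below are placeholders for your environment:

```powershell
# Format a mapped volume with NTFS and a 32 KB allocation unit size.
# Drive letter F: and the volume label are placeholders.
Format-Volume -DriveLetter F -FileSystem NTFS -AllocationUnitSize 32768 -NewFileSystemLabel "FileShare01"
```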
2.1.4 Creating host clusters and hosts
To simplify the host cluster and host configuration process, create the host cluster object first,
and then create individual host objects to represent each physical Windows Server instance,
defined as a member of the host cluster.
Use the command-line interface (CLI) or graphical user interface (GUI).
Create a host cluster by using the CLI
To create a host cluster, use the mkhostcluster command:
svctask mkhostcluster -name WindowsFileServers
Host cluster, id [0], successfully created
Create a host cluster by using the GUI
To create a host cluster, use the GUI as follows:
1. Select the Hosts icon in the navigation dock, and select Host Clusters (Figure 2-9).
Figure 2-9 Locating the Host Cluster section of the Navigation Dock
2. Click Create Host Cluster (Figure 2-10).
Figure 2-10 Click Create Host Cluster to define a new host cluster
3. Define a name for the host cluster object (Figure 2-11), and click Next.
Figure 2-11 Use clear and concise naming convention for your host cluster
4. Click View more details to monitor the progress (Figure 2-12).
Figure 2-12 Task completed
Define hosts
This section assumes that host zoning was completed by following basic recommendations
to ensure that each host has at least two virtual host bus adapter (vHBA) ports, that each
adapter port is on a separate switch fabric, or at minimum in a separate VSAN, and that each
adapter port has connectivity to both node canisters. This setup ensures four paths for
failover and failback.
Create a host object by using the CLI
Create a Fibre Channel host:
1. Rescan the SAN on Storwize V5000 by using the detectmdisk command:
IBM_Storwize: vs02-v5030:superuser>detectmdisk
If the zoning was implemented appropriately, the new worldwide port names (WWPNs) will
be visible to the Storwize V5000 system after running the detectmdisk command.
2. List the candidate WWPNs and identify the WWPNs belonging to the new host:
IBM_Storwize: vs02-v5030:superuser>lshbaportcandidate
id
2101001B32BA36B4
2100001B329A36B4
Note: If no WWPNs are listed after running lshbaportcandidate, verify the zoning
configuration on the SAN fabric.
3. Run the mkhost command with the required parameters:
svctask mkhost -name FC-Host-Infra-01 -fcwwpn 2100001B329A36B4:2101001B32BA36B4
-hostcluster WindowsFileServers
Host, id [7], successfully created
Create a host object by using the GUI
Complete these steps:
1. Locate the Hosts menu icon in the navigation dock and select the Hosts option.
For each Windows Server, a separate host object must be defined.
2. Click Add Host to start creating the first host object.
3. The Add Host window opens (Figure 2-13 on page 34).
Provide the following information and then click Add:
– In the Required Fields section, enter the name for the first Windows Server and select
the WWPN of each associated vHBA port as specified in “Gather Necessary WWPN
Information” of the VersaStack deployment guide (VersaStack with Cisco UCS Mini and
IBM Storwize V5000 Gen2, Direct Attached SAN Storage).
– In the Optional Fields section, specify the host cluster created in the previous step.
Figure 2-13 Detailing the host configuration options
4. Click View more details to monitor the progress of the task (Figure 2-14).
Figure 2-14 Task completed
5. Click Close, and repeat these steps for each Windows Server.
In the navigation dock, go to the Host Clusters section. The status of the host cluster object is
displayed (Figure 2-15).
Figure 2-15 Host cluster is now online
The host cluster can be in any of the following states:
Online                  All hosts and host HBA ports are active.
Host Degraded           One or more of the host cluster members have an inactive
                        HBA port.
Host Cluster Degraded   One or more of the host cluster members are offline.
Offline                 All host cluster members are offline, or there are no host
                        cluster members.
2.1.5 Creating SAN boot volumes and mappings
The host clusters feature allows for simplified management of volumes shared between
multiple host objects. It ensures that any shared mappings are automatically inherited when
more hosts are added to the host cluster and maintains consistent SCSI IDs across all host
cluster members.
However, volumes can still be mapped to individual hosts, which will be listed as private
mappings. These are not additionally mapped to all host cluster members and so are best
suited for SAN boot configurations and other scenarios where traditional mappings are
required.
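On the CLI, such a private mapping is created with the mkvdiskhostmap command against the individual host object. This is a sketch only: the host and volume names are examples, and -scsi assigns an explicit SCSI ID.

```
svctask mkvdiskhostmap -host FC-Host-Infra-01 -scsi 0 VS02-W2K16-01-Boot
```

Using SCSI ID 0 for the boot volume keeps it distinct from the shared host cluster mappings that follow.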
Create SAN boot volumes for the Windows installation
Create dedicated SAN boot volumes for each Windows Server:
1. In the navigation dock, open the Volumes menu, and select Volumes (Figure 2-16).
Figure 2-16 Create volumes for the Windows installation
2. Click Create Volumes (Figure 2-17).
Figure 2-17 Creating a volume for the operating system
3. Follow these steps; see Figure 2-18:
a. In the Quick Volume Creation section of the panel, select Basic volume.
If more specific volume configurations are required, consider selecting Custom volume
from the Advanced section of the panel.
b. Define a volume capacity and name for the SAN boot volume.
Capacity: The absolute minimum storage requirement for a Windows Server 2016
core installation is 32 GB. To allow for application installations, system paging files,
and Windows updates, a capacity of 150 GB is suggested.
c. Click Create and Map.
Figure 2-18 Define the volume attributes
4. Click View more details to monitor the task progress (Figure 2-19).
Figure 2-19 Task completed
5. The Create Mapping window opens (Figure 2-20). Follow these steps:
a. Select Hosts.
b. Highlight the individual host object that you want to map the SAN boot volume to.
c. Click Next.
Figure 2-20 Specify the mapping target
6. Identify the New Mapping indicator and confirm information in the Map Volumes Summary
window (Figure 2-21 on page 40). Then, click Map Volumes.
Figure 2-21 Verify the mapping information summary
7. Click View more details to monitor the task progress (Figure 2-22).
Figure 2-22 Task completed
8. Repeat these steps for each Windows Server instance until all hosts have their own SAN
boot volume mapped.
2.1.6 I/O throttling
An offloaded I/O throttle will apply to both Storwize node canisters and will limit the amount of
bandwidth or IOPS allocated to the offloaded ODX workload generated by the Windows File
Services. To define an I/O throttle, run the following command in the CLI management
interface of the Storwize V5000:
svctask mkthrottle -type offload -bandwidth 500
Throttle, id [0], successfully created.
A host cluster throttle will limit the I/O resources allocated to a specified host cluster. This can
be useful if running a test cluster on the same storage system as a live production system
where you want to protect the production workload from any impact. Run this command:
svctask mkthrottle -type hostcluster -iops 3000 -hostcluster 0
Throttle, id [1], successfully created.
To list the active throttles on the system, run the following command:
svcinfo lsthrottle
throttle_id  throttle_name  object_id  object_name         throttle_type  IOPs_limit  bandwidth_limit_MB
0            throttle0                                     offload                    500
1            throttle1      0          WindowsFileServers  hostcluster    3000
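Throttles can later be adjusted or removed with the chthrottle and rmthrottle commands. The following is a sketch, with the throttle names as examples; verify the parameters against your code level.

```
svctask chthrottle -iops 5000 throttle1
svctask rmthrottle throttle0
```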
2.1.7 Adding a storage expansion enclosure for scalability
To increase the number of drives in the Storwize V5000 storage system and therefore
increase the total storage capacity available to the Windows failover cluster, complete the
following steps to add a Storwize V5000 expansion enclosure:
1. Change to the Monitoring tab and select System.
If no new hardware is shown, check your cabling to ensure that the new expansion
enclosure is connected correctly and refresh the window.
2. In the upper left corner of the main window, select Actions → Add Enclosures
(Figure 2-23).
Figure 2-23 Adding an expansion enclosure
Alternatively, you can click the available expansion enclosure. If the enclosure is cabled
correctly, the wizard identifies the candidate expansion enclosure.
Select the expansion enclosure and click Next (Figure 2-24).
Figure 2-24 Select the expansion enclosure to add
3. Select the expansion enclosure and click Actions. Click Identify to turn on the identify
LEDs of the new enclosure, if required. Otherwise, click Next.
4. The new expansion enclosure is added to the system (Figure 2-25). Click Finish to
complete the operation.
Figure 2-25 Adding expansion summary
After the expansion enclosure is added, the IBM Storwize V5000 Gen2 management GUI
shows two enclosures (Figure 2-26). Any drives in the expansion enclosure are now ready
to use.
Figure 2-26 Expansion enclosure has been added to the system
2.2 Configuring Cisco UCS compute
To provision the Cisco UCS Manager and B-Series compute blades, several steps are
necessary; follow these steps precisely to avoid improper configuration. As mentioned
previously, these steps reference the VersaStack V5030 deployment guide to bring the
VersaStack configuration online, and then examine in more detail the steps for deploying
Windows Server that are described in this paper.
Information: For a detailed list of initial configuration steps, see the Cisco UCS Compute
Configuration chapter in the VersaStack V5030 deployment guide.
That chapter was written specifically for VMware ESXi host configuration. If you intend to
deploy your Windows Servers into a VMware environment, you do not need to make any
changes to these initial steps. Otherwise, follow the steps in 2.2.1, “Changes and additions
to the VersaStack V5030 deployment guide” on page 43 (the next section).
2.2.1 Changes and additions to the VersaStack V5030 deployment guide
During the initial configuration steps, make the following changes and additions as required.
Create a SAN Connectivity Policy
In the “Create a SAN Connectivity Policy” topic of the VersaStack V5030 deployment guide,
replace references to VMware with Windows in steps 10 and 16 (Figure 2-27 on page 44).
Figure 2-27 vHBA Policy window
Create VLANs
In addition to the necessary steps in the “Create VLANs” topic of the VersaStack V5030
deployment guide, a VLAN is required for a private, node-to-node communication of the
Windows Failover Cluster File Servers. Follow steps 1 - 9 of the “Create VLANs” topic,
inserting the relevant name for your Node-to-Node VLAN at step 7 (Figure 2-28).
Figure 2-28 Create VLANs window
Create vNIC Templates
See the “Create vNIC Templates” topic of the VersaStack V5030 deployment guide. Create a
vNIC template that uses the Node-to-Node VLAN you previously configured.
For steps 1 - 12, see Figure 2-29:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Policies → root.
3. Right-click vNIC Templates.
4. Select Create vNIC Template.
5. Enter vNIC_Node2Node_A as the vNIC template name.
6. Keep Fabric A selected.
7. Do not select the Enable Failover checkbox.
8. Select Primary Template for the Redundancy Type.
9. Leave Peer Redundancy Template as <not set>.
10.Under Target, make sure that the VM checkbox is not selected.
11.Select Updating Template as the Template Type.
12.Under VLANs, select the check boxes for Node-to-Node VLAN.
Figure 2-29 vNIC Template window: Steps 1-12
For steps 13 - 19 (to create the vNIC template for Fabric-B), see Figure 2-30:
13.Set Node-to-Node as the Native VLAN.
14.For MTU, enter 9000.
15.In the MAC Pool list, select MAC_Pool_B.
16.In the Network Control Policy list, select Enable_CDP.
17.Click OK to create the vNIC template.
18.Click OK.
19. Follow these similar steps for the vNIC_Node2Node_B template.
Figure 2-30 vNIC Template window steps 13 - 19
Create Service Profile Template
In the “Create Service Profile Template” topic of the VersaStack V5030 deployment guide,
templates for booting from Fabric A and Fabric B are created. Clone the existing service
templates so that you have templates specific to your Windows hosts:
1. Connect to the UCS Manager, and then click the Servers tab in the navigation pane.
2. Select Service Profile Templates → root → Service Template
VM-Host-Infra-Fabric-A.
3. Right-click VM-Host-Infra-Fabric-A and select Create a Clone (Figure 2-31).
Figure 2-31 Create a Clone from template
4. Create a clone of VM-Host-Infra-Fabric-A and name it Windows-Host-FC-A.
5. Create a clone of VM-Host-Infra-Fabric-B and name it Windows-Host-FC-B.
Now that you have Service Profile Templates for your Windows hosts, modify the templates to
reflect the Failover Cluster Node-to-Node VLAN that you configured previously:
1. Connect to the UCS Manager, and click the Servers tab in the navigation pane.
2. Select Service Profile Templates → root and highlight the Service Template
Windows-Host-FC-A (Figure 2-32).
Figure 2-32 Windows host service template selection
3. From the selection pane on the right side, follow these steps (Figure 2-33 on page 48):
a. Select the Network tab.
b. Scroll to the vNICs section.
c. Select vNIC-B.
d. Click Modify.
Figure 2-33 Modify vNIC policy for vNIC-B
4. In the Modify vNIC window (Figure 2-34), select the vNIC_Node2Node_B vNIC Template,
and then click OK.
Figure 2-34 Modify vNIC window
You are now ready to create service profiles from your service profile templates.
Create Service Profiles
To create service profiles from the service profile template, complete the following steps:
1. Connect to the UCS Manager, and then click the Servers tab in the navigation pane.
2. Select Service Profile Templates → root → Service Template Windows-Host-FC-A.
3. Right-click Windows-Host-FC-A and select Create Service Profiles from Template. The
Create Service Profiles From Template dialog window opens (Figure 2-35).
Figure 2-35 Create Service Profile from Template dialogue window
4. Enter VS02-W2K16-0 as the service profile naming prefix.
5. Keep 1 as the Name Suffix Starting Number.
6. Keep 2 as the number of instances.
7. Click OK to create the service profiles.
8. Click OK in the confirmation message to provision two VersaStack service profiles.
Storage LUN Mapping
The LUN mapping for the SAN boot volumes was covered in 2.1.5, “Creating SAN boot
volumes and mappings” on page 35.
Windows Server 2016 SAN Boot Installation
These steps focus on how to use the built-in keyboard, video, mouse (KVM) console and
virtual media features in Cisco UCS Manager to map remote installation media to individual
servers and connect to their boot logical unit numbers (LUNs). In this method, these steps
use the Windows Server 2016 installation ISO file.
Note: Although the majority of steps in the “VMware vSphere Installation and Setup” topic
of the VersaStack V5030 deployment guide remain the same for Windows SAN Boot
Installation, the details are presented here again to include the specifics for Windows
Server 2016.
Open a web browser and enter the IP address for the Cisco UCS Manager. This step
launches the Cisco UCS Manager application. Then, complete these steps:
1. Log in to Cisco UCS Manager by using the admin user name and password.
2. From the main menu, click the Servers tab.
3. Select Servers → Service Profiles → root → VS02-W2K16-01.
4. Right-click VS02-W2K16-01 and select KVM Console (Figure 2-36).
Figure 2-36 KVM Console selection
5. Select Servers → Service Profiles → root → VS02-W2K16-02.
6. Right-click VS02-W2K16-02 and select KVM Console Actions → KVM Console.
You now have two KVMs running: one for VS02-W2K16-01 and another for VS02-W2K16-02.
Complete the following steps on each Windows host:
1. In the KVM window, click Virtual Media.
2. Click Activate Virtual Devices, select Accept this Session, then click Apply.
3. Select Virtual Media, Map CD/DVD, then browse to the Windows Server installer ISO
image file and click Open.
4. The Virtual Media - Map CD/DVD dialog opens (Figure 2-37). Click Map Device to map
the newly added image.
Figure 2-37 Virtual Media window
5. Select Reset, then OK, and allow a power cycle. Click the KVM tab to monitor the server
boot.
6. After reviewing the end-user license agreement (EULA), accept the license terms and
click Next.
7. Select Custom: Install Windows only (advanced).
8. Select Custom (advanced) installation.
Note: The Cisco fNIC drivers are required for the Windows Server SAN boot
installation. This is also a good time to load the eNIC drivers. You can obtain the
drivers from the Cisco Download Software web page:
https://software.cisco.com/download/release.html?mdfid=283853163&softwareid=283853158&release=2.2(3o)&relind=AVAILABLE&rellifecycle=&reltype=latest
9. In the KVM window, click Virtual Media, and then select Create Image.
10.In the Create Image from Folder dialog (Figure 2-38), select the Windows Server 2016
drivers as the source folder and choose an output folder for the .img file.
Figure 2-38 Create driver image file
11.In the Virtual Media Session manager, select Map Removable Disk.
12.In the dialog window (Figure 2-39), select Windows Drivers (.img file). In the pop-up,
click Yes to confirm. Then, click Map Device.
Figure 2-39 Map Removable Disk
13.Back in the KVM Console, click Load Driver and then browse to the relevant eNIC
Network driver files (see Figure 2-40 for guidance). Then, click OK.
Figure 2-40 Windows Server drivers selection window
14. The Cisco VIC Ethernet Interface should be listed; see point 2 in Figure 2-41. Click Next.
15. Click Load Driver and then browse to the relevant fNIC Storage driver files (Figure 2-40).
16. The Cisco VIC FCoE Storport Miniport driver is listed; see point 1 in Figure 2-41. Click Next.
Figure 2-41 Windows Setup window
17. A SAN boot LUN is listed in the drive selection window (Figure 2-42). Select the LUN, and
click Next. Continue with the Windows Server 2016 installation until the process is
complete, and then proceed to the next section.
Figure 2-42 LUN selection window
2.3 Configuring Windows Server 2016
This section describes steps to configure Windows Server failover clustering. The assumption
for this section is that you successfully installed Windows Server 2016 onto your hosts,
including all relevant Cisco drivers, and added your hosts to your Windows domain.
2.3.1 Checking that the VIC drivers are installed
Before this paper describes failover cluster configuration, you should verify that the eNIC and
fNIC drivers that you loaded as part of the Windows Server installation are correctly installed.
Check for the VIC drivers in Device Manager. Figure 2-43 on page 54 shows that the drivers
are installed.
Figure 2-43 indicates the Cisco fNIC and eNIC VIC drivers are installed
2.3.2 Adding roles and features
Use the Add Roles and Features wizard to add required roles and features that enable File
Service on VersaStack V5030. Some or all of these roles and features will be installed:
򐂰 Multipath I/O (MPIO)
򐂰 Failover clustering
򐂰 File Service
Complete these steps:
1. Start Server Manager and select the Local Server → Manage → Add Roles and
Features (Figure 2-44).
Figure 2-44 Select the Add Roles and Features windows
2. Select role-based or feature-based installation. Click Next.
3. Select your local server from the server pool. Click Next.
4. In the Roles panel of the wizard, select File and Storage Services (Figure 2-45 on
page 55) and then click Next.
Figure 2-45 Adding server roles to enable File Services
5. In the Features panel, select Failover Clustering and Multipath I/O (Figure 2-46). Click
Next and then Install.
Figure 2-46 Features selection
6. After the installation completes, restart the host as requested.
2.3.3 Installing the IBM Subsystem Device Driver Device Specific Module for
Windows Server 2016
The IBM Subsystem Device Driver Device Specific Module (SDDDSM) provides multipath I/O
(MPIO) support based on the MPIO technology of Microsoft. SDDDSM is a device-specific
module designed to provide multipathing support for IBM storage devices.
Information: Current versions and documentation are available in the SDDDSM web page
of IBM Support.
Open the SDDDSM web page, and use the following steps to download and install SDDDSM:
1. Download and extract the SDDDSM file on your Windows host.
2. Right-click the setup.exe file and run as administrator; follow the instructions and use
Figure 2-47 for guidance.
Figure 2-47 shows the installation of the IBM SDDDSM
3. When the installation completes, you are asked if you want to reboot. If you answer y, the
setup.exe program will restart your Windows host.
Path probing and reclamation are provided by MPIO and SDDDSM.
For SDDDSM, the interval should be set to 60 seconds. You can change this by modifying the
following Windows system registry key:
HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters\PathVerificationPeriod = 60
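As a sketch, this value can be set from an elevated PowerShell session; the Parameters key exists once the MPIO feature is installed:

```powershell
# Set the MPIO path verification interval to 60 seconds.
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\mpio\Parameters" `
    -Name PathVerificationPeriod -Value 60
# Confirm the new value.
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\mpio\Parameters" `
    -Name PathVerificationPeriod
```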
Information: IBM Knowledge Center has the latest Windows Server 2016 configuration
settings that relate to particular IBM Spectrum Virtualize code releases. See the IBM
Storwize V5000 web page in IBM Knowledge Center.
Verify the configuration
Complete these steps:
1. On the Windows host, select Start → Subsystem Device Driver DSM → Subsystem
Device Driver DSM.
2. Enter the datapath query adapter command and press Enter. The output includes
information about all installed adapters. Figure 2-48 shows the output: two HBAs are
installed and configured correctly.
Figure 2-48 Shows the output of datapath query adapter command
2.3.4 Configuring failover cluster
This section shows how to configure the failover cluster.
Validate the cluster configuration
Before you create the failover cluster, validating your configuration is strongly suggested.
Validation helps you confirm that the configuration of your servers, network, and storage
meets the requirements for failover clusters. Complete these steps:
1. Start the Server Manager and select the Local Server → Tools → Failover Cluster
Manager (Figure 2-49).
Figure 2-49 Server Manager Window
2. In the Failover Cluster Management window, select Action → Validate Configuration.
Chapter 2. Configuring your VersaStack for Windows File Services
57
3. Follow the instructions in the wizard to specify both your Windows host servers and the
tests (a good approach is to run all tests). The wizard then guides you to run the tests, as
Figure 2-50 shows.
Figure 2-50 Validate a Configuration Wizard window
4. The Summary page opens after the tests complete. On the Summary page, click View
Report to view the test results and make any modifications to your configuration as
required. Figure 2-51 shows an example.
Figure 2-51 Failover cluster validation results window
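If you prefer to script the validation, the FailoverClusters PowerShell module provides an equivalent cmdlet. This sketch runs all tests against the two hosts used in this example:

```powershell
# Run all cluster validation tests against both hosts.
# The cmdlet writes an HTML validation report when it finishes.
Test-Cluster -Node vs02-w2k16-01, vs02-w2k16-02
```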
Create the failover cluster
To complete this task, be sure that the user account that you use for logging on is a domain
user who has administrator rights on all servers that you want to add as cluster nodes:
1. Start the Server Manager.
2. On the Tools menu, click Failover Cluster Manager.
3. In the Failover Cluster Manager pane, under Management, click Create Cluster. The
Create Cluster Wizard opens.
4. On the Before You Begin page, click Next.
5. On the Select Servers page, enter the fully qualified domain names of your Windows host
servers, and then click Add. Figure 2-52 shows an example. Click Next.
Figure 2-52 Cluster host selection window
6. In the Cluster Name box, enter the name to use to administer the cluster. Click Next.
7. On the Confirmation page (Figure 2-53), review the settings and clear the check box for Add all eligible storage to the cluster; we tackle this detail later in the chapter.
Figure 2-53 Cluster wizard confirmation window
8. Click Next to create the failover cluster.
9. On the Summary page, confirm that the failover cluster was successfully created. Click
Finish.
10. To confirm that the cluster was created, verify that the cluster name is listed under Failover
Cluster Manager (Figure 2-54).
Figure 2-54 Failover Cluster Manager window
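The same cluster can be created from PowerShell in one step. The cluster name shown here is illustrative, and -NoStorage mirrors clearing the Add all eligible storage to the cluster check box in the wizard:

```powershell
# Create the failover cluster without automatically adding eligible storage.
New-Cluster -Name VS-FS-CLUSTER -Node vs02-w2k16-01, vs02-w2k16-02 -NoStorage
```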
2.3.5 Creating shared volumes for File Services on Storwize V5000
After Windows Server 2016 is installed, the IBM SDDDSM multipath device driver is in place, and the failover cluster is created, you are ready to present storage to the Windows servers for use in the File Server cluster.
Create data volumes for File Services using the GUI
Create the associated volumes for Windows in the Storwize V5000 management GUI:
1. Click the Volumes icon on the navigation dock and select Volumes from the menu.
2. Click Create Volumes (Figure 2-55).
Figure 2-55 Create new volumes for the main storage volumes
3. In the Create Volumes pane, define the capacity and name, and then click Create and
Map (Figure 2-56).
Figure 2-56 Define the volume attributes
4. Click View more details to monitor the task progress (Figure 2-57). Click Continue when
the task is complete.
Figure 2-57 Task completed
5. In the Create Mapping window (Figure 2-58), select Host Clusters, select the host cluster
that you defined in 2.1.4, “Creating host clusters and hosts ” on page 30, and then click
Next.
Figure 2-58 Specify the mapping target
After the volume is created, the Volume Mapping window opens. The Volume Mapping
summary window lists existing mappings and highlights the new mapping to be created.
6. Verify that the new mappings are correct and then click Map Volumes (Figure 2-59).
Figure 2-59 Verify the mapping information summary
7. Click View more details to monitor the task progress (Figure 2-60). Click Close when the
task is completed.
Figure 2-60 Task completed
Create shared volumes for File Services using the CLI
To create the host cluster mapping from the CLI, log in to the management CLI for the
Storwize V5000 and run the following command:
svctask mkvolumehostclustermap -hostcluster WindowsFileServers vs02-w2k16-FS-01
Volume to Host Cluster map, id [1], successfully created
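If the data volume does not already exist, it can also be created from the CLI before mapping. This sketch uses mkvdisk; the pool name (Pool0) and capacity are illustrative values that you must adapt to your configuration:

```shell
svctask mkvdisk -mdiskgrp Pool0 -size 1024 -unit gb -name vs02-w2k16-FS-01
```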
To list the host cluster mappings on the CLI, issue the lshostvdiskmap command, as shown
in Example 2-1.
Example 2-1 List host cluster mappings
lshostvdiskmap
id name          SCSI_id vdisk_id vdisk_name         vdisk_UID                        IO_group_id IO_group_name mapping_type host_cluster_id host_cluster_name
1  vs02-w2k16-02 0       1        vs02-w2k16-02-Boot 60050764009480090000000000000007 0           io_grp0       private
1  vs02-w2k16-02 1       2        vs02-w2k16-FS-01   60050764009480090000000000000008 0           io_grp0       shared       0               WindowsFileServers
2  vs02-w2k16-01 0       0        vs02-w2k16-01-Boot 60050764009480090000000000000006 0           io_grp0       private
2  vs02-w2k16-01 1       2        vs02-w2k16-FS-01   60050764009480090000000000000008 0           io_grp0       shared       0               WindowsFileServers
In this example, notice the mapping_type column, which shows that the SAN-boot volumes
are configured as private mappings to each host, and the new data volume mapping is
identified as shared. If more hosts are added to the host cluster, they automatically inherit
these shared mappings.
2.3.6 Preparing Volumes for use in file shares
When the volume is mapped from the Storwize V5000, it must be formatted and made available to the Windows failover cluster before it can be used in either a file share for general use or a scale-out file share.
Initialize the volume
Initialize a newly mapped storage volume:
1. Navigate to Computer Management on either of the Windows Server instances.
2. Expand Storage, and select Disk Management (Figure 2-61 on page 66).
Figure 2-61 Disk Management located in Computer Management
3. Identify the new volume to be used for the Scale-out File Server.
Note: If the volume is not listed, verify the SAN zoning and volume mappings to ensure
the volume was correctly presented to the host cluster.
4. Right-click the volume name and select Online (Figure 2-62).
Figure 2-62 Bring the volume online
5. Right-click again and select Initialize Disk (Figure 2-63).
Figure 2-63 Initialize the disk
6. Select GPT as the partition style (Figure 2-64).
Figure 2-64 Select GPT partition style for GUID Partition Table
7. Right-click the initialized volume and select New Simple Volume (Figure 2-65). Accept all
default values in the volume creation wizard and then click Finish.
Figure 2-65 Create a new volume
You now have a new partition available and ready for use (Figure 2-66).
Figure 2-66 The new volume has been created.
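The same initialization can be scripted with the standard Windows Storage module cmdlets. The disk number and volume label below are assumptions; identify the correct disk in your environment before running the commands:

```powershell
# Identify the newly mapped disk (it is offline with a RAW partition style).
Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' }

# Bring the disk online, initialize it as GPT, and create an NTFS volume.
Set-Disk -Number 1 -IsOffline $false
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "FS-01"
```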
Add volumes to the failover cluster
Complete these steps:
1. Open the Failover Cluster Manager administrative interface. Expand Failover Cluster →
Storage menu (Figure 2-67). Click Disks and then in the Actions menu (top right corner),
click Add Disk.
Figure 2-67 Registering available storage in the Failover Cluster Manager
2. Select the newly created volume from the available list (Figure 2-68).
Figure 2-68 Adding the volume as cluster available storage
3. Right-click the disk and select Add to Cluster Shared Volumes (CSV), as shown in
Figure 2-69.
Figure 2-69 Define disk as a Cluster Shared Volume (CSV)
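These steps can also be performed from PowerShell with the FailoverClusters module. The cluster disk resource name shown is an example; confirm the actual name with Get-ClusterResource first:

```powershell
# Register all available (unclustered) disks with the failover cluster.
Get-ClusterAvailableDisk | Add-ClusterDisk

# Convert the new cluster disk to a Cluster Shared Volume.
# "Cluster Disk 1" is a placeholder; check the name with Get-ClusterResource.
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```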
Create the Scale-Out File Server
Complete these steps:
1. In Failover Cluster Manager, locate and right-click Roles from the navigation tree, and
select Configure Role (Figure 2-70).
Figure 2-70 Configure a new role
2. Click Next at the wizard welcome window (Figure 2-71).
Figure 2-71 Click next to skip the welcome page
3. In the list of roles, select File Server and click Next (Figure 2-72).
Figure 2-72 Select File Server as a new role
4. Select Scale-Out File Server for application data and click Next (Figure 2-73).
Figure 2-73 Scale-Out File Server for application data
5. Define a network name for the Scale-Out File Server (Figure 2-74). This is the name that network clients use to connect to the file server and to enumerate the available shares.
A DNS A record is created for each Windows Server in the failover cluster, and client devices use DNS round-robin load balancing to distribute the workload evenly across the available cluster nodes.
Figure 2-74 Define a network name for the file server
6. Review the Scale-Out File Server details confirmation (Figure 2-75), and then click Next.
Figure 2-75 Confirm the details of the network name
7. Review the summary of the Scale-Out File Server creation (Figure 2-76), and then
click Finish.
Figure 2-76 Summary of the Scale-Out File Server
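The wizard steps above have a one-line PowerShell equivalent; the network name shown here is illustrative:

```powershell
# Create the Scale-Out File Server role with the given client access name.
Add-ClusterScaleOutFileServerRole -Name VS-SOFS
```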
Create a file share for the Scale-Out File Server
Complete these steps:
1. In the Roles pane of the Failover Cluster Manager window, locate the Scale-Out File
Server from which the new file share will be presented (Figure 2-77).
Figure 2-77 List of roles available in the Failover Cluster
2. Right-click the Scale-Out File Server and then select Add File Share (Figure 2-78).
Figure 2-78 Click Add File Share to create a new share
Note: As the Scale-Out File Server registers the associated DNS entries in Active
Directory, propagating the changes through the network might take some time. If you
see an error when trying to add a new file share, simply wait several minutes and retry.
3. Select the share profile (Figure 2-79). This example uses the Quick profile, but advanced settings can be configured later.
Figure 2-79 Select a profile for the share
4. Click the Scale-Out File Server and then select the Cluster Shared Volume that was
created in “Add volumes to the failover cluster” on page 69 (Figure 2-80).
Figure 2-80 Click to associate the Scale-Out File Server with the Cluster Shared Volume
5. Specify the share name and description (Figure 2-81).
Figure 2-81 Specify the share name
6. Select these check boxes (Figure 2-82): Enable access-based enumeration and
Enable continuous availability. Then, click Next.
Figure 2-82 Enable share settings
7. Define the share permissions (Figure 2-83).
Figure 2-83 Specify share permissions
8. Review and confirm the share settings and then click Create (Figure 2-84).
Figure 2-84 Review share settings before clicking Create
9. When the task is completed, the results are displayed (Figure 2-85).
Figure 2-85 The share was successfully created
10. The scale-out file share is now available for use (Figure 2-86).
Figure 2-86 Network clients can now access the file share
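A continuously available share on the Cluster Shared Volume can also be created with New-SmbShare. The folder path, share name, and security group in this sketch are placeholders:

```powershell
# Create a folder on the CSV to back the new share.
New-Item -Path "C:\ClusterStorage\Volume1\Shares\AppData" -ItemType Directory

# Share it with the settings chosen in the wizard:
# continuous availability and access-based enumeration.
# The group name is a placeholder for your environment.
New-SmbShare -Name "AppData" -Path "C:\ClusterStorage\Volume1\Shares\AppData" `
    -ContinuouslyAvailable $true -FolderEnumerationMode AccessBased `
    -FullAccess "VS02\FileServerAdmins"
```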
2.3.7 Creating a File Server for general use
To create a File Server for general use, present a volume from the Storwize V5000 storage
system, as described in 2.3.6, “Preparing Volumes for use in file shares” on page 65.
Assuming a dedicated volume is mapped to the Windows host cluster, complete these steps:
1. Expand the Storage menu in the failover cluster, select Disks and click Add Disk
(Figure 2-87).
Figure 2-87 Click Add Disk to allocate storage to the Failover cluster
2. In the list of available disks, select the new volume (Figure 2-88).
Figure 2-88 Select the new volume from the available disks
3. Ensure the volume was successfully registered as available storage in the disk list
(Figure 2-89).
Figure 2-89 The new volume is listed as available storage
4. In the failover cluster inventory, right-click Roles and select Configure Roles
(Figure 2-90).
Figure 2-90 Configure the new File Server for general use
5. Select File Server from the list of available roles (Figure 2-91).
Figure 2-91 Select File Server
6. In the File Server Type panel, select File Server for general use (Figure 2-92).
Figure 2-92 File Server for general use
7. Specify a network name and IP address for the File Server (Figure 2-93).
Figure 2-93 Specify the name and IP address that clients will use to connect
8. Select the recently added storage volume (Figure 2-94) and click Next.
Figure 2-94 Select the check box that is associated with the new volume
9. Review the confirmation summary of the File Server role (Figure 2-95), and then click
Next.
Figure 2-95 Review the configuration summary
10. High availability is configured automatically and the final summary is displayed (Figure 2-96). Click Finish to complete the process.
Figure 2-96 File Server for general use summary
Both File Servers are now listed as failover cluster roles (Figure 2-97).
Figure 2-97 Both File Server roles are now active
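For reference, the File Server for general use role can also be created from PowerShell in one step. The role name, cluster disk resource, and IP address below are placeholders for your environment:

```powershell
# Create a clustered File Server for general use with its own
# network name, backing storage, and static IP address.
Add-ClusterFileServerRole -Name VS-FS01 -Storage "Cluster Disk 2" `
    -StaticAddress 192.168.10.50
```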
Create file shares for File Servers for general use
Create a new file share on the general use File Server:
1. In the Roles view of the Failover Cluster Manager, right-click the associated role and
select Add File Share (Figure 2-98).
Figure 2-98 Add File Share to the File Server
2. Select the profile for the share (Figure 2-99). Advanced features can be subsequently
configured.
Figure 2-99 Define the share profile
3. Select the server and the storage allocation for the new file share (Figure 2-100), and then
click Next.
Figure 2-100 Define the underlying storage for the file share
4. Specify the share name and description (Figure 2-101), and then click Next.
Figure 2-101 Specify share name and description
5. Configure the share settings, for example, enable access-based enumeration and
continuous availability (Figure 2-102).
Figure 2-102 Configure share settings
6. Configure the share permissions (Figure 2-103).
Figure 2-103 Configure share permissions
7. Confirm the share settings (Figure 2-104), and then click Next.
Figure 2-104 Confirm selections
Progress of the share creation is displayed (Figure 2-105).
Figure 2-105 Share creation progress
8. The general use file share is now available as a network resource (Figure 2-106).
Figure 2-106 File share for general use is available on the network.
The installation is now complete.
Back cover
REDP-5442-00
ISBN 0738456314
Printed in U.S.A.
ibm.com/redbooks