file area networks
Your first look at FAN technology
Copyright © 2007 Brocade®. All rights reserved.
Brocade and the Brocade B-wing symbol are registered trademarks of Brocade
Communications Systems, Inc., in the United States and other countries. All
other brands, products, or service names are or may be trademarks or service
marks of, and are used to identify, products or services of their respective companies.
No part of this book shall be reproduced or transmitted in any form or by any
means, electronic, mechanical, magnetic, photographic including
photocopying, recording or by any information storage and retrieval system,
without express prior written permission from Brocade. No patent liability is
assumed with respect to the use of the information contained herein. Although
every precaution has been taken in the preparation of this book, the publisher,
the author, and Brocade assume no responsibility for errors or omissions.
Neither is any liability assumed for damages resulting from the use of the
information contained herein. This material is subject to change without notice.
Brocade Bookshelf™ Series designed by Josh Judd
Introducing File Area Networks
Written by Michael O’Connor and Josh Judd
Edited by Victoria Thomas, Kent Hanson, and Josh Judd
Design and Production by Victoria Thomas
Illustrations by David Lehmann
Printing History
Advance Edition in March 2007
First Edition in May 2007
Published by:
1094 New Dehaven St.
West Conshohocken, PA 19428
Toll-free: (877) BUY-BOOK
Local Phone: (610) 941-9999
Fax: (610) 941-9959
Important Notice
Use of this book constitutes consent to the following conditions. This book is
supplied “AS IS” for informational purposes only, without warranty of any kind,
expressed or implied, concerning any equipment, equipment feature, or
service offered or to be offered by Brocade. Brocade reserves the right to
make changes to this book at any time, without notice, and assumes no
responsibility for its use. This informational document describes features that
may not be currently available. Contact a Brocade sales office for information
on feature and product availability. Export of technical data contained in this
book may require an export license from the United States government.
Brocade Corporate Headquarters
San Jose, CA USA
T: (408) 333 8000
Brocade European Headquarters
Geneva, Switzerland
T: +41 22 799 56 40
Brocade Asia Pacific Headquarters
T: +65 6538 4700
Special thanks are due to Mike Klayko and Tom Buiocchi for executive-level support.
Concepts, diagrams, and chunks of text were adapted from slides, manuals,
brochures, and white papers prepared by Brocade Education, Technical
Support, Professional Services, Engineering, and Marketing. Content was also
adapted from the books “Principles of SAN Design” and “Multiprotocol Routing
for SANs” by Josh Judd. (Both of these are available from “Buy Books On The
Web,” and from most other major retail book outlets.) Content
was adapted from documentation originally produced by the Brocade Houston
team and other Brocade application product teams. In particular, Rahul Mehta
and his team provided many pages of source content, including diagrams, text,
and lists of key concepts.
This book would not have been possible without reviews from Kent Hanson,
Victoria Thomas, Mike Schmitt, and Martin Skagen.
Finally, the authors would like to acknowledge the hard work of the Brocade
Engineering teams, without whom there would be no need for a book on FAN
technology, as there would be no products about which to write.
About the Authors
Michael O’Connor is a Senior Technical Marketing Engineer in Brocade
Technical Marketing. In addition to writing about best practices, he works on
future technologies and proofs-of-concept as well as putting out fires. He also
works with SEs and end users worldwide.
Before working at Brocade, Mike spent ten years at Sun Microsystems in the
network storage group. He also has many technical certifications, and a
couple of advanced college degrees. If you ever go to his “cave”, you will see
that he has his own ISP setup right next to the TV.
To Mom, Jordan, Croft, Miss B. and Tuhan
Josh Judd is a Principal Engineer in Brocade Technical Marketing. In addition
to writing, he provides support for roadmap activities, develops new product
requirements, and works directly with systems engineers, OEM partners and
end users worldwide.
When he first went to work for Brocade, Josh was the company’s senior IT
technical resource, responsible for the architectural design of all network,
server, and desktop infrastructure worldwide, as well as for escalations. His previous
experience, degree, and certifications are IT-related. He was the first Senior
SAN Architect at Brocade, which means that he was one of the first full-time
professional Fibre Channel (FC) SAN designers in the world.
About the Book
This book contains information about File Area Networks (FANs) in general,
and specific information about FANs built with Brocade products. It is also
designed to be useful as a desktop reference for FAN administrators.
Information from many white papers, classes, and the authors’ experience has
been combined to create this work.
This book is appropriate for:
• Administrators responsible for managing or deploying FANs
• Systems Engineers who design and deploy FANs
• OEM personnel involved in selling or supporting FANs
• Analysts needing an understanding of the FAN market
• Network Engineers wishing to expand their skills
We welcome your comments on this book. When sending feedback, include the
book title, edition, and publication date. If applicable, include the page
number and paragraph to which each comment applies.
Chapter 1: FAN Basics
    File Area Networks
    FAN Drivers
    FAN vs. SAN
    FAN Support Products
        Underlying Network Components
        RAID Arrays
    FAN Protocols
        IP and Ethernet
        Network File Systems
    Chapter Summary
Chapter 2: FAN Solutions
    Storage Consolidation
    Namespace Globalization
    Data Migration
    Disaster Recovery/Business Continuance
    WAN Performance Optimization
    Chapter Summary
Chapter 3: Building Blocks
    Brocade StorageX
    Brocade WAFS
    Brocade FLM
    Brocade MyView
    Brocade UNCUpdate
    Brocade Unified FAN Strategy
    Customer Case Studies
        StorageX for Data Replication and Protection
        StorageX and WAFS Working Together
    Chapter Summary
Chapter 4: Design Considerations
    Compatibility
    Network Topologies
        Topology Names
        Overlay Networks
    Reliability, Availability, and Serviceability
        Reliability
        Availability
        Serviceability
    Performance
    Scalability
    Total Solution Cost
    WAN
        General Distance Considerations
        Data Migration Considerations
        Disaster Recovery Considerations
    Implementation and Beyond
        Rack Locations and Mounting
        Power and UPSs
        Staging and Validation
        Release to Production
        Day-to-Day Management
    Planning for Troubleshooting
    Chapter Summary
Chapter 5: Brocade StorageX
    Deployment Examples
        Single User Drive Mapping Proliferation
        Multi-Department Drive Mapping Inconsistency
        Drive Mapping and Communications
        Infrastructure Changes
        Storage Optimization
        Data Lifecycle Management
    Product Architecture
        Client/Server Design
        GNS Deployment
        Replication Technology
    Data Migration Tasks
    Data Migration Methods
        Migration Procedures with Unchanging Structures
        Migration Procedures with Changing Structures
        Migration Procedures with Server Consolidations
    StorageX and NetApp Storage Device Integration
        NetApp Filer in the Physical View tree
        NetApp vFiler in the Physical View tree
    Troubleshooting
        Installation Problems
        Namespace Problems
        Replication Problems
    Chapter Summary
Chapter 6: Brocade WAFS
    WAFS Business Case
    Challenges to Centralization
        WAN Latency and Protocol Design
        Lack of Bandwidth
        Lack of Data Integrity
        Residual Branch Office Servers
        Summary: Workarounds Don’t Work
    Brocade WAFS Architecture
        Node Types
        Core Technology
        Performance Architecture and Benefits
    Availability and Integrity Benefits
        End-to-End Security
        Full CIFS Disconnection Support
    Other Architectural Benefits
        Single Instance Storage
        Transparent Pre-Population
    Edge Office IT Services
        Print Services
        Domain Controller Services
        Network Services
        Web Caching Services
        Management Services
    Protocol Acceleration
        TCP Acceleration
        WAFS Transport Protocol Acceleration
    Data Reduction
        Data Compression
        Application-Aware Data Reduction
    Deploying WAFS
        At the Data Center
        At the Branch Office
        High Availability Deployments
        Flexible Platform Options
        Platform Hardware
    Chapter Summary
Chapter 7: Brocade MyView
    System Requirements
        MyView Server and Client Requirements
        MyView Database Requirements
        MyView Disk Space Requirements
    Namespace Design Considerations
        Reduce Complexity
        Consider Reliability
        Cascaded Namespaces
        DFS Namespace Sizing
    Namespace Implementation
    Chapter Summary
Chapter 8: Brocade FLM
    Effect of Deploying FLM
    FLM In Action
    FLM Policies
    FLM and Backups
    FLM Deployment Tips and Tricks
        When to Deploy FLM
        General Tips
        Performance and Scalability Tips
        Setting the Offline Attribute
        Auto-Remigration and Auto-Exclusion
        Security Tips
        Implementation and Management
        Data Availability and Recovery
        Communicating with Users Before Rollout
    Chapter Summary
Chapter 9: Brocade UNCUpdate
    System Requirements
    How It Works
    Deploying UNCUpdate
        Command-Line Interface
        The Desktop
        Search and Modify Modes
        The Directives File
        Templates
        Reports
        Important Notes
    Troubleshooting
    Chapter Summary
Appendix A: Reference
    Ethernet and IP Network Equipment
        Ethernet L2 Edge Switches and Hubs
        IP WAN Routers
    Storage Equipment
        RAID Arrays
Appendix B: Namespace Requirements
Appendix C: WAFS Sizing Guidelines
    Supported TPA Connections with WAFS 3.4
        Core Node
        Edge Node
    WAFS Sizing Guidelines for CIFS
        Core Node
        Edge Node
        WAN Throughput
Glossary
Figure 1. High-level view of a FAN
Figure 2. FAN and SAN architectures in concert
Figure 3. White space utilization in a DAS environment
Figure 4. Network file-system mapping
Figure 5. Global namespace
Figure 6. Multi-site GNS Scenario
Figure 7. DR failover using a FAN (overview)
Figure 8. DR failover using a FAN (detailed)
Figure 9. High-level WAFS architecture
Figure 10. Brocade WAFS Appliance
Figure 11. Brocade Unified FAN Strategy
Figure 12. Acme’s file data replication and protection strategy
Figure 13. ABC’s enterprise data consolidation and protection strategy
Figure 14. Horizontal redundant relationships
Figure 15. Bad rack implementation (airflow problem)
Figure 16. Too many drive mappings
Figure 17. Drive mappings consolidated
Figure 18. Inconsistent mappings
Figure 19. Inconsistent mappings and communications
Figure 20. Migrating to higher-capacity storage
Figure 21. Optimizing capacity utilization with StorageX
Figure 22. Data lifecycle management concept
Figure 23. StorageX applications
Figure 24. Client/server architecture
Figure 25. Namespace Creation Wizard
Figure 26. StorageX administration console
Figure 27. Creating a DFS structure
Figure 28. Replication technologies
Figure 29. Replication engine locations
Figure 30. Data integrity options for replication
Figure 31. BFDR methodology
Figure 32. Heterogeneous replication
Figure 33. Managing replication
Figure 34. Deleting orphans
Figure 35. Categories of migration: change or no change
Figure 36. Example of changing structure migration
Figure 37. Migration using consolidation root
Figure 38. Configuring filer management in StorageX
Figure 39. Configuring volumes on a filer
Figure 40. Snapshot schedule initiated from StorageX
Figure 41. Snapshot scheduling properties and NetApp filer scheduling
Figure 42. WAFS product architecture
Figure 43. Exemplary WAFS deployment
Figure 44. Write-back file locking architecture
Figure 45. WAFS security advantages
Figure 46. Using WAFS to consolidate services in a branch office
Figure 47. TCP performance in a LAN vs. a WAN
Figure 48. TCP protocol acceleration impact on applications
Figure 49. CIFS over a WAN without acceleration
Figure 50. CIFS over a WAN with acceleration
Figure 51. Impact of acceleration on file save time
Figure 52. Dictionary Compression
Figure 53. Mapping Drives with WAFS
Figure 54. WAFS plus WCCP Redirection
Figure 55. MyView Desktop
Figure 56. Inactive vs. active data
Figure 57. Impact on file storage utilization of deploying Brocade FLM
Figure 58. How FLM works
Figure 59. User view of relocated files
Figure 60. FLM Policy Engine
Figure 61. FLM impact on backups
Figure 62. Brocade UNCUpdate client, Configuration tab
Figure 63. Tasman Networks WAN Router
Figure 64. Foundry Networks Modular Router
FAN Basics
This chapter covers some of the basics upon which the remainder of
the book is built. This includes a discussion of what File Area Networks
(FANs) are, why they are beneficial, and some of the protocols and
products that can be used to create the underlying FAN infrastructure.
For readers who are already familiar with these concepts, this chapter provides a review. It includes the following sections:
• “File Area Networks”
• “FAN Drivers”
• “FAN vs. SAN”
• “FAN Support Products”
• “FAN Protocols”
• “Chapter Summary”
File Area Networks
A large and growing percentage of corporate data takes the form of
files. This includes unstructured data organized within a file system.
File data is generally different from “raw” block data, which might be
used for a back-end database on a business system or enterprise email server.
All file data is ultimately stored as block data on the other side of a file
system, whether it is located on a PC, a server, or a Network-Attached
Storage (NAS) appliance. All files are blocks, although not all blocks
are files. The key differentiator is that file data is accessed as a file by
the end user—such as text documents or slide presentations. Block
data is accessed as “raw” blocks, generally by an application such as a
database, and not by an end user.
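The distinction can be illustrated with a short sketch (a hypothetical example, not from Brocade; the file name is invented): the same bytes can be reached as a named file through the file system, or as raw bytes at an offset, the way block-oriented software sees storage.

```python
import os

# File-level access: the file system resolves a name to its data.
with open("report.txt", "w") as f:
    f.write("quarterly numbers")

with open("report.txt") as f:
    data = f.read()  # accessed as a file, by name, as an end user would

# Block-style access: storage viewed as raw bytes at an offset.
# (A database would address a raw device such as /dev/sda; here we
# reuse the file itself so the sketch stays runnable.)
fd = os.open("report.txt", os.O_RDONLY)
os.lseek(fd, 10, os.SEEK_SET)   # seek to a byte offset, not a name
raw = os.read(fd, 7)            # read "raw" bytes at that offset
os.close(fd)

print(data)  # quarterly numbers
print(raw)   # b'numbers'
```

The point of the sketch is the addressing model: the first path asks the file system for a named object, while the second asks for bytes at a position, with no notion of files at all.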
Chapter 1: FAN Basics
Given the growth in the number of files that IT departments need to
manage, the increasing complexity of file systems, and the large-scale
applications that use files, it is clear why file management has gained
prominence. This is especially true when implementing application-level disaster recovery and Information Lifecycle Management (ILM).
Such initiatives have created the need for a new type of file management: the File Area Network.
The FAN concept requires an increasingly sophisticated suite of file
management technologies, including file-level descriptions and classification to attach policies to file data. To address this need, the
industry is developing a wide range of products and services that help
streamline the management of file-based data as part of an enterprise
FAN. This book focuses on the FAN products offered by Brocade Communications Systems, Inc.
The network portion of a FAN is the pre-existing corporate IP network.
In addition, a FAN makes use of one or more upper-layer network file
system protocols such as the Network File System (NFS) and the Common Internet File System (CIFS). The FAN, however, is distinguished
from the underlying network that transports it: the term “FAN” is a logical way to describe a holistic approach to implementing file-based
data connectivity, storage, and management. This is similar to other
layered network models. For example, storage networking traffic can
traverse Fibre Channel (FC) fabrics, Dense Wavelength Division Multiplexing (DWDM) Metropolitan Area Networks (MANs), and IP Wide Area
Networks (WANs); yet the Storage Area Network (SAN) is still the SAN
even when it sits on top of something else.
The goal of a FAN is to provide a more flexible and intelligent set of
methods and tools to move and manage file data in the most cost-effective and controlled manner. To accomplish this, FANs provide several key functions:
Enterprise-wide control of file information, including the management of file attributes
The ability to establish file visibility and access rights regardless of
physical device or location
Non-disruptive, transparent movement of file data across platforms and/or geographical boundaries
The consolidation of redundant file resources and management
The ability to support file data management in both data centers
and branch offices
A high-level view of a FAN is illustrated in Figure 1.
Figure 1. High-level view of a FAN
The key to this architecture is that the FAN provides “coalescence”
between files stored in different locations and file consumers (clients).
In current data centers, separate storage devices are truly separate,
which is why administrators typically spend so much time mapping
drive letters and handling complex change control processes during
migrations. The coalescence principle means that a FAN groups separate components into one unified file space. There are a number of
administration objectives that are facilitated by this:
Make file location and movement transparent
Centralize file storage and management for efficiency
Reduce the cost of remote data backup
Intelligently migrate files based on policies
Consolidate branch office IT infrastructure
Comply with regulations and corporate objectives
To meet these objectives, many products and services operate in the
FAN, such as a namespace unifier, file routing engines, metadata management, and remote office performance optimizers. This book
discusses those products and services, how they work, and how best
to deploy them.
FAN Drivers
Before discussing how FAN technology works, it is useful to understand why FAN technology is needed. As the amount of file data in the
enterprise grows exponentially year over year, as it has over the past
few years, the need for file networking also increases to meet the following challenges.
Storage Management. Where does data reside? How does it get
moved? How do people find it after it gets moved? With data spread
across potentially hundreds of locations in an enterprise, how do IT
departments manage drive letter mappings? These and other issues
add an enormous amount of complexity to managing a storage environment. Administrators need to decide how to automate and use
policies to manage storage infrastructure, and ideally find a way to
manage Direct-Attached Storage (DAS), NAS, SAN, and the files stored
on those devices from a single location.
Storage Consolidation. It is desirable to consolidate many scattered
storage resources into fewer centralized locations. Indeed, shared
storage will have an increasingly important role in driving next-generation efficiencies across the enterprise to simplify management and
optimize white space—the portion of a given disk that is not used for
storing data. A very large portion of the data being consolidated consists of files, so an optimal consolidation solution must be intelligent
at the file level.
Business Continuity/Disaster Recovery. More and more organizations
are preparing for disasters, in many cases driven by laws and regulations, and in other cases by fiduciary duty to investors. One goal of
solutions in this category is to minimize client downtime during an outage, but in all cases it is necessary to ensure the availability of
mission-critical data. Given that much, if not most, of the data being
protected consists of files, this is a natural fit for FAN technology.
Storage Performance Optimization. When too many users access a
single device, performance degradation inevitably results. To solve the
problem, it is desirable to load balance across multiple storage
devices. However, this can be tricky and time consuming to accomplish
without automation, and doing it at all requires knowledge of file
access patterns.
Data Lifecycle Management. Data Lifecycle Management (DLM)
requires a method for creating storage tiers and aligning archival policies with regulatory compliance requirements. Additional issues
include how to streamline the backup process and how to optimize the
use of high-end storage subsystems. Since most of the data being
managed consists of files, most of the knowledge about which bits of
data need to “live” on what storage subsystems can be obtained only
by looking at file-level properties. Merely looking at data blocks on a
raw device will not help.
Remote Site Support. Managing remote site primary storage and
backup can be a full time role. Traditional methods of centralizing
management of highly distributed data can be equally problematic, for
example, in terms of performance and availability. File-level tools are
needed to centralize data for ease of management, while maintaining
performance and file availability for remote users.
Data Classification and Reporting. Knowledge is power. Discovering
storage capacity utilization and determining the business value of data
is necessary in order to make good decisions about where to put data,
how to protect it, and what kinds of new storage devices to purchase.
As in other areas, most of the knowledge needed to classify data and
most of the information needed in reports are related to the file-level
properties of the data.
This section illustrates some of the reasons why it is necessary for
storage managers to move up the protocol stack all the way to the file
level. It also explains the source of a common misconception: that File
Area Networks are a technology in opposition to Storage Area Networks.
In reality, the two technologies are more than complementary; they are
actually symbiotic. SANs are a requirement for the most robust FAN
solutions, and FAN solutions consist of tools that SANs simply cannot
provide. The fact that FANs make management easier at the file level
allows for continued growth in data on the underlying storage subsystems … which are usually SAN attached. It is important to
remember this: all file data is ultimately stored in block format, and
block data is optimally stored on a SAN.
Figure 2. FAN and SAN architectures in concert
FAN Support Products
FAN technology sits on top of other network infrastructure. Broadly
speaking, there are seven major components to a FAN solution. Refer
to Figure 2 while reading this list and notice where each element sits
in the diagram:
Clients that access files
Connectivity between clients and the file servers, which also
allows clients to access namespace services
Policy-driven file management and control, to align file locations
and properties with business requirements
A namespace, with associated software and hardware to serve
and maintain it
Devices to serve files, for example, NAS heads and file servers
File systems residing on these servers
Back-end storage devices, with an optional SAN
Each of these elements requires underlying support technology. This
section discusses some of those technologies. Many comprehensive
books have already been written about networking, and most readers
are already familiar with networking products. Therefore this section
provides only a brief review and discussion of how these products
relate to FAN architecture specifically.
Underlying Network Components
A network hub or switch allows connectivity between its ports such
that any port can “talk” to any or all of the other ports. Switches and
hubs are both “Layer 2” devices (L2) in IP terminology. For example, in
an IP/Ethernet network, switches operate at the Ethernet layer, which
is the second layer in the Internet Protocol layered model.
Switches are distinguished from hubs such that switches do not have
a “shared bandwidth” architecture. In a hub configuration, if two ports
are talking to each other, it precludes other ports from talking at the
same time. There is only one port's worth of bandwidth, which is shared among all nodes. If a pair of devices talk at full speed to each other on
a hub, other devices can be precluded from talking at all until they are
finished. On the other hand, connectivity on a switch is allowed regardless of activity between any unrelated pair of ports. It should not be
possible for one I/O pattern on a switch to “starve” another for bandwidth. This is one reason why Fibre Channel switches were successful
in the SAN marketplace and FC-AL hubs quickly became obsolete: it is
unacceptable for a host to be denied access to its storage for any
length of time, and this happens more often than not with hubs.
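The bandwidth contrast just described can be sketched as a toy calculation. The link speed and conversation counts below are hypothetical, chosen only to illustrate the shared-versus-dedicated distinction, not measurements of any real device:

```python
# Toy model of per-conversation bandwidth on a hub vs. a switch.
# Hypothetical 1 Gbit/sec link speed; illustrative numbers only.

LINK_SPEED_GBPS = 1.0

def hub_bandwidth(conversations: int) -> float:
    """A hub is one shared collision domain: all conversations
    divide a single port's worth of bandwidth."""
    return LINK_SPEED_GBPS / conversations

def switch_bandwidth(conversations: int) -> float:
    """A non-blocking switch gives each port pair a dedicated path,
    so every conversation runs at full link speed."""
    return LINK_SPEED_GBPS

print(hub_bandwidth(4))     # 0.25 -- each pair gets a quarter of the link
print(switch_bandwidth(4))  # 1.0  -- each pair gets the full link
```

As the conversation count grows, the hub's per-pair share keeps shrinking while the switch's does not, which is the "starvation" risk the text describes.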
A router is similar to a switch in that it provides potential data paths
between all of its ports, but a router operates at a higher layer in the
protocol stack. If the switch operates at the Ethernet layer (L2), then a
router operates at the IP layer (L3). This allows a router to connect
autonomous or semi-autonomous network segments together in a
hierarchical structure, rather than a flat one.
Historically, routers were much slower than switches and were not
available for high-performance applications. In fact, many IP routers
were implemented in software rather than hardware. Modern IP networks generally use Layer 3 switches, which combine the hardware-accelerated speed of Layer 2 switching with the intelligence of Layer 3
routing in a single integrated platform.
In a SAN, the reliability and performance requirements for switches
and routers are strict. The network is expected to deliver every frame
without fail or delay except under rare and extreme circumstances,
and to deliver all frames in order under virtually every condition. This is
because the nodes and applications attached to a SAN were designed
for direct attachment, where delay, out-of-order delivery, and frame
loss simply do not occur. Any working SAN must make storage look as
if it were directly attached from the point of view of each host, so that
the Small Computer Systems Interface (SCSI) protocol layer can work
exactly the same way for SAN as it does for directly attached storage
devices. As a result, hubs are almost never suitable for SAN use, and
the vast majority of production SANs use Fibre Channel fabrics rather
than IP/Ethernet. For production SANs, the breakdown at the time of
this writing is more than 99% FC versus less than 1% IP.
In contrast, network file system protocols such as NFS and CIFS were
designed with the assumption that the network could drop perhaps 1%
of packets on a regular basis—since IP networks often did that until
very recently—and are still far less reliable than FC fabrics. In addition,
the protocol designers assumed that performance on IP networks
would be erratic and low compared to direct-attached storage, again,
because IP networks behaved that way when the protocols were
designed. With upper-level protocols architected to compensate for
unreliable and slow underlying infrastructure, the requirements for
transport are comparatively relaxed. It is therefore not surprising that
the protocol of choice for FAN transport is IP, usually over Ethernet.
Indeed, FANs are almost always built on top of the existing commodity
Ethernet gear already in place in an enterprise, rather than the
switches and routers built for block storage requirements. The breakdown of Ethernet versus FC deployments for network file systems is
exactly the opposite of the breakdown for block-level storage networks.
The conclusion of this discussion is that most FAN deployments will
make use of IP/Ethernet gear. The network storage devices on the FAN
will always have a block-level back-end, which will often be connected
via a SAN. In that case, the back-end will almost always be Fibre Channel.
RAID Arrays
Redundant Array of Independent Disks (RAID) subsystems have a set of
physical disks, which are “hidden” behind one or more RAID controller
interfaces. The controllers present hosts with logical volumes that do
not need to map directly to the physical disks. That is, the “picture” of
the storage looks different to a host versus the disks which are physically present in the RAID array. They group together physical disks to
form logical volumes. This can be as simple as concatenating disks
together so that many small disks appear to be a few large volumes, or can involve complex layouts with redundancy and performance enhancements.
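The simple concatenation case can be sketched as a short address-translation function. This is a deliberately simplified model (real RAID controllers add striping, caching, and redundancy), and the disk sizes are hypothetical:

```python
# Map a logical block address (LBA) on a concatenated volume to a
# (disk index, physical block) pair. Simplified illustration only;
# real controllers layer striping, parity, and caching on top.

def locate_block(lba: int, disk_sizes: list[int]) -> tuple[int, int]:
    """Walk the concatenated disks until the LBA falls inside one."""
    offset = lba
    for disk, size in enumerate(disk_sizes):
        if offset < size:
            return disk, offset
        offset -= size
    raise ValueError("LBA beyond end of volume")

# Three small 100-block disks appear to the host as one 300-block volume.
disks = [100, 100, 100]
print(locate_block(50, disks))   # (0, 50)
print(locate_block(150, disks))  # (1, 50)
print(locate_block(299, disks))  # (2, 99)
```

The host only ever sees logical addresses 0 through 299; the controller decides which physical spindle actually holds each block.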
RAID arrays form the bulk of mass storage for the back-end of FAN
solutions due to their high degree of configurability, enterprise class
performance, and high availability options.
RAID arrays are block devices. In a FAN, a RAID array must have some
sort of network storage front-end processor: a device that takes the
raw block-level data on the RAID volumes, configures it as a “cooked”
file system, and presents that file system to a network interface using
a protocol such as NFS or CIFS. This could be a general purpose server
running a protocol stack, or special purpose appliance hardware. In
some cases, a RAID array will be built into the same platform as the
network storage processor. In larger-scale solutions, RAID arrays and
separate network storage processor nodes will be co-located on a
Fibre Channel SAN, which will provide block-level any-to-any connectivity between the processors and arrays. This solution offers the best of
both the block and file networking architectures, and is expected to be
the norm for enterprise-class FAN deployments. See Figure 2 on
page 6 for an example.
Of course, any storage device could be used on the back-end of a FAN.
This includes S-ATA drives inside servers, JBODs, tapes, solid state
media, and so forth. A detailed discussion of storage technology is
beyond the scope of this work. See the book "Principles of SAN Design" for more information on this topic.
FAN Protocols
The products discussed in the previous section rely on a protocol, or
rather on several protocols in combination. FAN designers must be
familiar with the characteristics of each protocol option when selecting
equipment and planning for performance, availability, reliability, and
future extensibility.
Protocols are behaviors that computers and network devices must follow in order to communicate. If devices on a network do not use the
same protocols, they cannot communicate. Imagine a person who
speaks only English trying to have a complex philosophical debate with
another person who speaks only Swahili. Indeed, it is often hard
enough to have a conversation if one person speaks American English
and the other learned English in the UK, or if one speaks Spanish as it is spoken in Mexico and the other as it is spoken in Spain. Similarly, network devices must speak
the same language (for example, English) and use the same unofficial
variations (for example, American English). This means that industry-wide agreement is required on both "official" standards, and "de
facto” standard implementation details.
Protocols apply at all levels of communication, from physical media
and cabling all the way up to the application level. Many protocols at
many levels are usually involved when two devices communicate. The
entire group of protocols is collectively referred to as a “protocol
stack”. Every piece of the stack must work properly for communication
to occur.
This section discusses some of the protocols that are relevant to file
networking today, focusing on networking protocols such as IP, Ethernet, Fibre Channel, NFS, and CIFS. The discussion starts at the lowest
level of the FAN stack and then works its way upwards.
IP and Ethernet
Internet Protocol (IP) is the standard for communication on the Internet and the de facto standard in corporate LANs for applications such
as e-mail and desktop Web servers. It is also the protocol of choice for
the front-end component of file networks.
In most Local Area Networks (LANs), IP is carried over Ethernet. Upper-level protocols such as NFS and CIFS are mapped on top of IP, usually
with Transmission Control Protocol (TCP) in between for error detection. An IPv4 address consists of four bytes, usually represented in
decimal format and separated by dots; "192.168.0.1" is one example of this standard format.
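The four-byte dotted-decimal format can be checked with a few lines of code. This is a minimal sketch of the parsing rule described above, not how a real IP stack handles addresses:

```python
def parse_ipv4(address: str) -> bytes:
    """Parse a dotted-decimal IPv4 address into its four raw bytes,
    raising ValueError for anything malformed."""
    parts = address.split(".")
    if len(parts) != 4:
        raise ValueError("IPv4 addresses have exactly four fields")
    values = [int(p) for p in parts]
    if any(v < 0 or v > 255 for v in values):
        raise ValueError("each field must fit in one byte (0-255)")
    return bytes(values)

print(parse_ipv4("192.168.0.1"))  # b'\xc0\xa8\x00\x01'
```

The result makes the "four bytes" point concrete: the human-readable dotted form is just a notation for a 32-bit value carried in every IPv4 packet header.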
There are advantages to IP when it is used in the way its designers
intended. For example, IP was designed to support very large, widely
distributed, loosely coupled solutions such as the Internet, and is
therefore optimized to solve this type of design problem. The specifications for IP mandated a loose coupling between IP subnets as the
most important design criteria. A given connection was considered
expendable as long as the overall network remained online. Fibre
Channel, in contrast, was designed with support for high-performance,
mission-critical storage subsystems as the most important factor. It
did not need to be as scalable but it did need to be extremely fast and
reliable compared to IP. Since upper-layer FAN protocols were
designed to use IP as a transport, the reliability and performance
issues inherent in IP do not pose the same challenges for FAN front-ends as for SAN back-ends.
Network File Systems
There are a number of options available to map raw block data on a
disk onto a cooked file system format. In Windows, New Technology
File System (NTFS) and File Allocation Table (FAT) are examples; in
UNIX, there are many more options: XFS, UFS, VxFS, and so on.
Similarly, there are many options for mapping a cooked file system
onto a network. However, two options dominate existing network file
systems: NFS and CIFS. This book will focus on NFS and CIFS.
The Network File System was developed by Sun Microsystems in the
1980s. It was the first widely deployed network file system. Today, it is
the most typical choice for UNIX systems, although it can also be used
for PC platforms with third-party software.
In an NFS environment, one machine (the client) requires access to
data stored on another machine (the server). The server can be a UNIX
host, a Windows server running third-party software, or an appliance.
The server runs NFS processes, either as a software-only stack (for
example, a daemon in UNIX) or as a hardware-assisted stack (for
example, a chip in an appliance). The server configuration determines
which directories to make available, and security administration
ensures that it can recognize and approve clients. The client machine
requests access to exported data. If the client is a UNIX machine, this
is typically done by issuing a mount command. Once the remote file
system is mounted, users can access the remote files as if they were
located on the client machine itself.
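The transparency described above is the key property: once the mount is in place, applications use ordinary local-file APIs with no knowledge that the data is remote. The sketch below illustrates this with a local stand-in directory so it runs anywhere; the mount point and server names mentioned in the comment are hypothetical:

```python
# After something like `mount server01:/export/projects /mnt/projects`
# (hypothetical names), applications read remote NFS files with the
# same file operations they would use for local storage.
from pathlib import Path

def read_report(mount_point: str, name: str) -> str:
    """Read a file from a mounted share exactly as if it were local."""
    return (Path(mount_point) / name).read_text()

# Local stand-in directory so this sketch runs without an NFS server:
demo = Path("/tmp/fan-demo")
demo.mkdir(exist_ok=True)
(demo / "report.txt").write_text("quarterly numbers")

print(read_report(str(demo), "report.txt"))  # quarterly numbers
```

Nothing in `read_report` changes if the path points at an NFS mount instead of a local directory, which is exactly why file movement behind the mount point is invisible to users.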
When designing a FAN with NFS, it is important to consider a number
of limitations and caveats. For example, NFS version 2 supported mapping only over User Datagram Protocol (UDP), not TCP. UDP over IP is a
stateless and comparatively unreliable option. TCP over IP is still far
less reliable than Fibre Channel, but it does deliver enough reliability
to allow NFS to work in a more scalable and flexible manner.
NOTE: NFS version 3 or higher is required for TCP support.
It is also necessary to consider access control limitations. NFS does
not provide a robust mechanism for granular definition of file access
privileges. This can be a particularly challenging issue when implementing a FAN with both NFS and CIFS. Using NFS version 4 will
address these limitations, but at the time of this writing, NFS version 4
is not widely deployed.
If you are interested in details of the NFS protocol implementation,
look up the following protocol specifications: RFC 1094, RFC 1813,
and RFC 3530.
The Common Internet File System was originally known as the Server
Message Block (SMB) protocol. It is a proprietary protocol developed
by Microsoft, used mainly for communications between Microsoft platforms and network storage devices. It can also be used for sharing
printers, serial ports, and other communications, but for the purposes
of FAN deployments, only the file sharing aspects are relevant.
It is possible to access CIFS file systems from UNIX platforms using
third-party software, and to present CIFS file systems from UNIX servers for use by PC clients in a similar manner. Because CIFS is
proprietary, both approaches rely on reverse engineering and have significant limitations and caveats.
Like NFS, CIFS was not originally written for TCP/IP. In fact, CIFS was
not originally written for IP at all: it was written for NetBIOS, which ran on top of NetBEUI, IPX/SPX, or TCP/IP. In enterprise-class deployments
on top of NetBEUI, IPX/SPX, or TCP/IP. In enterprise-class deployments
today, CIFS is almost always mapped directly on top of TCP/IP, and for
the vast majority of FAN solutions, this is the optimal approach.
NFS and CIFS Challenges
Both NFS and CIFS have a common set of challenges that relate to the
need for FAN technology.
For example, neither works well in a WAN environment. High latency
connections between clients and servers reduce performance and reliability. This has resulted in a sub-optimal decentralization of resources
in most large-scale environments. Some FAN components are
designed to solve this problem.
Also, both protocols are designed to map the mount point for a remote
file system (that is, the location on the client at which users see the
remote files) to the physical machine that serves the file data. In small
environments this can work well. However, when there are hundreds of
servers, each of which may have to be swapped out periodically, it can
be extremely difficult to manage mount point mappings. Addressing
this issue and its related problems is a cornerstone of the FAN model.
Finally, it is important to reiterate that both NFS and CIFS are cooked
file systems and that all files ultimately reside in raw block format on
storage. It is therefore necessary for designers to consider the architecture of the back-end block solution as well as the FAN. For this
reason, FAN designers should consult with SAN professionals. See the
books "Principles of SAN Design" and "Multiprotocol Routing for SANs"
for more information on this topic.
Chapter Summary
FAN applications run on top of NFS and/or CIFS, which usually run on
top of TCP over IP over Ethernet. Through this connection, the applications access files; and the underlying data within the file resides on
the back-end of servers or appliances. That connection, in turn, is generally handled by some form of SCSI mapping, such as SCSI over Fibre Channel.
FAN applications are designed to help IT professionals eliminate or at
least reduce the complexity of managing the ever growing amount of
file data in their environments. They can make file location and movement automatic and transparent, centralize resources, consolidate
management functions, reduce costs associated with backup and
recovery, and automate regulatory compliance tasks. The remainder of
this book discusses these FAN applications, how they work, and how to
deploy them effectively.
FAN Solutions
This chapter provides an overview of a few of the more popular solutions for which FANs are used and includes the following sections:
“Storage Consolidation” on page 16
“Namespace Globalization” on page 18
“Data Migration” on page 22
"Disaster Recovery/Business Continuance" on page 24
“WAN Performance Optimization” on page 27
“Chapter Summary” on page 29
Keep in mind while reading this chapter that it presents a small sample of possible solutions, and that a strategic investment in networking will result in new use cases surfacing. Even when a FAN is deployed for one specific application, once connectivity is in place, other applications are likely to migrate to the FAN over time.
It is worth noting that the “strategic investment” in FAN is not really an
investment in a network per se. Most companies already have the
underlying network in place, since FANs operate on a standard IP LAN/
MAN/WAN infrastructure. Similarly, most companies already have file
servers and/or NAS storage devices. The investment in FAN technology enhances the existing infrastructure, allowing cost reductions,
manageability improvements, and performance acceleration.
This chapter discusses how FAN solutions enhance an existing network to support the IT goals of the organization deploying them.
Subsequent chapters discuss specific Brocade FAN products, and how
to deploy and manage them.
Chapter 2: FAN Solutions
Storage Consolidation
Storage consolidation is the process of combining many scattered
storage resources into fewer centralized resources. This approach provides ongoing manageability benefits as well as direct cost savings.
The primary direct benefit of storage consolidation comes from more
efficient utilization of storage assets: in consolidated environments
there is less unused disk space overall and fewer arrays to buy and
manage. Furthermore, this means using less electricity to power disks,
less air conditioning to cool them, and—as a direct result—fewer carbon credits. Storage consolidation also allows administrators to work
more efficiently.
In DAS environments, each host needs to have its own storage. This
can be internal or external, but it cannot be shared easily with other
hosts or located far away from the host. Because of the potential, or
really inevitability, of unplanned increases in demand, each storage
device in a DAS environment needs to have substantial unused space
(known as “white space”) to allow for growth. Frequent application
downtime, a characteristic of DAS environments, is not usually acceptable, because new storage arrays cannot be added "live" to a DAS configuration.
Figure 3 shows how multiple DAS subsystems each have their own
individual white space areas, with different levels of utilization.
Figure 3. White space utilization in a DAS environment
In this diagram, each host has its own storage subsystem, illustrated
by a partially filled cylinder. The level of the fill indicates the level of utilization. The unused space, taken across all hosts, is about equal to
the total used space. This shows a 50% utilization of storage assets
overall, which means that half of the storage investment in this DAS
environment is a non-earning asset: white space “sits on the data center floor,” consuming power and cooling budget, and depreciating.
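The utilization math behind this picture is easy to reproduce. The capacities and reserve size below are hypothetical, chosen only to contrast per-host white space with a shared pool:

```python
# Compare storage utilization in a DAS layout, where each host keeps
# its own white space, against a consolidated pool. Illustrative
# numbers only, not drawn from any real environment.

das_hosts = {              # (capacity_gb, used_gb) per host
    "host-a": (500, 300),
    "host-b": (500, 150),
    "host-c": (500, 300),
}

total_capacity = sum(cap for cap, _ in das_hosts.values())
total_used = sum(used for _, used in das_hosts.values())
print(total_used / total_capacity)   # 0.5 -- half the investment is idle

# Consolidated: the same used data plus one shared growth reserve,
# instead of a worst-case reserve on every host.
shared_reserve_gb = 250
pooled_capacity = total_used + shared_reserve_gb
print(total_used / pooled_capacity)  # 0.75 -- far less idle capacity
```

The shared reserve can be smaller than the sum of the per-host reserves because it only has to cover the hosts that actually grow, not every host's worst case at once.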
The reason that white space in DAS environments tends to be high is
that there is no way for a host in need of storage to access the white
space located on storage attached to a different host, and therefore
each host needs its own dedicated pool of white space to be sized
according to its projected worst-case need. Most hosts never actually
use that space, but it has to be there to prevent downtime for occasions when it is needed.
In a FAN, however, the major portion of white space can be kept in a
central pool, since any host can access any storage device to get at
free space when it is needed. Any host with an occasional need for
storage can get more out of the central pool on the fly. Some white
space is still needed, but utilization is at a much higher rate, which
means less money spent on non-earning assets.
Not only does the FAN version have higher utilization and therefore
lower cost, but it also consolidates storage subsystems into fewer
devices. Fewer devices to manage means lower power consumption,
less load on the data center cooling infrastructure, reduced carbon
usage, simplified support contracts, more efficient depreciation
cycles, and more efficient use of administrator time.
Because of its compelling and quantifiable value proposition, consolidation is the single most common use case for file networks today.
Designers include storage consolidation as an integral part of most
FAN solutions, even when they have a different primary design objective.
It is not always possible or even desirable to collapse all storage into
one subsystem. Particularly in enterprise environments, there may still
be dozens or even hundreds of arrays. However many subsystems are
required in a storage consolidation solution, it is always fewer than
would have been needed with a DAS approach, and those subsystems achieve greater utilization. Similarly, it is never possible to truly finish a consolidation project unless the organization performing the project stops
growing and stops changing. This means that there will still be multiple
storage subsystems and data will still periodically move between
them, even after a consolidation project is considered to be complete.
So far, this chapter does not describe anything that could not have
been accomplished with traditional technologies. Fibre Channel SANs
in particular have been enabling storage consolidation solutions for
more than a decade. Indeed, the majority of enterprises today have a
storage consolidation approach of one sort or another. On the surface,
it might not appear that storage consolidation is a use case for FAN
technology simply because other technologies already solve the problem. That being said, FANs enhance storage consolidation solutions in
several important ways.
For example, non-FAN approaches can experience the administrative
difficulty of adding and removing storage. If hundreds of users are
mapping their “Q:” drive to a particular NAS head, and IT needs to
remove that head, somebody may need to “touch” all of the user systems. At best, login scripts will need to be adjusted and the change will
need to be communicated outside the IT group—which will inevitably
result in user complaints. Migrating data during a consolidation effort
is also problematic for similar reasons. FAN technology provides a set
of tools that fill these and other gaps and provides a complete consolidation solution. The next sections discuss some of the tools FAN
applications use to accomplish this.
Namespace Globalization
The cornerstone of the FAN approach is the use of a Global
Namespace (GNS) to virtualize users’ views of storage servers. In
short, a GNS is to a network file system as Domain Name System
(DNS) is to Internet services such as the World Wide Web. Most people do not know the numerical IP address of a given Web site, yet millions of people use such sites every day. Using a GNS makes storage infrastructure similar to
the Internet, that is, users access files in much the same way as they
access Web sites.
To understand the way in which a GNS does this, first it is useful to
consider what IT departments and end users do without a GNS.
Figure 4 on page 19 illustrates the traditional approach to mapping
mount points and drive letters across a network. Each mount point is
“connected” to a physical file server. These relationships are usually
created and maintained by the IT organization. If files are moved, the
mapping will need to be changed on client machines. While there are
methods for automating this, none are problem-free. Typically, users
are required to log off or reboot. Administrators inevitably have to field
questions from confused users, or troubleshoot clients when the automated solution does not work.
Troubleshooting with this approach is notoriously difficult. In a large
environment, there are more file servers than letters in the alphabet.
This creates a problem for Windows administrators: it means that it is
not possible for all clients to have the same drive letter mappings. For
some users, drive “E:” refers to one file server; and for others, it
means something else entirely. When a user calls support and says, “I
can’t see my E: drive,” it is difficult for the help desk to help. It is
unlikely that the user knows where “drive E:” is actually mapped, and
equally unlikely that the help desk person can guess.
Figure 4. Network file-system mapping
Instead of mapping mount points to physical NAS heads and file servers, a GNS creates a directory structure that makes logical sense to
users and to the IT department, and maps user requests to the correct
physical location to get the data. It creates a layer of abstraction
between users’ perception of file systems and the physical location of
files. It virtualizes file systems.
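The mapping layer a GNS provides can be sketched in a few lines of Python. This is an illustration only, not how any particular FAN product is implemented, and every path and server name below is invented:

```python
# Minimal sketch of GNS-style path resolution (hypothetical names throughout).
# A logical namespace maps user-visible folders to physical server locations;
# clients see only the logical path, so IT can re-point entries at any time.

NAMESPACE = {
    "/corp/engineering": r"\\server1\eng_share",
    "/corp/marketing":   r"\\server2\mkt_share",
    "/corp/hr":          r"\\nas-head-3\hr_share",
}

def resolve(logical_path: str) -> str:
    """Translate a logical GNS path into its current physical location."""
    # Longest-prefix match, so nested folders inherit their parent's mapping.
    for prefix in sorted(NAMESPACE, key=len, reverse=True):
        if logical_path == prefix or logical_path.startswith(prefix + "/"):
            remainder = logical_path[len(prefix):].replace("/", "\\")
            return NAMESPACE[prefix] + remainder
    raise KeyError(f"no namespace entry covers {logical_path!r}")

# IT can migrate a share by updating one entry; user paths never change.
NAMESPACE["/corp/hr"] = r"\\nas-head-4\hr_share"
print(resolve("/corp/hr/policies/leave.doc"))
```

The point of the sketch is the last two lines: the administrator changes one mapping entry, and every client request is silently redirected with no login-script edits or reboots.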
Figure 5 on page 20 shows what the network file-system mapping
environment might look like once a GNS is implemented. Under the
non-GNS paradigm, a user would need to know (or guess) what drive
letter was mapped to the server with their data. If something were to
change on the server side, users would need to adapt, and potentially
log off, reboot, and even reconfigure applications. Under the GNS paradigm, users can see an intuitive directory structure underneath a
single drive letter. If something changes on the server side, the GNS
layer of abstraction allows the IT department to keep the users’ view of
the file system consistent.
Figure 5. Global namespace
This is much like the way DNS provides a layer of abstraction for the
Internet. A Web site can change IP addresses or
physical locations on a transaction-by-transaction basis without users
needing to know that this is happening. Similarly, when a user in a global namespace environment types in the Universal Naming
Convention (UNC) path name for a file, the physical location of the file
is transparent. GNS provides a directory service analogous to DNS.
Administrators can use GNS to aggregate multiple file systems and
manage them as a single entity, regardless of physical location. For
example, consider a company with two Engineering offices: one in
Houston and the other in London, England. In the Houston office,
there are two storage devices (server1 and server2) and in the London
office, a single storage device. It is possible to create a GNS that contains both sites; in this way, users at both sites use a single drive
letter to access all their data, whether it is local or remote. If the
administrator needs to migrate or replicate any of the data, it will be
transparent to the user, as illustrated in Figure 6.
Figure 6. Multi-site GNS Scenario
Notice that the Houston and London engineers see different files
under their E: drives. The administrator of the GNS can set up policies
that define the correct mapping for logical-to-physical directories
based on many different criteria. In this case, the definition was based
on the physical location of the client.
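The location-based policy in this scenario can be sketched as a lookup keyed on both the logical path and a client attribute. This is purely illustrative, and the folder names and sites are invented:

```python
# Hypothetical sketch: one logical GNS folder whose physical target is
# chosen by policy from a client attribute (here, the client's site).

POLICY = {
    # logical folder -> client site -> physical location (names invented)
    "/eng/projects": {
        "houston": r"\\server1\projects",
        "london":  r"\\server3\projects",
    },
}

def resolve_for_client(logical_path: str, client_site: str) -> str:
    """Pick the physical target for this client based on policy."""
    for prefix, targets in POLICY.items():
        if logical_path.startswith(prefix):
            base = targets[client_site]
            return base + logical_path[len(prefix):].replace("/", "\\")
    raise KeyError(logical_path)

# Engineers at both sites type the same path under the same drive letter,
# but each is directed to the storage device closest to them.
print(resolve_for_client("/eng/projects/design.cad", "houston"))
print(resolve_for_client("/eng/projects/design.cad", "london"))
```

A real GNS policy engine could key on many other criteria (group membership, file age, load), but the shape is the same: one logical view, many possible physical answers.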
Like storage consolidation, GNS can be a solution in and of itself. It
can solve operational difficulties associated with having too many
drive letter mappings, inconsistent drive mappings, and so on. Simplifying IT operations saves time and money directly by allowing IT
personnel to work on more strategic projects, and indirectly by increasing end user productivity.
Also like storage consolidation, GNS can be part of other solutions. For
example, using a GNS makes it much easier to migrate data from
one array, storage device, or server to another, as discussed in the
next section.
Data Migration
Active IT organizations need to migrate data, and in the most active
groups, moving data can be a full-time job, or even several full-time
jobs. Business drivers for data and server migrations include:
• Storage arrays coming off lease and/or needing to be upgraded to newer technology on a continual basis
• Servers becoming obsolete or being repurposed
• Volumes filling up and needing to be moved
• Mergers and acquisitions or internal restructuring that require combining and/or relocating data centers
• Application usage patterns that change over time
Whatever the cause, it is frequently necessary to move large data sets
between NAS filers, general purpose file servers, and arrays. It may
also be necessary to change host-to-storage mappings. Most migrations occur within a data center, but sometimes they occur between
sites. Indeed, migrations can occur within a data center in which the
affected devices are all attached to the same network switch, or
between data centers located thousands of miles apart, crossing
many intermediate network devices and traversing networks running
different protocols. Either way, FANs can help.
In non-FAN environments, migrations are difficult if not impossible to
accomplish without major effort and downtime. Indeed, migrations
can be so complex that a number of consulting and professional services organizations specialize in nothing else.
Issues with traditional migration approaches include:
• Non-deterministic performance
• Unknown time window for completion
• Downtime for users
• End user complaints to IT
• Complexity of change control plan
• Compatibility between heterogeneous systems
• Post-migration troubleshooting
With FANs, the effort and risk of migration can be minimized, and the
processes simplified even to the point where no downtime is required
at all. FAN components aggregate storage across heterogeneous, geographically distributed file systems, thereby enabling administrators to
perform migration across devices regardless of location, vendor, or
underlying file systems. Administrators can seamlessly migrate and
consolidate data “behind the veil” of the GNS, without interrupting
user access. With the best FAN tools, a simple, intuitive user interface
turns a formerly complex migration job into a drag-and-drop operation,
and data movement tasks can be scheduled and automated by setting
up policies in the application.
GNS makes data movement transparent to users, and this is extremely
valuable, but GNS is not the only FAN component that simplifies migration. FAN solutions also contain tools that actually perform the
migration process. The best FAN tools can make the migration run
faster and the process predictable.
For example, a FAN toolkit might have the ability to perform byte-level
differencing during a migration operation. This would allow the migration tool to move only the parts of files that have changed. If some files
are already on the target array, differencing technology spares administrators from having to move those files again. If the
FAN application is particularly well designed, it could enable the migration of data while users are still accessing the source volume.
Here is the scenario. An administrator starts moving data. Users are
working in the files and make changes in some—but not all—of the files
on the source volume by the time the movement completes. Most of
the files moved to the target will be unchanged during the time it took
to move them. Differencing technology allows the administrator to synchronize the changed files rapidly, and then synchronize the even
fewer changes that occurred during that time. Most migration
approaches can, at best, apply file-level differencing. Using a byte-level
approach means that the subsequent iterations will not need to move
changed files; they will only need to move the changed parts of files,
which is faster. When migrating over a WAN, it is much faster. By iterating through that process, eventually the target would be synchronized
with the source, and the cutover of the GNS pointer would occur.
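That iterative convergence can be illustrated with a toy differencing loop. This sketch is not the algorithm any particular FAN tool uses; the block size and data are chosen only to show that each pass moves fewer bytes than the one before:

```python
# Illustrative sketch of iterative migration with block-level differencing:
# each pass copies only the blocks that differ, so passes shrink until the
# target converges with the still-changing source.

BLOCK = 4  # tiny block size, just for illustration

def changed_blocks(src: bytes, dst: bytes):
    """Yield (offset, data) for each fixed-size block that differs."""
    for off in range(0, len(src), BLOCK):
        if src[off:off + BLOCK] != dst[off:off + BLOCK]:
            yield off, src[off:off + BLOCK]

def sync_pass(src: bytes, dst: bytearray) -> int:
    """Apply one differencing pass; return the bytes actually moved."""
    moved = 0
    for off, data in changed_blocks(src, bytes(dst)):
        dst[off:off + len(data)] = data
        moved += len(data)
    return moved

source = bytearray(b"aaaabbbbccccdddd")
target = bytearray(len(source))          # empty target volume

print(sync_pass(bytes(source), target))  # first pass moves everything: 16
source[4:8] = b"BBBB"                    # users keep editing mid-migration
print(sync_pass(bytes(source), target))  # next pass moves only 4 bytes
print(sync_pass(bytes(source), target))  # converged: 0 bytes; cut over GNS
```

Once a pass moves nothing (or nearly nothing), the administrator can briefly quiesce the source, run one final pass, and switch the GNS pointer.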
In a similar way, the combination of GNS and optimized data movement technology can be used to create fast and transparent failover in
a recovery solution as discussed in the next section.
Disaster Recovery/Business Continuance
In the wake of recent global events, corporations and government
agencies alike have a greater focus on Disaster Recovery (DR) and
Business Continuity (BC) solutions. (“DR” will be used to refer to this
general class of solution.) In some cases, DR solutions are driven by
fiduciary duty to investors; in other cases government regulations
mandate their implementation. Whatever the driver, organizations
implementing these solutions need to be able to move large amounts
of block data reliably, quickly, and repeatably over long distances.
FANs are a natural fit for organizations with DR requirements. Traditional approaches to DR have a number of gaps related to replication
or backup performance, manageability, restore times, and failover
times. All of these issues are addressed in comprehensive FAN solutions.
For example, look at the multi-site GNS scenario illustrated in Figure 6
on page 21. Houston and London could be configured to work as an
active/active DR pair: so if the Houston site were to fail catastrophically, London could take over, or vice versa. If this were done without a
GNS, the failover operation would have to be manual, and users would
need to have their drive mappings reconfigured. Without other FAN
components, synchronization of the file systems at the two sites at
best would occur at the file level, requiring entire changed files to
move constantly. This means wasted time and bandwidth, and greater
latency between the change and the commitment of the replicated
copy. Restoration would be at the file or even file-system level, which
would require considerable time to move across an intercontinental WAN.
On the other hand, if the DR solution used a FAN, none of these issues
would exist. GNS would allow transparent and automatic failover to the
recovery file system at the remote site. Users would not need to
reboot, and applications would not need to be reconfigured. During
steady state operations, only the minimum possible information would
need to traverse the WAN, that is, only the changed blocks of files,
rather than the files themselves. Similarly, after the disaster was over,
switching back to the primary site would be much more efficient,
because only changed blocks would need to be moved.
To see how this works, look at Figure 7 and Figure 8 on page 26. (Brocade StorageX, a FAN application, is used in Figure 7; it will be
described in the next chapter.)
Figure 7. DR failover using a FAN (overview)
Planning. First, pool storage using a global namespace to make file
locations transparent to users and applications.
Readiness. StorageX enables many-to-one replication as well as heterogeneous replication.
Disaster occurs. StorageX policies cease replication and make secondary devices active.
Failover. Automatic failover occurs between the sites and clients are
transparently redirected to the target device.
Figure 7 shows a DR replication scenario between New York and New
Jersey. The GNS abstraction layer sits between the users and the physical file servers in both locations. A FAN application replicates file data
between the two sites, using differencing technology to keep the copies up-to-date without using excessive bandwidth. If one site fails, the
GNS pointers are switched from the primary to the secondary site.
Most users do not even notice the change. Figure 8 illustrates how this
works in more detail.
Figure 8. DR failover using a FAN (detailed)
Under normal conditions, a client at Site A accesses file data on its
local file server, and GNS information using its local name server. A
monitoring agent “watches” the servers at both sites, and a replication
agent (RA) keeps them in sync. If the Site A data center were to fail catastrophically, the client configuration would switch over to the Site B
namespace replica. At the same time, the monitoring agent would
notice that the Site A server had failed and would update the
namespace replica to point at the Site B file server.
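The monitoring-and-failover behavior described here can be sketched as a polling loop that re-points a namespace entry at its replica. Every name in this sketch is invented, and a real monitoring agent would use actual health probes (heartbeats, CIFS connects) rather than a stub:

```python
# Hedged sketch of the monitor/failover logic described above: a monitoring
# agent probes the servers and re-points the namespace entry at the
# replica when the primary stops answering. All names are hypothetical.

namespace = {"/corp/data": r"\\siteA-server\data"}   # active GNS entry
REPLICAS = {r"\\siteA-server\data": r"\\siteB-server\data"}

FAILED = set()   # servers our stub probe will report as down

def site_is_up(server_path: str) -> bool:
    # Stand-in for a real health probe (ping, CIFS connect, heartbeat).
    return server_path not in FAILED

def monitor_tick():
    """One polling cycle: fail over any entry whose server is down."""
    for logical, physical in namespace.items():
        if not site_is_up(physical) and physical in REPLICAS:
            namespace[logical] = REPLICAS[physical]  # clients follow the GNS

FAILED.add(r"\\siteA-server\data")   # simulate Site A failing
monitor_tick()
print(namespace["/corp/data"])       # now points at Site B
```

Because clients resolve everything through the namespace, re-pointing one entry is all the "failover" they ever see; no drive remapping or reboot is involved.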
This category of solution works optimally to solve large-scale DR problems, but it is not necessarily optimal for other WAN extension and
optimization scenarios. Fortunately, other FAN components are available for those scenarios.
WAN Performance Optimization
A complete FAN solution generally includes a WAN optimization component. For DR solutions, the system described above may be optimal.
However, for branch offices and other smaller sites,
the solution is usually referred to as “Wide Area File Services” (WAFS).
WAFS is designed to overcome the severe performance and infrastructure challenges related to consolidation and file collaboration over the WAN.
Driven by cost and compliance concerns, businesses are attracted to
the concept of storage and server consolidation. From an IT perspective, consolidation helps move businesses from existing fragmented
server environments toward a more secure, compliant, and adaptable
FAN architecture. In turn, IT can respond more rapidly and cost effectively to current and future business demands. This is particularly
attractive in a branch office WAN environment. It may not be possible
to have a full IT staff at each branch office. But when there is no IT
presence in the branch office, it is sub-optimal to have non-IT personnel manage file servers and services such as print, Dynamic Host
Configuration Protocol (DHCP), DNS, domain controller, Systems Management Server (SMS), and others.
Business drivers and theoretical improvements for IT present one side
of the argument when consolidation is performed in a WAN environment. Consolidation brings new performance and data growth
challenges to an already overburdened IT infrastructure. Alone, the
challenges facing IT can compromise the benefits of consolidation.
Collectively, they can derail the larger project—effectively undermining
the intent of consolidation.
Most distributed enterprises require file collaboration or use specialized data management applications to ensure that everyone works
from a common base of files. However, user frustration due to file-access delays and subsequent user-derived workarounds can severely
undermine business requirements. Similarly, if IT administrators move
data from a remote site to a central data center, access will always be
slower due to the latency and congestion prevalent in WANs.
IP optimization products are not sufficient to accelerate file sharing
traffic, and they introduce data integrity and security issues. These
products employ non-file-aware approaches and cannot support end-to-end security mechanisms such as SMB signing or IPsec (IP security). They also lead to silent data loss in cases of WAN disruptions.
Using WAFS addresses these issues, ensuring that server and storage
consolidation projects not only meet but actually exceed user and
business expectations.
WAFS can deliver optimization for file-sharing protocols such as CIFS/
NFS using a distributed file system implementation. Maximum scalability is realized by a Single Instance Storage (SIS) cache between
remote sites and the data center. Data integrity is achieved using techniques such as proxy file locking, update journaling, and fully
asynchronous writeback. Branch office IT services on remote nodes
make it possible to replace all servers used for branch office functions. The node provides local print, DHCP, DNS, domain controller,
SMS, and other services out-of-the-box to branch office users.
To enable LAN-like access to application servers accompanied by high-performance data transfers for remote users, WAFS provides two categories of performance-enhancing technologies:
• Acceleration to overcome latency and design limitations in CIFS/NFS and TCP protocols
• Data reduction to prevent redundant traffic from crossing the WAN, and compression to reduce the amount of data when it does need to cross the WAN
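The interplay of these two techniques can be illustrated with a toy model: chunk hashes stand in for the data-reduction dictionary, and zlib stands in for the compression stage. None of this reflects the actual WAFS wire protocol; it only shows why a repeated transfer costs far less than the first one:

```python
# Toy model of WAN data reduction plus compression (not the WAFS protocol).
# Chunks the far side already holds are replaced by short references;
# everything else is compressed before it crosses the link.

import hashlib
import zlib

sent_chunks = set()   # digests of chunks the far side already holds

def send_over_wan(data: bytes, chunk_size: int = 64) -> int:
    """Return the number of bytes that actually cross the WAN."""
    wire_bytes = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).digest()
        if digest in sent_chunks:
            wire_bytes += 8                          # short reference only
        else:
            sent_chunks.add(digest)
            wire_bytes += len(zlib.compress(chunk))  # compressed payload
    return wire_bytes

report = b"Q3 revenue summary for the branch office. " * 6
first = send_over_wan(report)    # full (compressed) transfer
second = send_over_wan(report)   # repeat: references only
print(first, second)             # the repeat costs far less than the first
```

In a real deployment the "dictionary" persists in the Edge node's cache, so even a file that is merely similar to one seen before crosses the WAN mostly as references.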
A WAFS solution can be deployed in a number of different topologies to
match the architecture and data-sharing requirements of almost any
distributed enterprise. The simplest application can be illustrated with
a hub-and-spoke topology. The site that houses the data center application and file servers uses a WAFS node to serve WAFS nodes
deployed at each of the branch offices in the enterprise. The WAFS
node at the branch office provides transparent LAN speed access to
servers. This architecture is shown in Figure 9 on page 29.
In this diagram, one central location and several branch offices have
been converted to a WAFS architecture. Before this conversion, each
branch office needed file servers and services—and administrators to
maintain these platforms. The data on the file servers needed to be
backed up and replicated somehow for disaster recovery. If something
went wrong at a remote site, it was often necessary to send senior IT
personnel with advanced troubleshooting skills from the central site.
The IT department in this example decided to consolidate all file data
and services into a central data center. This way, full IT staffing is not needed at the remote offices, and senior IT administrators from the central site are not needed for troubleshooting there—greatly simplifying
the DR solution. However, simply moving data to a central location typically slows file access to an unacceptable level from a user point of view.
Figure 9. High-level WAFS architecture
With WAFS, the IT department can achieve its consolidation goals without user impact. The WAFS nodes at the remote offices require no day-to-day management. All backups are handled from the central location, since that is where the primary data store is located. However, the
WAFS nodes can serve replicated copies of users’ files locally and provide local services, so that user access times are unchanged.
A customer case study that uses WAFS technology is described
briefly in the next chapter on page 45.
Chapter Summary
FAN solutions enhance existing network infrastructure in many ways.
Performance and reliability are increased, whereas cost, downtime,
and management complexity are reduced. The following chapters discuss specific products used to create FANs, and how to deploy them to
solve data center consolidation, data migration, DR, and WAN performance optimization challenges.
Building Blocks
This chapter provides an overview of specific products that can be
used to create FAN solutions and includes the following sections:
“Brocade StorageX” on page 31
“Brocade WAFS” on page 35
“Brocade FLM” on page 38
“Brocade MyView” on page 40
“Brocade UNCUpdate” on page 41
“Brocade Unified FAN Strategy” on page 42
“Customer Case Studies” on page 43
“Chapter Summary” on page 46
The products discussed form the core of the Brocade FAN product line.
Only FAN-specific products are discussed. Solutions involving these
products also require additional products from Brocade partners. For
example, a FAN would need storage servers such as file servers or NAS
heads, block storage devices such as RAID arrays, connectivity
between those devices such as a Brocade Fibre Channel SAN, and
front-end connectivity such as an IP LAN and/or WAN.
Brocade StorageX
Each storage device that needs to be managed independently
increases the administrative costs in distributed file storage environments. As a result, today’s IT organizations require new ways to
simplify file system management, enhance data access, and reduce
data management costs.
Brocade StorageX is an integrated suite of applications that logically
aggregates distributed file data across heterogeneous storage environments and across CIFS- and NFS-based file systems. It also provides
policies to automate data management functions. Administrators use
StorageX to facilitate such tasks as:
• Data migration and consolidation
• Business Continuity/Disaster Recovery (BCDR)
• Remote site data management
• Information Lifecycle Management (ILM)
• Storage optimization
• Data classification and reporting
StorageX provides administrators with powerful tools and policies to
perform such tasks without causing downtime for users. Moreover, it
directly addresses the needs of both administrators and users by
increasing data availability, optimizing storage capacity, and simplifying storage management—all of which contribute to significantly lower
costs for enterprise data infrastructures.
The foundation of StorageX is the global namespace (GNS), which unifies and virtualizes heterogeneous file data stored throughout an
enterprise by pooling multiple file systems into a single, logical file system. In general, it does for file storage what DNS does for networking:
it enables clients to intuitively access distributed files without knowing
their location (just as they access Web sites without knowing their IP
addresses). This transparency of the storage architecture can span
the enterprise and make the physical location of data irrelevant to users.
GNS provides the following key benefits:
• Data management and movement are both transparent and non-disruptive.
• Data changes are automatically updated and require no client reconfiguration.
• Administrators can manipulate storage without affecting how users view and access it.
• Data management and movement require far less administrative effort and time.
• Administrators can manage data on heterogeneous, distributed storage devices through a single console.
• Users have a single, logical view of files with access through a single drive letter.
StorageX allows administrators to seamlessly migrate and consolidate
data from multiple heterogeneous file servers onto one or more file
servers. GNS and StorageX migration policies shield users from physical changes during migrations, and the policy engine significantly
reduces the administrative tasks required. This, in turn, minimizes
user downtime, while access to data remains intact.
In addition, Brocade StorageX:
• Enables automatic share creation
• Copies security attributes
• Integrates with the NTFS Change Journal
• Integrates with Microsoft VSS snapshots to enable open and locked file copying
As organizations are faced with both planned and unplanned network
outages, business continuity planning has come to encompass much
more than just disaster recovery. Although many products currently
address the need to replicate data in preparation for a network outage, very few address the need to fail over users in a fast and
seamless manner. In addition to ensuring user failover, administrators
must find a way to centralize business continuity management in heterogeneous, distributed environments while keeping IT costs at a
minimum. StorageX uses GNS and disaster recovery policies to support cost-effective, seamless failover across geographically distributed
sites while giving administrators a centralized way to manage the global failover process.
StorageX provides administrators with a comprehensive solution for
managing geographically distributed data. With the GNS, administrators have a single view of data across multiple locations and can
manage it as a single entity. StorageX includes a Replication Manager
as well as policies for replicating data to a central backup location,
thereby eliminating the need to maintain a tape backup infrastructure
at each location. These policies enable file replications of any size and
any distance in heterogeneous environments.
Definable data lifecycle management policies enable tiered storage
architectures, which provide an efficient approach to meeting ILM
requirements. StorageX uses GNS to facilitate the ILM architecture.
Moving data to secondary storage reduces hardware acquisition costs
while enabling administrators to align backup policies and investments with the business value of data. Using StorageX to create and
implement a tiered storage architecture, administrators can create
policies to automatically migrate data based on criteria such as age,
usage patterns, and file size.
Optimizing capacity utilization across the enterprise is another benefit of StorageX. Using GNS together with load-balancing
policies, administrators can create logical pools of storage that align
with business requirements, and then manage and optimize resources
assigned to the namespace. In addition, administrators can centrally
view and manage enterprise-wide capacity utilization—and transparently and non-disruptively balance capacity across multiple devices.
StorageX can be used to centrally manage data at remote sites as well
as to facilitate consolidation and migration tasks within a data center.
Administrators can schedule policies to automatically replicate data
from remote sites to a central data center at set intervals. These policies can be scheduled to run during times of low network utilization,
and Byte-Level File Differencing Replication can be turned on to further conserve network bandwidth.
Once data has been replicated to a central location, it can be backed
up there, eliminating the need for tape backup infrastructure and
backup administration at each remote location. This entire process
can be centrally scheduled and managed through a single StorageX
user interface.
StorageX provides administrators with robust reporting capabilities
that enable them to support key IT projects. It uniquely enables administrators to classify data based on a variety of business-relevant
categories, including department, location, project, user group, file
age, file size, and last access time. After the classification criteria are
specified, administrators can run customizable reports to determine
data migration requirements and facilitate departmental charge-back
based on capacity usage.
As a non-proprietary software solution, Brocade StorageX fits seamlessly into existing IT environments. It integrates with and operates on
existing file systems, uses standard protocols, and requires no client
software or agents. Because StorageX is an out-of-band solution and
does not reside in the data path, it does not cause latency or performance issues.
This solution is discussed in more detail in “Chapter 5: Brocade StorageX” starting on page 73.
Brocade WAFS
Managing branch office IT infrastructures is often complex, time-consuming, and resource-intensive. Brocade Wide Area File Services
(WAFS) is designed to address some of the most difficult branch office
IT issues, including storage and server proliferation, backup and
restore management, and rising ownership costs. WAFS is an innovative solution that provides LAN-like access to files shared across Wide
Area Networks (WANs)—enabling organizations to consolidate their
assets while protecting remote data. For enterprises with distributed
locations, WAFS can streamline branch office IT operations, improve
operational efficiency, increase file performance across the WAN, and
centralize data management.
Brocade WAFS is available as software that runs on industry-standard servers (Branch Office Software) or as a standalone appliance running WAFS, as shown in Figure 10. The term "node" refers to a Brocade
WAFS appliance or a server running Branch Office Software. A “Core
node” resides in the central data center, and “Edge nodes” reside at
each remote site in the enterprise. The centralized Core node and
remote Edge nodes communicate through a Storage Caching-over-IP
(SC-IP) protocol optimized for the performance and reliability characteristics of a WAN. This design enables highly efficient data caching,
thereby eliminating the need for data replication, while still ensuring
file consistency and supporting global file sharing across the enterprise.
Figure 10. Brocade WAFS Appliance
One of the greatest advantages of Brocade WAFS is its ability to
streamline IT operations in remote locations. Because it enables distributed enterprises to centrally manage branch office IT services—
including file, print, network, and Web caching services—Brocade
WAFS helps reduce remote office servers, storage and backup hardware, software, and other related costs. This type of centralized
management significantly reduces the need for remote site IT personnel and expensive site visits, including the need to ensure proper
backups of branch office data.
With all users working from a single source file, organizations can collaborate as part of a global file-sharing process. Persistent logging provides file protection and integrity, while caching and file differencing allow users to view changes in real time. In addition, multiple file-handling technologies accelerate access to files and their associated applications, while also enabling continued productivity during WAN disruptions.
Brocade WAFS is designed to improve both administrator and user productivity by enabling a more efficient IT environment. Key efficiencies
for file transport include:
Remote caching. Enables a single file copy to be shared among multiple users at a remote location.
Storage Caching over IP (SC-IP). Eliminates the transmission of CIFS
management calls across the WAN.
Wide area dictionary compression. Eliminates redundant transmission of file data through byte-level monitoring.
Performance flexibility. Supports asynchronous or synchronous writeback of file data based on file type and performance requirements.
File-aware differencing. Transmits only changes in the file data to
reduce WAN traffic and transport time.
File streaming to cache. Allows users to begin working with an application even before the entire file is available.
Bandwidth-throttling flow control. Manages file traffic by time of day to
meet Quality of Service (QoS) requirements.
WAFS was designed from the ground up for native integration with
Microsoft platforms in order to support secure and consistent file
access policies. Key support includes:
• CIFS protocol management
• Security mechanisms such as Active Directory, SMB signing, and Kerberos authentication
• Systems Management Server distribution services
To help organizations comply with their internal business objectives
and industry regulations, Brocade WAFS:
• Is designed to survive common WAN disruptions, helping to guarantee data coherency and consistency
• Supports encrypted files sent over the WAN for added security
IT managers need to protect data across all application and network conditions. WAFS does this, utilizing synchronous logging to persistent RAID storage for greater reliability. By increasing security and centralizing file management, WAFS enables branch offices to leverage data center-class regulatory compliance and disaster recovery strategies.
WAFS is ideally suited for large-scale distributed organizations. Typical
deployments involve supporting up to about 100 remote offices of up
to about 50 users each. However, it is capable of scaling to support more users for each remote node, with up to 49 remote nodes for each core. Organizations with even larger design problems can easily deploy multiple WAFS cores, breaking their problem up into multiple smaller and more manageable chunks. See "Appendix C: WAFS Sizing Guidelines" starting on page 197 for more information.
Brocade WAFS is a highly effective solution for cost-conscious organizations that want to:
Increase server and storage asset utilization to maximize the
value of existing IT investments
Reduce capital expenditures and ongoing management costs at
branch offices
Improve both administrator and user productivity
Free up network bandwidth to handle larger workloads
Gain the benefits of centralization and consolidation
Brocade WAFS can be deployed independently, or it can be combined
with some of the other Brocade FAN offerings. For example, before
installing WAFS nodes in branch offices, organizations can perform
remote file server migration and consolidation to a centralized data
center using Brocade StorageX. This way, organizations can manage
WAFS data on heterogeneous, geographically distributed storage
devices from a single console. As a result, ongoing data management
and movement requires less effort and time. Moreover, the StorageX
GNS enhances business continuity by enabling seamless failover in
the event of a device failure.
This solution is discussed in more detail in “Chapter 6: Brocade WAFS”
starting on page 113.
Brocade FLM
In most IT environments, all file data is not created equal—especially
as data growth occurs unevenly on storage systems across the enterprise. Over time, imbalanced storage growth results in increased expenditures due to the proliferation of storage devices, acute management challenges for IT administrators, and difficulty in ensuring compliance with regulatory requirements.
However, IT organizations can reduce the total cost of storage ownership by actively managing data throughout its lifecycle from creation to
disposal, and by aligning storage policies with business priorities and
regulatory requirements.
Brocade File Lifecycle Manager (FLM) is an administratively defined,
policy-based data solution that manages the lifecycle of file data on
Network Appliance (NetApp) storage systems. FLM helps administrators by analyzing, classifying, blocking, moving, organizing, and
deleting file data—using policies defined with organizational governance objectives that reflect the business value of data.
The FLM solution optimizes the management of data based on
attributes and access patterns, while maintaining user access to files.
In addition, FLM simplifies compliance through policy-based information retention and maintenance.
By deploying FLM and implementing tiers of disk-based storage, organizations can achieve significant reductions in overall storage
expenses. FLM can also result in increased volume capacity as inactive data is automatically moved to secondary storage.
Separately, FLM reduces the costs associated with backup—including
tape hardware, backup software, and management costs—as less
important data gets transferred to secondary storage.
Brocade FLM supports a wide range of policy parameters based on
corporate data storage and retention guidelines to simplify data management:
Adaptive archival algorithms. Exclude certain restored files from future migrations.
Archival storage manager. Defines multiple offline stores as a pool,
eliminating the need to set up large volumes as secondary stores.
Blocking. Prevents unwanted file types from being stored on the primary storage.
Classification. Analyzes and classifies file data based on parameters such as location, type, name, age, size, attributes, and volume.
Migration automation. Periodically migrates files that match the
defined criteria and scope to multiple secondary storage devices.
Primary space optimizer. Automatically re-migrates a restored file
back to the secondary store.
Restore control. Minimizes impact on storage capacity utilization and
network bandwidth resulting from inadvertent data restores.
Retention/deletion. Deletes files matching parameters, such as those
surpassing long-term retention limits.
Simulated migration. Performs detailed “what if” analyses and
reports the results to support capacity planning and network bandwidth utilization planning.
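The blocking, retention, and migration parameters above can be sketched as a simple policy function. The thresholds and extension list here are invented for illustration; real FLM policies are defined by administrators.

```python
from dataclasses import dataclass


@dataclass
class FileInfo:
    name: str
    size: int        # bytes
    age_days: float  # days since last access


# Hypothetical policy parameters, not FLM defaults.
BLOCKED_EXTENSIONS = {".mp3", ".avi"}
MIGRATE_AGE_DAYS = 90          # inactive data moves to secondary storage
RETENTION_LIMIT_DAYS = 2555    # roughly a seven-year retention limit


def classify(f: FileInfo) -> str:
    """Map a file to a lifecycle action based on simple policy parameters."""
    ext = f.name[f.name.rfind("."):].lower() if "." in f.name else ""
    if ext in BLOCKED_EXTENSIONS:
        return "block"    # keep unwanted types off primary storage
    if f.age_days > RETENTION_LIMIT_DAYS:
        return "delete"   # past the long-term retention limit
    if f.age_days > MIGRATE_AGE_DAYS:
        return "migrate"  # candidate for secondary storage
    return "keep"
```

A "simulated migration" in this model is just running `classify` over a volume listing and reporting the counts instead of acting on them.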
This solution is discussed in more detail in “Chapter 8: Brocade FLM”
starting on page 153.
Brocade MyView
IT organizations routinely need to answer complex questions about file
permissions, such as which categories of files a given user can access
or which users can access a given classification of files. Unfortunately,
file data (and the right to access it) is one of the most difficult items to
control in the enterprise, because the information and management
thereof tend to be highly distributed.
To complicate matters, most current IT solutions do not focus on
securing file data against threats from within the enterprise, where the
majority of security breaches actually occur. It is well established that the vast majority of security problems come not from outside hackers or Internet-borne viruses, but from disloyal or disgruntled employees. In light of this, it is odd that current security solutions
do not adequately provide administrators with a comprehensive view
into internal enterprise file data access rights. Equally problematic is
the fact that users do not intuitively know which resources they can
access and where those resources are located.
Brocade MyView is a global resource access management solution
that provides personalized, secure access to Windows file resources
across the enterprise. With this innovative solution, organizations no
longer have to sacrifice their business objectives to achieve their data
security compliance requirements.
A key component of Brocade MyView is the Global Resource Access
Matrix (GRAM), which gives administrators and auditors a central view
of user access rights to distributed data as well as detailed reporting
capabilities. This unique approach results in tighter security practices
and simplified auditing for Sarbanes-Oxley, Federal Information Security Management Act (FISMA), Health Insurance Portability and
Accountability Act (HIPAA), Personal Information Protection and Electronic Documents Act (PIPEDA), and other security compliance
requirements. In addition, MyView dynamically builds personalized
namespaces for every user in an enterprise to simplify access to data
and further enhance security.
Brocade MyView addresses internal data security requirements by providing a fast, automated way to generate a comprehensive GRAM. The
GRAM identifies which users have access (including the level of
access) to which file resources across the distributed enterprise based
on identity management, user credentials, and security permissions.
Innovative technology enables the GRAM to efficiently support thousands of users and resources across the enterprise, dynamically
computing resource access changes as necessary. As a result, administrators can easily report and manage security permissions to modify
user access to resources. The GRAM is constantly updated to reflect
the current state of user identities and resource access permissions.
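Conceptually, the GRAM can be pictured as a matrix keyed by (user, resource), queryable in both directions as the text describes. This toy sketch uses invented names and permission levels; the real GRAM is computed from identity management data and security permissions.

```python
# Illustrative access matrix: (user, resource) -> access level.
gram = {
    ("alice", r"\\fs1\finance"): "read-write",
    ("alice", r"\\fs2\hr"):      "read",
    ("bob",   r"\\fs1\finance"): "read",
}


def resources_for(user):
    """Which resources can a given user reach, and at what level?
    This is the basis of a personalized namespace for that user."""
    return {res: lvl for (u, res), lvl in gram.items() if u == user}


def users_for(resource):
    """Which users can reach a given resource? (the auditor's view)"""
    return {u: lvl for (u, res), lvl in gram.items() if res == resource}
```

Both of the "complex questions" from the start of this section reduce to a lookup over the same matrix, which is why a central GRAM simplifies auditing.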
Brocade MyView provides a fast and efficient way for administrators
and auditors to produce comprehensive reports that help identify
potential security breaches and enforce internal data security policies.
This solution is discussed in more detail in “Chapter 7: Brocade
MyView” starting on page 145.
Brocade UNCUpdate
As many IT organizations have discovered, storage migrations can be
extremely disruptive to business operations. If users and applications
are tied to specific share and file locations through hard-coded Universal Naming Convention (UNC) path names or shortcuts, administrators
need to take extra precautions to ensure that such linkages are maintained as part of the migration process. UNC references can occur
inside a variety of documents, applications, and other files, which
makes this operation particularly difficult to manage.
Brocade UNCUpdate gives storage administrators a reliable tool for
reporting on files that contain UNC references to storage resources
under consideration for migration. UNCUpdate can report on UNC
entries encountered in a wide variety of files, providing administrators
with the option to update UNC references to target the storage
resource at its new location.
The easy-to-use UNCUpdate interface helps administrators migrate
data to a new location by scanning directories for files containing UNC
paths, which might need to be updated to reference items at their new
location. It enables administrators to locate embedded UNC paths
stored in a file in clear text. As a result, administrators can locate and
modify UNC paths in text files, Microsoft Word, PowerPoint, and Excel
documents, as well as update Windows Explorer shortcuts.
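The clear-text scanning step can be sketched with a regular expression. The pattern below is a simplified assumption about what counts as a UNC path; production tooling such as UNCUpdate also parses Office documents and Explorer shortcuts, which this sketch does not attempt.

```python
import re

# Matches \\server\share\... style UNC paths in clear text (simplified).
UNC_RE = re.compile(r"\\\\[\w.-]+\\[\w$.-]+(?:\\[\w$.-]+)*")


def find_unc_paths(text: str) -> list:
    """Report UNC references found in a clear-text document."""
    return UNC_RE.findall(text)


def rewrite_unc(text: str, old_prefix: str, new_prefix: str) -> str:
    """Point references at a resource's new location after migration."""
    return text.replace(old_prefix, new_prefix)
```

Run over a directory tree, the report from `find_unc_paths` tells an administrator which files would break after a migration, and `rewrite_unc` shows the update step for those that should be changed.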
UNCUpdate provides significant advantages to both administrators
and users. File migration without Brocade UNCUpdate can be a manual, labor-intensive process that requires locating and updating user-generated shortcuts and embedded links within files. By automating
the link synchronization process for files containing UNC paths that
need to be updated, Brocade UNCUpdate promotes business continuity—helping to eliminate potential user downtime caused by broken
links, shortcuts, and UNC references following a migration.
This solution is discussed in more detail in “Chapter 9: Brocade
UNCUpdate” starting on page 183.
Brocade Unified FAN Strategy
Each of the Brocade FAN-enabling products can be used in a standalone manner with storage products from a variety of Brocade partners.
Alternately, many of the FAN products can be used in combination with
each other. The combination of StorageX and FLM was discussed earlier in this chapter, and many complex combinations are also possible.
Brocade UNCUpdate is a natural fit for any migration project. For example, it can greatly simplify the initial deployment of a GNS during rollout of StorageX by reporting on and updating UNC links within files.
Of course, additional testing and validation is appropriate when rolling
out combined solutions, and in some cases it would be appropriate to
stage rollout over time. For example, if you decide to use StorageX and
FLM, it might be practical to roll out StorageX first, and then add FLM
later rather than adding both at once. This would allow administrators
to develop and refine site-specific policies and practices associated
with the first product before introducing the second.
This caveat is not just a FAN best practice. As a general best practice
with any new technology, you should take extra care when deploying
more complex solutions. Brocade is committed to continuing its testing, validation, and development efforts to expand the number of combined solutions that are supported and to simplify their deployment, and customers can
look forward to an increasing number of pre-certified solutions. Brocade and its partners offer complete solutions to meet a wide range of
technology and business requirements, including education and training, support, and professional services to help IT professionals
optimize their technology investments.
Figure 11 illustrates one way a unified FAN deployment might look, if
all of the products discussed in this chapter were used in concert
within a single IT environment.
Figure 11. Brocade Unified FAN Strategy
Customer Case Studies
The first solution in this section was designed to meet the following
challenge: to eliminate the costs and risks of local data storage at a
large number of geographically distributed sites and centralize file
management for rapid migrations and consolidations. The second was
designed to improve file access speed and performance, version control, and remote office backup/DR for dispersed Engineering teams sharing extremely large CAM/CAD files, often greater than 100 MB in size.
StorageX for Data Replication and Protection
The enterprise in this case study, Acme Inc., has been in business for
over 100 years as a leading global manufacturer and retailer of construction materials. It generates enormous amounts of unstructured
file data ranging from spreadsheets and databases to complex material specifications. Much of this information must be stored, backed
up, and readily accessible on a daily basis.
Previously Acme relied on tape storage at over 100 of its 500 sites,
with local employees expected to back up all files and store them manually for safekeeping. However, data often went unsaved, jeopardizing the availability of business-critical information. At the time it was evaluating a Brocade FAN solution, Acme was planning to implement a new Point-of-Sale (POS) initiative to replace legacy terminals with PCs and provide file sharing with new Windows servers dispersed among its retail store locations. To improve efficiency, the IT staff was looking for a single solution to address their key data management needs.
The solution that made the most sense to Acme was a file data migration, replication, and recovery solution leveraging Brocade StorageX as
illustrated in Figure 12.
Figure 12. Acme’s file data replication and protection strategy
StorageX and WAFS Working Together
ABC Design is ranked among the nation’s top design firms and
employs more than 500 staff in 20 offices in the southeast. Using a
distributed business model, ABC offices are responsible for providing
local resources and staff on a regional basis. The remote offices typically enlist ABC Engineers throughout the company’s enterprise to
deliver the necessary expertise in land development, transportation,
utilities, water resources, and ecology. No matter which branch office
is managing the project, the strength of the entire enterprise is available, collaborating effectively as if they were all in one place.
To accomplish this objective, ABC relies on its WAN. However, the sharing of massive Computer-Aided Design (CAD) files, often 100 MB in size, was becoming more and more unworkable. For example, one user might have to wait up to 30 minutes to download a file, and if another user was updating the same file, an entire day's work was sometimes overwritten. The challenge: a centralized data management solution that would enable real-time file collaboration among the ABC offices.
The Brocade WAFS and StorageX combination provides an economical, function-rich FAN solution throughout the enterprise as illustrated
in Figure 13.
Figure 13. ABC’s enterprise data consolidation and protection strategy
Chapter Summary
File Area Networks are constructed using some number of enabling
products in combination with standard IT infrastructure. File servers,
NAS heads, IP networks, and block storage devices can be enhanced
using FAN-specific products such as those discussed in this chapter.
The goal of the FAN designer is to select appropriate enabling products
to supplement their existing IT environment. Doing so simplifies management and improves the overall customer experience for their users.
The remainder of this book discusses ways to design FAN solutions
using these products, and then how to implement and manage those solutions.
Design Considerations
This chapter will familiarize you with the major considerations involved
in FAN design. It discusses some of the choices that a designer must
make and factors to think about when making these choices in the following sections:
“Compatibility” on page 48
“Network Topologies” on page 50
“Reliability, Availability, and Serviceability” on page 52
“Performance” on page 61
“Scalability” on page 62
“Total Solution Cost” on page 63
“WAN” on page 63
“Implementation and Beyond” on page 66
“Planning for Troubleshooting” on page 70
“Chapter Summary” on page 71
Some of the subtopics in this chapter are complex and have entire
books devoted to them. In such cases, this chapter provides only an
overview: a comprehensive discussion is beyond the scope of an introductory treatment.
As with most IT infrastructure design, designing FANs involves balancing conflicting needs and reaching a compromise when necessary. For
example, the requirement to deploy a low-cost network conflicts with
the requirement to deploy a network with high availability. Because of
this, FAN design is often about making trade-offs. This chapter discusses FAN requirements and provides insight to help designers make
the right decision among conflicting requirements.
The intent of this book is not to lay down rigid laws of FAN design.
Since every network has slightly different requirements, there are no
absolute rules about the “right” way of doing things that can cover all
scenarios. The intent is to show you design areas that you need to consider, and to provide recommendations based on industry-accepted
guidelines and best practices.
The first sections cover some of the areas of concern that apply to any
FAN design, regardless of variables such as protocol, distance, topology, or performance. These factors have such wide-ranging
applicability that it is important to keep them in mind when making all
subsequent evaluations. These sections discuss the areas of general
concern, and then describe how they apply to FAN technology in particular.
Compatibility
The first thing to consider when designing a FAN is the compatibility of
each piece of infrastructure equipment with the rest of the infrastructure, and with each host or storage device which may interact with it. If
devices are not compatible, then the network will not function, so any
further considerations are simply irrelevant.
When devices are compatible with each other, it means that they are
capable of being connected to each other, either directly or across a
network. It means that they can “talk.” This can apply equally to hardware and software.
For example, it is necessary for a Network Interface Card (NIC) driver to
be compatible with the operating system in its host. If the driver only
works with Windows and the host is running Solaris, they are incompatible. This is not an example of a piece of hardware being
compatible (or not) with a piece of software, but rather of two software
packages being incompatible with each other. Similarly, the NIC hardware must be compatible with the hardware in the host: a Peripheral Component Interconnect (PCI) NIC cannot be installed in a Solaris host that has only free Serial Bus (SBus) slots.
Fortunately, File Area Networks rarely have compatibility issues at this
level. When such compatibility issues do arise, they can be handled by
IT personnel using standard troubleshooting techniques. Virtually all IT
organizations have personnel and processes in place to handle this
level of compatibility problem.
However, in the context of a FAN, compatibility applies to all hardware
and software components used from end to end—from the application
to the final storage location. For example, it is possible for a NIC to be
compatible with a storage device and the LAN hardware in-between,
but still not be compatible with a feature of the LAN such as Virtual
LAN (VLAN) tags. Similarly, a FAN application such as Brocade StorageX needs to be compatible with its host OS, its host’s NIC, the LAN
equipment used, the network file system protocol in place, and the
storage device(s) that it is managing.
Areas to evaluate for compatibility include:
Upper Layer Protocol
Do all devices in the chain use the same protocols, for example,
do all devices follow the NFSv3 standard, or are some devices
using NFSv2 or NFSv4? Do they all use the standards in the same
way? Even if the correct packets are being sent between two
devices, the vendors still need to have worked together to ensure a
consistent interpretation of the standards.
Node to Node
Protocols such as NFS operate between nodes, but these are not
the only node-to-node behaviors that can cause compatibility
issues. For example, if a volume used by a host is being synchronously replicated, a timing issue between primary and replicated
storage can delay commitment notices to the application. With
performance sensitive applications, this can cause more than just
a slow response.
Node to Switch
Will the node (host or storage) be able to attach to the LAN? This
requires that it follow certain protocols defined in standards, and
implementation details must also be compatible. For example, if a
NIC is using jumbo packets and its switch does not support the
feature, the connection will fail.
Failure Cases
Evaluate all components in the chain: cables, media, switch, even
the entire network. Look at each component and decide how the
node will behave if that component fails completely, or what will
happen if it becomes unstable. (Unstable components can be
worse than failed components.) Sometimes components are compatible in steady state, but incompatibilities manifest when errors occur.
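The kinds of checks described in the list above can be sketched as a small compatibility test between two endpoints. The dictionary keys, the NFS-version negotiation, and the jumbo-frame criterion are illustrative assumptions, not a real product's data model.

```python
def compatible(node_a: dict, node_b: dict):
    """Sketch of an end-to-end compatibility check between two devices.

    Returns (ok, reason). Checks a shared upper-layer protocol version
    and an implementation detail (jumbo frames) that must agree.
    """
    shared = set(node_a["nfs_versions"]) & set(node_b["nfs_versions"])
    if not shared:
        return False, "no common NFS protocol version"
    if node_a["jumbo_frames"] != node_b["jumbo_frames"]:
        return False, "jumbo-frame settings disagree"
    return True, "ok (NFSv%d)" % max(shared)
```

In a real evaluation each link in the chain (host to LAN, LAN to NAS head, and so on) would get a check of this shape, and failure-case behavior would be tested separately.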
Without end-to-end compatibility at all layers, it is not possible for a
solution to work at all. Even if a solution has end-to-end compatibility
at the transport layer, an incompatibility at the application layer could
still prevent it from being a viable option. Brocade has vastly more
experience in this area than any other vendor where SANs are concerned, and has applied this expertise to FAN testing. However, just
using Brocade solutions cannot guarantee end-to-end compatibility
between nodes—it is still necessary to evaluate whether or not a particular host will work together with a particular NAS head, and so on.
Network Topologies
There are a number of different definitions for the term “topology”
depending on context. In the context of network design, it is most common for topology to refer to the geometrical arrangement of switches,
routers, and other networking infrastructure elements that form the
network infrastructure, and the way in which these elements communicate with each other.
There are literally an infinite number of possible topologies, and the
robust architecture of IP routing protocols does allow arbitrarily complex networks to be built. Fortunately, in the vast majority of real-world
cases, it is possible to use simple solutions. This means that a few topologies are typically used as the basis for FANs, and these are combined
or varied to fit the needs of specific deployments. These topologies are
popular because they yield networks with superior scalability, performance, availability, and manageability. In fact, the choice of topology
has such a profound impact on the properties of a FAN that many
designers consider it to be the single most important decision point in
the design process.
Topology Names
When a network diagram is created, the infrastructure equipment can
be seen to form geometrical shapes. Topologies are named based on
the shapes that they create. The most common topologies for FANs include the ring and the star.
It is no coincidence that these FAN topology names are familiar to SAN,
LAN, MAN, and WAN professionals. The same principles that make
these designs function in other networking arenas cause them to be
used in FAN design as well.
However, it is important to realize that topology names usually mean
something a bit different in the context of a FAN. This is because a FAN
is an overlay network.
Overlay Networks
Computer networks all use protocol stacks, and this allows for greater
flexibility. For example, the SCSI protocol is not inherently networkable.
It can address only a tiny number of devices and has no concept of
routing. It is not scalable or flexible at all. However, when SCSI is
mapped on top of Fibre Channel, it becomes massively flexible and
scalable. (At the time of this writing, Brocade has worked with several
customers designing multi-hundred-thousand-port FC solutions.)
When laying a non-networking protocol (for example, SCSI) on top of a
networking protocol (for example, FC), the topology is straightforward.
It is the topology of the networking protocol that is relevant.
However, if a designer lays a more advanced protocol on top of a networking protocol, then each of those layers can have a relevant
topology. For example, it may be that six sites are connected to each
other using a ring topology WAN. This design could have been dictated
by the availability of long-distance links or by financial considerations.
In this case, the term "ring" defines the physical topology of the IP network.
But perhaps five of the sites are branch offices and have WAFS nodes
that use the sixth site as a central storage consolidation location. In
this case, the WAFS design also has its own topology: a star. There are
five arms radiating from a central point. The star topology of the
higher-level networking component sits on top of the lower-level ring.
It is important for FAN designers to understand both levels of network
topology, as both relate to the way in which the FAN will perform. The
underlying IP network will have performance, reliability, and availability
characteristics defined by its ring topology. This means that the FAN
will inherit these characteristics. However, the FAN will also have characteristics related to its star topology. For example, the point of the
star is a central management location, which is not a characteristic of
a ring topology.
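The ring-under-star example can be made concrete: the physical WAN and the WAFS overlay are two separate graphs over the same sites, and a breadth-first search shows how many physical hops each logical star link actually traverses. Site names here are invented.

```python
from collections import deque

sites = ["HQ", "B1", "B2", "B3", "B4", "B5"]

# Physical WAN topology: six sites in a ring.
ring = {s: [sites[(i - 1) % 6], sites[(i + 1) % 6]]
        for i, s in enumerate(sites)}

# WAFS overlay topology: a star with HQ at the center.
star = [("HQ", branch) for branch in sites[1:]]


def hops(graph, src, dst):
    """Physical hop count that a logical (overlay) link traverses."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None
```

The star link HQ-B1 rides a single physical hop, while HQ-B3 rides three; the overlay inherits different latency and failure characteristics on each arm, which is exactly why the designer must understand both levels.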
Ideally the physical topology of the underlying network and the logical
topology of the overlay network should be the same, but this is not
always practical. FANs are almost always built on top of pre-existing
networks, and the FAN designer usually has to adapt to what is already
in place. At most, the designer is limited to making incremental adjustments to the existing network topology. The important thing in such
situations is to make sure that the relationships between the different
levels of topology are understood before deployment, so that any necessary adjustments to deployment plan, FAN design, or existing
network can be made before release to production.
Reliability, Availability, and Serviceability
These three topics are often collectively referred to as “RAS.” While
each RAS component is substantially different from the others, they
each have an effect on the overall functionality of a product, network,
or solution. Designers must keep RAS in mind when selecting components and when choosing a top-level network architecture.
Reliability
Reliability is a measure of how much time a component is statistically
expected to be working versus how much time it is expected to require
service for failures. Hardware and software components both have
reliability characteristics. In fact, so do underlying networks, overlay
networks, and end-to-end solutions.
One way to look at reliability is that it is a measure of how often service
personnel need to “touch” a system. A “reliability event” occurs any
time service is needed, even if the component is still online for the
duration of the service. For example, if a LAN switch has redundant hot
swappable power supplies and one fails, the switch will remain online
during the replacement, but a reliability event will still have occurred.
Designers must consider reliability when selecting system components for two reasons:
1. Higher reliability means lower ongoing support cost. Each time a
component fails, a support action must take place. There may be
hardware replacement costs, support personnel costs, and opportunity costs when people repairing a system are not working on
other more strategic activities.
2. In some cases, low component reliability can translate into system-wide downtime. In other words, a reliability event can become
an availability event, as will be shown in the next section.
The most common measures of reliability are Mean Time Between
Failures (MTBF) and Mean Time To Repair (MTTR). Respectively, these
refer to how often a component is expected to fail, and how long it generally takes to fix the component when a failure occurs. Designers
should look for components with high MTBF and low MTTR.
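These two measures lead to the standard steady-state availability relationship, A = MTBF / (MTBF + MTTR), which connects this subsection to the next one:

```python
HOURS_PER_YEAR = 8760


def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability from the standard MTBF/MTTR relationship."""
    return mtbf_hours / (mtbf_hours + mttr_hours)


def annual_downtime_hours(mtbf_hours: float, mttr_hours: float) -> float:
    """Expected downtime per year implied by the same two figures."""
    return (1 - availability(mtbf_hours, mttr_hours)) * HOURS_PER_YEAR
```

For example, a component with an MTBF of 99 hours and an MTTR of 1 hour is 99% available, which still implies roughly 87.6 hours of downtime per year; this is why designers look for both high MTBF and low MTTR.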
Reliability applies to FAN solutions in a number of ways, and FAN
designers should understand and account for each of these.
Underlying Network Reliability
The largest impact of reliability events usually relates to the staffing burden of ongoing maintenance. It may or may not be the responsibility of
the FAN team to maintain the underlying network, but even if not, there
are still potentially relevant secondary considerations. For example, as
indicated above, a reliability event can potentially become an availability event. If the underlying network becomes unstable or unavailable, it
will not be possible for the overlay network to function effectively. FAN
designers should evaluate how reliable the underlying network will be,
who will be responsible for maintaining it when components fail, and
how likely failures are to create instabilities and/or availability events.
File Server/NAS Head Reliability
The same two general statements apply to the reliability of the platforms serving the FAN file systems. However, the impact of a failure at
this level can be radically different. If an underlying network path fails
between a host and its storage, perhaps the network will have resiliency built in which can provide an alternate path rapidly and
transparently. In most modern LAN/MAN/WAN designs, this is the
case, but it is rarely the case with NAS heads or file servers. Instead,
clustering approaches are required to create resiliency at that level.
The FAN designer should understand the reliability of the file system
server platforms, who is responsible for their maintenance, which reliability events can become availability events, and what is likely to
happen to the FAN if a file system does go away during such an event.
FAN Platform Reliability
FAN platforms are the systems on which FAN-specific software is running. This could include WAFS nodes, general purpose servers running
Brocade StorageX, NAS heads running FLM components, or servers
running GNS and other similar services. The same general categories
of impact consideration apply: who will fix it if it fails, and what is the
likely impact to availability. The distinction is that FAN platforms are
likely to have a direct impact on FAN operations, and it is almost
always the responsibility of the FAN team to deal with such issues. It is
the job of the FAN designer to ensure that FAN platforms have good
reliability metrics such as high MTBF and low MTTR, and that the overall FAN architecture is robust enough to withstand the outage if a
reliability problem becomes an availability event. This is generally
accomplished through the use of redundant components such as
power supplies, in combination with some variation on clustering.
Availability
Availability of a system is a measure of how much time it is able to perform its higher-level functions, such as serving files to end users. Since
availability is generally considered the most important RAS element,
this subsection is more extended than the other two. As with reliability,
both hardware and software play a role. However, availability is not
necessarily impacted by failures in components.
For example, assume that the system being analyzed is a network file
system. Ultimately, the files on that file system “live” on block storage
devices, so the availability analysis would start there. Most modern
enterprise storage devices have redundant power supplies to handle
failures gracefully. If one fails, the system has experienced a reliability
event, since the dead power supply will need to be “touched” by service personnel. Indeed, more supplies imply more frequent supply
failures, so redundant supplies will actually decrease reliability. However, the system and its application are still available during this
failure: the storage device can get power through the other supply. At
some point, someone will need to replace the supply, but even this will
not cause downtime.
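The trade-off can be made concrete with a toy calculation. The sketch below (in Python; the 2% failure probability is purely illustrative, not a vendor figure) shows why a second supply roughly doubles the rate of service events while making an outage far less likely:

```python
# Illustrative comparison: probability of a service event (reliability)
# versus an outage (availability) over some period, for one supply vs. two.
# The 2% failure probability is made up for the example.

p_fail = 0.02

# One supply: every supply failure is also an outage.
single_service = p_fail
single_outage = p_fail

# Two redundant supplies: service calls become roughly twice as frequent,
# but an outage now requires both supplies to fail.
dual_service = 1 - (1 - p_fail) ** 2   # ~0.0396 -> reliability worsens
dual_outage = p_fail ** 2              # 0.0004  -> availability improves greatly

print(round(dual_service, 4), round(dual_outage, 4))
```

This simplification ignores repair windows (a failed supply that is replaced quickly is far less likely to overlap a second failure), but the direction of the trade-off holds.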
File system availability is usually considered to be the most important
consideration in FAN design overall. Problems with availability have an
impact at the end-user level, rather than being contained within IT. In
the previous example, the failure caused a reliability event which
required service personnel to replace the part, but the file system
stayed online. If a non-redundant component failed—such as the host
operating system or motherboard on the file server—then an availability event would have occurred, which would require both service
involvement and application downtime. Given a choice between more
frequent reliability events versus more frequent availability events,
most designers choose the former.
This is important because often the measures instituted to increase
availability also decrease reliability. If each system had only one power
supply, then statistically there would be fewer service events. However,
when a reliability event did occur, it would also cause an availability event.
Reliability, Availability, and Serviceability
The most common measure for this system characteristic is “nines of
availability.” As in, “this application has five nines of availability.” It
also may be abbreviated to simply “nines,” as in, “this is a five-nines
app.” This measure refers to the percentage of time that a system is
available. “Five nines” means that a system is available 99.999% of
the time or more. To put it another way, it means that the system is
unavailable no more than about 0.001% of the time, which comes to
about five minutes each year.
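The arithmetic behind “nines” is simple enough to sketch (in Python; the function name is ours for illustration, not from any FAN product):

```python
# Yearly downtime allowed by a given "nines" availability figure.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct):
    """Maximum yearly downtime, in minutes, for a given availability percentage."""
    return (1.0 - availability_pct / 100.0) * MINUTES_PER_YEAR

print(round(downtime_minutes_per_year(99.999), 1))  # five nines: ~5.3 minutes
print(round(downtime_minutes_per_year(99.9), 1))    # three nines: ~525.6 minutes
```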
In order to achieve five nines of availability for a FAN, designers must
analyze each component involved in the overall system. This includes:
The block storage on which the file data resides
The SAN and FAN networks
The file servers or NAS heads
Name and directory services
End user client service failover mechanisms
Data center power infrastructure
and so on. Each component—hardware or software—could cause an
outage, so each component should be examined for weaknesses, and
you should consider how to recover when these weaknesses become
reliability failures, so that they do not become availability events.
In the context of FAN design, it is appropriate to consider the availability characteristics of each attached device, each infrastructure
component, and the network itself. Any system or application is only as
available as the weakest link. To build a Highly Available (HA) NAS system, it is not sufficient to have an HA cluster between NAS heads. Availability on the back-end
block storage infrastructure, multipathing software to handle path
failover on the back-end, highly available and multi-ported storage subsystems, redundancy in the front-end IP network, and clustering
software are some of the components that may make up such a solution.
However, the most significant consideration for most designers is the
availability of each application supported by the FAN. From the point of
view of end users, it really does not matter if an individual part fails.
They are interested in whether or not their application remains online.
Designers should consider each component of the FAN, but only in
relation to how a failure in any of those components will affect application availability for users.
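The “weakest link” observation can be quantified. Components that must all work multiply their availabilities together, while redundant copies fail only when every copy fails at once. A sketch in Python, with illustrative availability figures:

```python
# Availability of components that must ALL work (a serial/"vertical" chain)
# is the product of their availabilities; redundant ("horizontal") copies
# fail only when every copy fails at once.

def serial(availabilities):
    a = 1.0
    for x in availabilities:
        a *= x
    return a

def parallel(availabilities):
    all_down = 1.0
    for x in availabilities:
        all_down *= (1.0 - x)
    return 1.0 - all_down

# A five-nines NAS head cluster chained to a three-nines back-end path:
print(round(serial([0.99999, 0.999]), 5))                     # 0.99899 -- weakest link wins
# Making the back-end path redundant recovers almost all of it:
print(round(serial([0.99999, parallel([0.999, 0.999])]), 6))  # 0.999989
```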
Single Points of Failure
The first principle of HA theory, for FANs or any other kind of system, is
this: One of anything is not HA.
This means that any component is considered to be a single point of
failure unless it is fully duplicated. This includes all hardware and software, up to and including the front-end IP network itself. In order for a
component to be considered highly available, it has to be duplicated,
and the duplicate should not be directly connected—either at the hardware or software levels—to the component it is protecting. After all,
whatever caused the primary component to fail could easily impact the
duplicate as well, if they are tightly coupled.
Characterizing a pair of components as “tightly coupled” describes
both logical and physical relationships. Even a subsystem that has an
HA architecture internally is still a single point of failure, because all
components are physically located in the same place. Brocade Fibre
Channel SAN directors have redundant power supplies, fans, CPUs,
operating system images, bandwidth switching components, and so
on. Every active component is duplicated, and software mechanisms
are provided to ensure rapid failover between these components. Even
if a major failure took down both a CPU and a bandwidth switching
blade simultaneously, the director would continue to forward frames
without even a fraction of a second of downtime. And yet, if a sprinkler
system went off overhead, the entire chassis could fail as a unit. A FAN
with redundancy at every level on the front-end, but with the entire
block storage solution connected through a single FC director, would
not be a true HA solution.
Following this train of thought to its furthest extreme, designers could
end up attaching HA clusters of file servers to highly redundant components such as Brocade directors, in a reliable fabric design such as a
resilient CE architecture, with a second resilient fabric for redundancy
in case there is a fabric-wide problem with the first fabric, and replicating that entire setup in a second data center. After all, even a site is a
single point of failure. (See Principles of SAN Design for more information about redundant back-end block storage designs.)
Of course, this approach is somewhat expensive. Most customers do
not have the budget for all of that equipment or the personnel to install
and maintain it over time. The next logical thought is to see in which
areas redundancy can be eliminated without compromising application availability—or at least without compromising it too much.
Understanding which areas to eliminate to save on cost requires evaluating their relationships within the HA stack.
High Availability Stack
It is possible for components to be redundant in either horizontal or
vertical relationships. If a director has two power supplies (PS), and
can operate on just one, then the PS units have a horizontal redundant
relationship, because they are at the same “level” in the network. Similarly, if two directors are used and a host has redundant connections
to them, or if two hosts form an HA cluster, then they have a horizontal
relationship as shown in Figure 14.
Figure 14. Horizontal redundant relationships
Components in a FAN also have vertical relationships. The power supplies can be considered “below” the directors, which are “below” the
host’s HA clustering software. Vertical relationships are not necessarily relevant to HA, though. Having two power supplies in a horizontal
relationship within one director implies an HA design strategy, but having one power supply and one director in a vertical relationship does
not, because the components do not back each other up. Vertical relationships imply a dependency rather than redundancy. The
positioning of the director above the power supply indicates that the
director is dependent on the supply, and thus the supply had better be
redundant if the director is critical to the overall solution.
It is worth taking a moment to examine this HA scenario in more detail.
Remember that, while this example discusses availability on the back-end of the FAN, similar architectures apply the same way to the front-end.
Since there are two directors in a horizontally redundant relationship
in the figure, an entire director can fail without causing downtime for
either host or their applications. This may “trump” the need to have
redundant power supplies within each director: a redundant power
supply is intended to prevent a director from failing, but a director failing will not cause an application outage in this example.
In that case, why not save on equipment cost by removing the extra PS
units from the design? To answer this, it is necessary to examine the
HA stack in more detail. Table 1 illustrates the HA scenario from
Figure 14 on page 57 as a stack. If a UPS, power grid, or data center
fails, the application most likely will be down. The power supplies,
fans, central processors, and core cards of Brocade directors are all on
the same “tier” of redundancy, so they are shown on a single line.
To decide if redundancy is really needed at a given layer, evaluate what
happens to upper layers if a failure occurs. For example, a lack of
redundancy at the power grid level will impact both directors simultaneously, which will “take down” all higher layers regardless of their HA characteristics.
Power supplies work somewhat differently. When there is no redundancy at that level, but there is redundancy at the switch or director
level, then it does not imply that the SAN has only one PS unit. There
will be at least one different PS unit in each director, even if that layer
is “non-redundant,” simply because a director cannot work without power.
Table 1. HA layers (top to bottom)

Server clustering
Multipathing software
HBAs
Fabrics (directors)
Power supplies, fans, CPUs, core cards
UPS protection
Power grid
Data center
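The stack behavior can be sketched as a small model. The layer names, ordering, and redundancy flags below are illustrative, loosely following the discussion around Table 1 rather than reproducing it exactly:

```python
# Sketch of the "trap errors low" principle: a failure propagates upward
# from the failing layer until some layer still has a surviving redundant
# copy to absorb it. Layer names and redundancy flags are illustrative.

STACK = [  # bottom -> top: (layer name, is this layer redundant?)
    ("power grid", False),
    ("UPS", True),
    ("power supplies", True),
    ("director/fabric", True),
    ("HBA + multipathing", True),
    ("server cluster", True),
]

def failure_path(failed_layer):
    """Return the layers a failure touches; the last entry is where it is trapped."""
    path = []
    reached = False
    for layer, redundant in STACK:
        if layer == failed_layer:
            reached = True
        if reached:
            path.append(layer)
            if redundant:  # a surviving twin catches the failure at this layer
                return path
    return path + ["application outage"]

print(failure_path("power supplies"))  # trapped immediately at the PS layer
print(failure_path("power grid"))      # propagates up to the UPS layer
```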
With that in mind, when there is a lack of redundancy at the PS level, a
failure there will cause one and only one director to go down. If that
happens in the example shown in Figure 14 on page 57, the entire fabric will have failed. If that happens, the HBA attached to that fabric will
fail. However, because there are redundant HBAs connected to redundant fabrics, no layers above the HBA will be impacted. The failure will
have been “caught” by the multipathing driver which supports the
redundant HBAs. But what does that actually mean?
Generally speaking, it means that the application (that is, the network
file system served by the FAN) will sit idle for some number of seconds
or even minutes while time-outs propagate up and down its driver
stack, before the failover occurs. When it does switch to the alternate
path, there is risk: a chance that something could go wrong, due to
either a configuration error or a simultaneous failure of any component in the redundant fabric stack. In the best case, an administrative
action is likely to be required to fix the SAN, and to switch the host
back to the original fabric once the power supply has been replaced.
To make matters worse, in almost all SANs there will be more than one
host connected to the failing director, which means that a single power
supply failure will impact—to one extent or another—potentially hundreds of applications. This magnifies the impact of failover risk (for
example, multipathing configuration errors, and so on) and the
amount of effort needed to recover from the event.
Finally, the larger the scope of a failure, the more likely that a simultaneous failure will render some or all of the HA protection ineffective. If
an SFP fails on a “Director 2” port attached to a storage array, and a
simultaneous failure occurs on a “Director 1” power supply, then all
applications relying on the storage array port will lose both paths to
their data at the same time.
The bottom line is that, the higher up the HA stack a problem gets
before it is caught, the more complex the solution that finally catches
the failure is likely to be. Server clustering and multipathing are more
complex to design and implement than power supply redundancy. The
higher up the stack an error propagates, the more risk there is that the
redundancy mechanism will malfunction, and the more administrative
effort will be required to remedy the problem even if all goes well. The
net effect is that, if it is possible to “trap” a problem at the power supply level, then it is best to do so, even if it could have been trapped at a
higher level.
This leads to the second major principle of HA theory, which is: Always
trap an error as low in the stack as possible.
The reason designers use redundancy at multiple vertical layers (for
example, power supplies and fabrics) is that this method makes it
much less likely that any given error will result in application downtime, and makes it much easier to recover from any problem at the
lower levels in the stack.
So how does this apply to FAN design? Refer to Figure 2 on page 6 and
the bullet list below it. In order to be fully redundant, the FAN must
have redundancy engineered into each and every vertically related
component between the client and the block storage which contains
the file data. (Note that the figure is rotated sideways to accommodate
the layout of the book, so in this case “vertical” relationships are
actually horizontal.) The principle for designers to take away from this
is that HA FAN solutions should use fully redundant designs, with each
redundant component preferably having a resilient internal design. If
this is not possible for cost reasons, then the single points of failure
must be documented, and plans must be in place to handle failures as
rapidly as possible.
Serviceability
Serviceability is a measure of how easy it is for service personnel to
perform their jobs with respect to a particular product or system. This
is largely a subjective measurement. As with the previous two RAS
metrics, serviceability can include hardware, software, solutions, and
even the overall network architecture.
This is an important consideration for designers for two reasons:
Products with better serviceability tend to cost less to manage on
an ongoing basis. For example, if a product is hard to use, more
money will need to be spent on training, support, or on outside services.
Serviceability can affect uptime. Products that can be serviced
faster can recover from outages faster. Also, more complex products are more likely to be subject to human administrator errors.
MTTR can be viewed as a serviceability metric as well as playing a role
in reliability. If a component fails, how long will it take service personnel to fix it? The smaller the number for MTTR, the faster service was
performed. This implies better serviceability. FAN designers should look at the MTTR for components such as NAS heads and back-end block storage devices.
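The standard steady-state relationship between these two metrics is availability = MTBF / (MTBF + MTTR), which makes the value of fast service easy to see. A quick sketch (in Python; the MTBF and MTTR figures are illustrative, not measurements of any product):

```python
# Steady-state availability from the two metrics discussed above:
#   availability = MTBF / (MTBF + MTTR)
# The figures below are illustrative, not measurements of any product.

def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(round(availability(100_000, 4), 6))    # 4-hour repair window
print(round(availability(100_000, 0.5), 6))  # 30-minute hot swap
```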
However, this is not the most important aspect of serviceability, and
most other aspects cannot be described by well-defined statistical
metrics. For example, many products have built-in diagnostic tools.
Simply counting the number of tools would not provide a useful metric,
since each tool might have or lack features that affect how easily a
system can be serviced. Doing a thorough evaluation of serviceability
as part of the approach to component selection requires substantial
effort on the part of the designer. Since serviceability is rarely the top
concern when making a product selection, the time is usually better
spent elsewhere. As a result, many designers use a simpler approach
of using other criteria to make component selection decisions.
Performance
There are several areas to consider when thinking about performance
in a FAN solution, including protocols, end-point behaviors, link rates,
congestion, and latency. Trade-offs often need to be made between
these and other performance considerations versus cost.
For example, all other things being equal, extending a CIFS file system
over a WAN causes a reduction in performance when compared to running that file system across a LAN. WANs are always slower than LANs,
if for no other reason than that the speed of light dictates higher
latency in a WAN. However, the CIFS protocol in particular was not
designed with WAN performance in mind. In order to achieve acceptable performance, a protocol-aware acceleration technology is
required. FAN designers attempting to extend file systems across
WANs therefore tend to deploy WAFS nodes, after analyzing the protocol-related performance considerations.
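The speed-of-light floor mentioned above is easy to estimate. Light travels through fiber at roughly 200 km per millisecond (about two-thirds of its vacuum speed), so round-trip time grows with distance regardless of equipment, and a chatty protocol such as CIFS pays this toll on every round trip. A sketch (in Python; the figures are approximations):

```python
# Minimum round-trip time dictated by light propagation in fiber
# (~200 km per millisecond, roughly two-thirds of c). Real links add
# switching, queuing, and routing delay on top of this floor.

LIGHT_IN_FIBER_KM_PER_MS = 200.0

def min_rtt_ms(distance_km):
    return 2 * distance_km / LIGHT_IN_FIBER_KM_PER_MS

print(min_rtt_ms(50))    # metro MAN: 0.5 ms
print(min_rtt_ms(4000))  # long-haul WAN: 40.0 ms per protocol round trip
```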
Thinking about performance is particularly important when designing
Disaster Recovery (DR) solutions. (This general category includes
backup/restore as well for the purposes of this book.) In particular,
designers should consider how much time it will take to recover from
various different outage scenarios. This is another reason why a
designer might choose one FAN technology over another.
The designer should keep performance in mind when evaluating any
candidate design, and pick the FAN tools best suited to solving the
problem at hand. Be sure to make decisions that can support the performance requirements of the initial deployment, and all anticipated
future increases in performance demand. Network performance
requirements tend to increase rather than decrease over time, and so
all protocol and topology choices should be able to accommodate a
wide range of performance scenarios.
Scalability
Scalability can mean quite a few different things in the context of file networking. For example, it can be a measure of how much data a particular NAS product accommodates. (A NAS box may be considered
more scalable than another one because the drive cabinet is larger
and therefore it can accept more disks.)
However, in the context of FAN infrastructure design, it usually refers to
two things:
How many clients are supportable?
How much total file data can be added before the network needs
to be restructured?
(A network design may be considered more scalable than another
one because each Brocade StorageX server has more CPU power
and can therefore support more clients.)
In theory, a Brocade FAN solution should be infinitely scalable. However, many real-world considerations can cause lower scalability as a
practical matter. For example, a FAN design might hit a level at which
the IT team cannot deploy WAFS nodes fast enough to meet user
demand. While that example does not illustrate a fundamental scalability issue with WAFS, it does illustrate a practical scalability limit of
deploying WAFS with limited IT resources.
Designing a FAN to scale to the largest size it could be expected to
need to grow to in a reasonable time frame is always more practical
than merely meeting the target requirements at the time of implementation. This prevents the FAN from being “painted into a corner” and
needing to be fundamentally restructured after entering production.
Because of this, designers must consider scalability carefully when
deciding on a top-level architecture.
Total Solution Cost
IT departments no longer have unlimited budgets, if indeed they ever
did, so it is necessary for FANs to be implemented in a cost-effective
manner. The job of the designer should be to consider the total cost of
a solution, rather than looking only at a limited subset of the cost.
For example, some approaches to FAN design might save on hardware
cost, but offset that savings with an even larger increase in cost to the
business from downtime. Using non-HA design strategies is an example of this. Before deploying a non-redundant solution to save on cost,
be sure to consider the long-term impact of downtime to the attached
systems. Consider what the cost would be if a non-redundant design
were used to support a DR solution, and it failed during a disaster and
prevented restoration of business services. Even one incident like this
in a large-scale operation would more than cover the entire cost of
deploying several redundant FANs.
In most cases, designers have found that cutting corners on infrastructure to save short-term cost tends to increase the total cost of
ownership. Even when cost is a key requirement for the design team,
you need to look at all cost components, not just at any one component. Keep this in mind when evaluating network topologies and top-level HA architectures.
As with other network design problems, the first step in designing a
long-distance FAN is to figure out its requirements. In general, a
designer needs to collect requirements such as:
Distance to be supported by each link
Reliability and availability
Security requirements
The remainder of this section discusses some of these considerations
when you are evaluating specific long-distance design problems.
General Distance Considerations
Availability of WAN resources. Can the FAN requirements be met with
the WAN resources already in place? How will that impact the existing
usage of the network? Will the WAN be able to support peak usage,
such as during a restore operation?
Budget vs. value of data. It will cost money to implement an effective
solution. How does this cost compare to the potential loss of data and
application availability if a disaster strikes? If a solution is less expensive to implement initially, will it be less expensive to maintain over
time? Running CIFS over a WAN without an accelerator will be less
expensive initially, but will also be harder to manage and troubleshoot
on an ongoing basis.
Which MAN/WAN technology to use depends on a number of factors,
such as:
Availability of Service. Is there a provider that can deliver service at
each site? For example, if dark fiber is not available between sites,
then another technology will need to be used no matter how well dark
fiber otherwise would have met the requirements.
Application RAS Requirements. For applications that require high reliability, availability, and serviceability, any of the technologies could be
employed provided that appropriate Service Level Agreements (SLAs)
are in place and only enterprise-class components are used.
Application Performance Requirements. Many applications are sensitive to delay and error rates on their storage devices, while others are
less so. Performance on hosts running synchronous mirrors over distance will be severely degraded unless WAN performance is best-inclass, whereas asynchronous mirroring applications can usually tolerate more delay and a higher error rate.
Distance Between Sites. Some technologies are inherently limited to
MAN and shorter WAN distances, such as dark fiber and xWDMs. Others can support long distances, but not without incurring delay and
loss that may impact applications. SONET/SDH and ATM tend to be
good fits for very long distances.
Solution Cost. How much does each service option and network infrastructure cost, both initially and on an ongoing basis? For example, if
you have an application that would benefit from SONET/SDH, but only
have half the budget necessary to deploy it, another solution is probably more appropriate.
Data Migration Considerations
In most cases, when you are planning data migration, the data being
moved is not “live” at the time of its movement. If it is live, it is being
served by a local mirror copy while it is being replicated to a remote
site. Because of this, data migration tends to be tolerant of failures,
performance issues, and errors compared to other extension solutions. The main factors to consider are how long the migration is
allowed to take and how well the production environment is insulated
from the migration project. For narrower migration windows, Brocade
StorageX can be used to accelerate the process by moving most of the
data before the “window” needs to be opened. Other FAN products can
help in similar ways.
Disaster Recovery Considerations
For distance extension solutions intended to solve DR or similar problems, the following points should be considered during the planning process.
Application criticality. Which file systems require protection? Critical
file systems should be protected with fully redundant HA architectures.
Acceptable recovery time. The maximum time an application can be
unavailable. Decide the best mechanism for fast recovery in each scenario. If the data set at the primary location is corrupted, will you copy
the data back over the WAN and restore the application at the primary
site, or will you use a standby server at the recovery location? For scenarios in which you need to copy data over the WAN, you need to
calculate how much time the copy can take. The performance of the
infrastructure between sites will need to support this. As with data
migration, Brocade StorageX and other FAN applications can help to
accelerate this.
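A rough copy-time estimate makes the point concrete. This sketch (in Python) assumes a sustained effective throughput expressed as a fraction of the nominal link rate; both input figures are illustrative:

```python
# Back-of-the-envelope recovery copy time. The efficiency factor stands in
# for protocol overhead and link contention; both inputs are illustrative.

def copy_hours(data_set_gb, link_mbps, efficiency=0.7):
    effective_mbps = link_mbps * efficiency
    seconds = (data_set_gb * 8 * 1000) / effective_mbps  # GB -> megabits
    return seconds / 3600

print(round(copy_hours(2000, 100), 1))   # 2 TB over 100 Mb/s: ~63.5 hours
print(round(copy_hours(2000, 1000), 1))  # over Gigabit: ~6.3 hours
```

If the result exceeds the acceptable recovery time, the design needs either a faster inter-site link or a standby server at the recovery location.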
Acceptable data loss. How much data can be generated at the primary data center before it is copied to a remote site? In the event of a
disaster or even small-scale failure at the primary data center, any
data not copied to the remote site will be lost. This may dictate a synchronous versus an asynchronous solution. If the application cannot
tolerate any data loss at all, synchronous solutions are needed. This in
turn dictates a high-performance inter-site transport, such as Fibre
Channel, xWDM, or SONET/SDH.
Location. The distance between data centers should be great enough
so that at least one site will survive an anticipated disaster (threat
radius considerations). The distance between sites may dictate technological options for the WAN. For example, if the sites are located
halfway around the globe, WAFS over IP may be a viable option,
whereas FC SAN extension might not work at all. If they are located a
few hundred kilometers apart, then native FC or xWDM could be a better choice.
Testing. The DR strategy must be routinely tested, or it may not work
when needed. Before implementation, decide what testing is needed,
and how and when the tests will be carried out. It is also important to
evaluate how testing will affect production operations.
Implementation and Beyond
Once the design is complete, in many cases the job of the designer will
be over. In most large-scale environments, the jobs of designer, implementer, and manager will be distinct. However, there are areas in
which the designer can either facilitate or hinder subsequent tasks.
FAN designers should consider how the solution will be installed and
managed on a day-to-day basis. Indeed, FANs are often deployed specifically because they can improve management tasks. These
generally include monitoring the health of the network, and performing
adds, moves, and changes to the FAN itself and to the attached
devices. It may be useful for the designer to work with whatever personnel will end up managing the FAN and its component parts to plan
a maintenance strategy, listing all day-to-day tasks and determining in
advance how the FAN will affect these tasks.
The following section discusses some of the items related to implementation you should keep in mind while creating a design, and that
may need to be included in the project planning documentation.
Rack Locations and Mounting
Before committing to a specific design, make sure that suitable locations exist for new equipment. This means making sure that the right
kinds of racks are available, and that their locations are appropriate.
Different devices have different rack heights, depths, structural integrity needs, weights, and even widths. For obvious reasons, it is
important to ensure that the racks intended for mounting the equipment can accommodate the characteristics of the equipment to be mounted.
This may require purchasing new racks, lifts, or cabinets specifically
for the project.
It is also important to match the airflow of devices racked together in
the same or adjacent racks. Some devices have front-to-back airflow;
others have back-to-front designs, and still others use a side-to-side
design. If the rack configuration requires putting dissimilar kinds near
each other, then one or the other type may need to be rack mounted
backwards, so that they all pull cold air from the same side of the rack,
and vent hot air together as well. Otherwise, the exhaust fans from
each set of equipment will blow hot air into the intake ports on the
other set, causing all of the equipment to overheat. Figure 15 shows
what happens when equipment is mounted incorrectly.
Figure 15. Bad rack implementation (airflow problem)
It is also best to avoid creating single points of failure in the physical
layout of the FAN, especially if resilient or redundant designs have
been used. The point of deploying redundancy is to avoid single points
of failure, and deploying, for example, redundant Brocade StorageX
servers within the same rack makes the rack itself into a failure point.
The best practice is therefore to separate resilient elements into different racks, and to power the racks in such a way that a power failure
does not cause either to fail. Redundant systems should ideally be
located in entirely different rooms. The degree of separation that
would be recommended depends on a number of factors, such as the
size of the FAN, the difficulty of cabling to different areas, the impact
to the organization of a failure to the entire FAN at the same time, and
so on. It comes down to balancing the risk of a simultaneous failure
caused, for example, by a fire suppression system going off overhead,
versus the cost of distributing the components to different locations.
The designer should ensure that the racks targeted for equipment:
Physically accommodate the equipment
Support the HA architecture of the FAN
Provide correct airflow to cool the equipment
Power and UPSs
When designing a FAN with availability in mind, it is important that power redundancy be employed throughout the network design and the implementation. For HA designs, ensure that the data center has separate power
inputs available, and that redundant components are connected to different power supply units and power grids.
Staging and Validation
Prior to transitioning a FAN solution to production, it is important to validate that the solution really works. The best way to do this is to build it
in a separate environment, usually called a “staging configuration,”
and run tests over it to ensure that it satisfies performance and availability expectations.
Once the FAN products are staged, the implementation team will need
to inject faults into the system to verify that the FAN itself and all
related devices are capable of recovering. The next step involves generating an I/O load that approximates anticipated application I/O
profiles. Finally, the team will want to run an I/O load while also injecting faults to approximate a worst-case scenario, that is, a failure in the
FAN while it is in production.
The designer is rarely responsible for personally executing this process
in a large-scale environment. However, the designer will be in a good
position to tell the implementation team how to design the I/O profile
that will run across the network(s), and what kinds of faults are
expected to be recoverable.
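The staging sequence described above can be sketched as a tiny test harness. Everything here (the component names, the "healthy" flag, the synthetic load) is an illustrative stand-in, not part of any Brocade tooling; it only shows the shape of injecting a fault while an I/O load runs and verifying the redundant pair survives.

```python
class StagedComponent:
    """A stand-in for one redundant FAN element (e.g. a replication agent)."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def fail(self):
        self.healthy = False

    def recover(self):
        self.healthy = True

def run_io_load(components, operations):
    """Drive a synthetic I/O load; an operation succeeds if any
    redundant component is still healthy (the HA claim under test)."""
    survived = 0
    for _ in range(operations):
        if any(c.healthy for c in components):
            survived += 1
    return survived

# Stage a redundant pair, inject a fault mid-test, then recover it.
pair = [StagedComponent("agent-a"), StagedComponent("agent-b")]
pair[0].fail()                      # injected fault
ops_ok = run_io_load(pair, 1000)    # load continues on the survivor
pair[0].recover()
```

A real staging exercise would, of course, use actual clients and real failure modes (pulled cables, powered-off nodes) rather than a flag.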
Implementation and Beyond
Release to Production
Broadly, there are two categories of rollout for FANs:
•  “Green field” deployments, where the file network is built from the ground up
•  Installed base upgrades, where existing file servers are enhanced by adding new software and equipment
Green Field Rollout
This is the easiest scenario to discuss, since there is no risk to production applications during the rollout phase. For most sites, the sequence for a green field rollout is as follows:
1. Create a design and documentation set.
2. Create a high-level rollout plan, describing the phases below and
giving schedule targets for each.
3. Source and acquire equipment and software. This may be done all
at once, or in a phased manner.
4. Begin to deploy and test the FAN. Build out the core infrastructure
first, so that the overall structure of the network does not need to
keep changing during the rollout.
Installed Base Upgrade Rollout
Most FAN deployments seem to fall into this category. Adding new network equipment to an existing environment can be as simple as
plugging it in and turning it on, or as complex as building a dedicated
test environment and running months of stress tests prior to production use. It depends on site-specific change control procedures, the
degree of confidence the network administrators have in the new
equipment, and the impact to the company that would result from a
failure during the rollout.
For sites with extremely tightly controlled environments, release to production may resemble a green field deployment: the new solution
could be built in an isolated environment and production applications
migrated over to it only once it was proven to be stable.
Day-to-Day Management
There are a number of ways in which the designer can simplify ongoing
management tasks. For example, using a redundant and resilient
architecture simplifies upgrading to newer technologies, which will
inevitably be adopted over time. Also, as part of the design and implementation processes, the designer should create a document
repository. If the design is well documented, and maintenance procedures are created during the process, management will be easier later on.
The designer should create a configuration log, which explains how the
FAN is configured and why it was configured that way. It should serve
as a record for change management processes: if a change is made
and something stops working, the administrator can refer back to this
log to find out what changed and why. The designer should ensure that
procedures require administrators to update the log whenever
changes are made to existing configurations or if switches are added
or removed.
Planning for Troubleshooting
As with any other networking technology, it will occasionally be necessary to troubleshoot a FAN, and as in any other network, this process
means finding out which piece of hardware, software, or configuration
is malfunctioning. The designer is rarely responsible for performing
troubleshooting, but there are ways in which the designer can simplify
the process.
For example, it is generally easier to troubleshoot a small network than
a large one. If the designer chooses a hierarchical design in which the
FAN is broken up into smaller “mini-FAN” solutions, then failures will
be contained within a smaller area. This makes it less likely that there
will be trouble later on, as well as making it easier to figure out what
went wrong if something does go awry.
Similarly, it is easier to troubleshoot FANs that limit the number of FAN
product interactions. If the solution uses Brocade StorageX, then
stacking FLM, WAFS, and MyView on top of it might not be easy to
debug. At least initially, it would be desirable to pick one FAN application and get that working before rolling out more.
Chapter Summary
It is never possible to dictate laws of design for complex networking
solutions, and this chapter should not be taken as an attempt to do so.
Each deployment will have unique requirements, and unique priorities for resolving conflicting requirements. You should review this and
other data center literature, but also consult with the designers who
work for your support provider for insight into the best solution for your
requirements and environment.
Brocade StorageX
Brocade StorageX is designed to solve a variety of real-world IT management problems. “Chapter 3: Building Blocks” starting on page 31 discusses the product at a high level. This chapter provides more detailed information about the product architecture, as well as real-world advice on deployment and management, in the following sections:
•  “Deployment Examples” on page 75
•  “Product Architecture” on page 82
•  “Data Migration Tasks” on page 98
•  “Data Migration Methods” on page 99
•  “StorageX and NetApp Storage Device Integration” on page 104
•  “Troubleshooting” on page 109
•  “Chapter Summary” on page 111
Windows and UNIX environments both have operational challenges where file management is concerned, and StorageX is designed to alleviate many of them. For example, in a Windows environment, StorageX helps mitigate or entirely eliminate these persistent management challenges:
•  To retrieve files, users must know the physical location of the data.
•  Mapping a single drive letter to an individual share scales poorly.
•  Limits are imposed on the capacity and performance of storage devices.
•  The amount of data that can be stored inside a single share or mount point is limited.
Chapter 5: Brocade StorageX
•  User group data is spread across multiple shares, servers, and locations to accommodate size and access needs.
•  Users require training from both administrators and co-workers about configurations and changes.
•  Users are required to make concessions for administrators because of the lack of transparency during server issues, upgrades, or moves.
•  Administrators must manage complex logon scripts to provide multiple drive mappings based on client needs.
•  Administrators must change user objects to reflect changes to user home directories.
•  Administrators must balance the downside of backup and restore issues related to volume size against the downside of changing the user environment.
•  Administrators must justify the introduction of new technology based on its benefits weighed against the cost of impacting the user environment.
•  Administrators must complete migrations on the weekend or after hours to limit the disruption to users.
Brocade StorageX is an open, standards-based software platform that
can be seamlessly and non-disruptively introduced into an IT infrastructure. The software runs on any industry-standard server running
Microsoft Windows 2000 or later, and does not require the deployment of a new hardware device. Because it is not in the data path,
StorageX does not introduce any performance or latency issues when
used to create and manage the GNS. It uses existing file systems,
which means that administrators are not required to change their network operating procedures to enjoy the benefits of StorageX.
StorageX also integrates with the existing network security framework
and administrators can use security settings such as group permissions to automatically create and populate a GNS. No software or
agents are required on the client machines accessing the namespace;
and unlike many GNS solutions, StorageX does not require the introduction of a new protocol on the network.
Deployment Examples
This section illustrates some of the more popular use case scenarios for Brocade StorageX: the problem faced by users or administrators, and then how StorageX can be deployed to solve it.
Single User Drive Mapping Proliferation
Windows clients typically map drive letters to network shares based on
the physical location of the data. UNIX clients have a similar mechanism using mount points. Each file server or NAS appliance gets its
own drive letter on each PC. Users are often confused about which
drive contains a particular file, and needlessly waste productivity
cycles searching for their data, as shown in Figure 16.
Figure 16. Too many drive mappings
The StorageX GNS provides an elegant solution to this dilemma. It creates a single drive mapping with a logical view of the users’ files below
it, as shown in Figure 17.
Figure 17. Drive mappings consolidated
Consolidating drive mappings using StorageX:
1. The user has five mapped drives to data located in different locations on the network (top window).
2. StorageX is used to create a link to each of these shares (middle window).
3. The user can now see the same data using one mapped drive (bottom window).
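The consolidation shown in Figure 17 can be modeled as a simple mapping from one logical root to many physical shares. The server and share names below are hypothetical examples, not actual StorageX configuration syntax; the sketch only illustrates how a single logical view can resolve to many physical locations.

```python
# One logical root whose folders link to shares on different physical
# servers. All paths are invented for illustration.
namespace = {
    r"\\company\root\Engineering": r"\\server1\eng",
    r"\\company\root\Marketing":   r"\\server2\mktg",
    r"\\company\root\Sales":       r"\\server3\sales",
    r"\\company\root\HR":          r"\\server4\hr",
    r"\\company\root\Finance":     r"\\server5\fin",
}

def resolve(logical_path):
    """Translate a logical namespace path into the physical share behind it."""
    for link, target in namespace.items():
        if logical_path.startswith(link):
            return logical_path.replace(link, target, 1)
    raise KeyError(logical_path)

# The user sees one consistent tree; the lookup happens behind the scenes.
physical = resolve(r"\\company\root\Marketing\plan.docx")
```

The point of the sketch is that clients only ever hold the logical path, so the right-hand side of the mapping can change without touching a single client.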
Multi-Department Drive Mapping Inconsistency
Without using a GNS, different departments usually have drive letters
mapped differently. This can create confusion when IT personnel are
attempting to troubleshoot issues. If an engineering user calls the help
desk and says that there is a problem with the “E:” drive, that user will
be referring to something radically different than a marketing user who
logged exactly the same problem statement with the help desk. This is
illustrated in Figure 18.
Figure 18. Inconsistent mappings
Without StorageX, it might be difficult to solve this problem. Indeed, it
might be impossible, since there are only 26 letters in the alphabet
and there could be many more than 26 required drive mappings.
With StorageX, the solution is simple. By using a GNS, the view of the
whole environment from the perspective of the end user can be consolidated under one mapped drive, and this drive letter can be the
same for all departments. The differences in view are at the file/folder
level, and are centrally managed by IT. If a user calls in with an E: drive
problem, it means the same thing no matter what department they are
in. IT can then easily determine whether the issue is with the entire E:
drive or with specific folders. If the issue is with folders, the central
StorageX interface can determine which physical file servers are associated with the dysfunctional folders.
Drive Mapping and Communications
Another issue with having different mappings for different users or
departments relates to interdepartmental communications. For example, let’s say that one user e-mails another with the location of a file.
This will only work if both users view the file the same way.
Figure 19. Inconsistent mappings and communications
Typically, users resolve this by attaching the file to a message, as
shown in Figure 19. This can be challenging and inefficient in a number of ways. For example, it needlessly eats up network bandwidth and
storage capacity on e-mail servers. With StorageX, all users can have
the same drive letter mapping. Assuming that the users in the previous
example have appropriate permissions, the transmitting user would
simply need to communicate the subdirectory path, rather than attaching the file.
Infrastructure Changes
If drive letter mappings or UNIX mount points are tied to a physical
location and that location changes, then each and every client will
need to be updated. This can occur, for example, when a storage
server is upgraded to a higher-capacity system.
Figure 20. Migrating to higher-capacity storage
StorageX solves this problem by creating a replica of the data on a
new, larger volume and then changing the link to point to the new copy
of the data. Using the StorageX Expand command creates a new link
under the top-level link automatically. StorageX moves the data and
redirects users to the new location, and there is no need to reconfigure a single client.
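The replicate-then-repoint pattern just described can be sketched as follows. The paths and the `replicate` callable are hypothetical stand-ins for the actual StorageX data mover; the sketch shows why clients never need reconfiguration: only the namespace link changes.

```python
def migrate_volume(namespace, link, new_target, replicate):
    """Replicate data to a new (e.g. larger) volume, then repoint the
    namespace link in a single metadata update."""
    old_target = namespace[link]
    replicate(old_target, new_target)   # baseline copy while users still
                                        # read through the old target
    namespace[link] = new_target        # one update; clients untouched
    return old_target                   # old volume can now be retired

copies = []
ns = {r"\\gns\projects": r"\\old-filer\projects"}
retired = migrate_volume(ns, r"\\gns\projects", r"\\new-filer\projects",
                         lambda src, dst: copies.append((src, dst)))
```

In practice the copy step would be an incremental replication policy rather than a single callable, but the control flow is the same: copy first, switch the link last.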
Storage Optimization
StorageX can be used to optimize capacity utilization across the GNS. Consider a scenario in which a GNS manages marketing and sales data, with four physical devices storing the data at varying degrees of utilization. In the Marketing pool, Mktg 1 is running at 98% capacity, while Mktg 2 and Mktg 3 are each under 5%. Figure 21 shows what the environment would look like after StorageX is used to optimize these resources.
Figure 21. Optimizing capacity utilization with StorageX
The administrator can use the load balancing policy within StorageX to
automatically detect capacity levels. When StorageX detects the imbalance, it can migrate folders away from Mktg 1 and distribute them
across Mktg 2 and Mktg 3. Because the GNS is automatically updated
to reflect the new file locations, this entire process is seamless and
non-disruptive to the end users accessing the folders.
When a user clicks on a folder that was migrated off Mktg 1 onto one
of the other devices, he or she is automatically redirected to the new
file location without knowing it has been moved.
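The rebalancing pass can be sketched as below. The device names, sizes, and the 90% threshold are illustrative values, not StorageX defaults, and the folder-selection logic is a deliberately naive stand-in for the product's policy engine.

```python
def rebalance(pool, threshold=0.90):
    """Move folders off any device above `threshold` utilization onto
    the least-utilized device, recording each move so the namespace
    could be updated to match."""
    moves = []
    for name, dev in pool.items():
        while dev["used"] / dev["capacity"] > threshold and dev["folders"]:
            folder, size = dev["folders"].pop()
            # Pick the least-utilized other device as the target.
            target_name = min(
                (n for n in pool if n != name),
                key=lambda n: pool[n]["used"] / pool[n]["capacity"],
            )
            dev["used"] -= size
            pool[target_name]["used"] += size
            pool[target_name]["folders"].append((folder, size))
            moves.append((folder, name, target_name))
    return moves

pool = {
    "Mktg1": {"capacity": 100, "used": 98,
              "folders": [("q1", 30), ("q2", 30), ("q3", 38)]},
    "Mktg2": {"capacity": 100, "used": 4, "folders": []},
    "Mktg3": {"capacity": 100, "used": 5, "folders": []},
}
moves = rebalance(pool)
```

Each entry in `moves` corresponds to a namespace update in the real product, which is what makes the migration invisible to users.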
Similarly, the load balancing policy can be used to seamlessly expand the physical storage devices in a pool. In the example above, the Sales 1 device is running at 84% capacity. Using StorageX, the administrator can add Sales 2 and automatically balance the Sales folders across the two devices without disrupting end-user access to their data.
Deployment Examples
Some of the benefits of capacity balancing are:
•  Automated, policy-based setup and execution of load balancing allows for unattended storage optimization.
•  The GNS is automatically updated to reflect the data migration, so that it is transparent to the end user.
•  Multiple policy settings allow administrators flexibility in choosing capacity thresholds for storage devices.
Data Lifecycle Management
Data lifecycle management solutions address budgetary problems within IT by matching the cost of storage to the value of the data it holds. StorageX can be used to create a tiered storage architecture to automate this category of solution. Policies can be scheduled to automatically migrate files as shown in Figure 22.
Figure 22. Data lifecycle management concept
When files are migrated based on administrator-defined criteria such
as last access time or age, the GNS is automatically updated to reflect
the new location of the files. When users access a file after it has been
moved, the GNS automatically sends them to the new location.
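The selection half of such a policy can be sketched as a filter on last access time. The 180-day cutoff and the file map below are invented examples, not StorageX defaults; in the product, the files selected here would be moved to a cheaper tier and the GNS updated automatically.

```python
import time

def select_for_tiering(files, max_idle_days=180, now=None):
    """Pick files whose last access time is older than the cutoff.
    `files` maps path -> last-access time in epoch seconds."""
    now = time.time() if now is None else now
    cutoff = now - max_idle_days * 86400
    return sorted(path for path, atime in files.items() if atime < cutoff)

NOW = 1_700_000_000
files = {
    "reports/2019.pdf":  NOW - 400 * 86400,  # long idle: tier down
    "reports/today.pdf": NOW - 86400,        # recently used: keep
}
to_move = select_for_tiering(files, max_idle_days=180, now=NOW)
```

Age (creation date) or size could be substituted for last access time without changing the structure of the policy.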
By using the data lifecycle management features of StorageX, administrators can reduce the overall cost of storage by placing critical data on
premium storage devices, and as data becomes less critical to the
organization, automatically moving it to less expensive storage.
Product Architecture
StorageX is a robust set of applications built on the GNS that can unify
heterogeneous storage devices and provide consistent data services,
regardless of the underlying hardware. Figure 23 shows the primary
applications for StorageX in different areas of the IT infrastructure.
Figure 23. StorageX applications
StorageX can be used to deliver replication, migration, consolidation, failover, and lifecycle management services across the enterprise, spanning remote sites, data centers, and DR sites. The product architecture is highly scalable and flexible, and solves a wide variety of problems related to single data center activities, branch office administration, and disaster recovery sites.
This section discusses the components of StorageX, and some of the
elements of its architecture that allow it to provide those functions.
Client/Server Design
Brocade StorageX was designed using a distributed client/server
architecture as shown in Figure 24. This provides a flexible and efficient means to manage storage.
Figure 24. Client/server architecture
The architecture includes the following components:
StorageX Client. The client provides the StorageX administrator interface and communicates with the StorageX server, as well as with the DFS Root(s).
StorageX Server. The server acts on behalf of the client to carry out
policies and to maintain configuration and reporting information. It is
implemented as an NT service.
StorageX Monitoring Agent. The agent allows distributed processing of policies and facilitates HA deployments. It is also implemented as an NT service.
StorageX Replication Agent. The agent carries out file transfers for StorageX policies. StorageX provides both Windows and UNIX replication agents.
Data migration and consolidation are two of the innovative features of Brocade StorageX. Migration and consolidation are no longer point-in-time IT projects for administrators. Rapid changes in hardware, technology, and capacity demands are forcing enterprises to implement
ongoing migration strategies to optimize available assets. However,
consolidation presents many challenges for IT administrators. These
include: minimizing client downtime, supporting ongoing data migration practices with minimal IT resources, consolidating data in a
heterogeneous environment, and maximizing Return On Investment
(ROI) by effectively utilizing storage resources across the enterprise.
The key feature of the StorageX implementation is its use of a global
namespace. Using a GNS to facilitate data migration can result in significant time and cost savings. IT administrators who need to migrate
from Windows NT 4 to Windows 2000/2003; Novell to Windows; Linux
to Solaris; or Windows or UNIX file storage to NAS, should consider
deploying a GNS to facilitate the migration. It creates a virtual pool of
storage and enables administrators to readily identify migration
requirements across platforms. The concept and benefits of GNS are discussed further in “Chapter 1: FAN Basics” starting on page 1.
GNS Deployment
Brocade StorageX provides administrators with reusable policies that
control key data management and data movement activities. Policies
are a key component of the StorageX architecture, as they save considerable time in implementing a GNS, and also simplify ongoing
management tasks. Policies for initial deployment of the GNS are conveniently managed using the StorageX Namespace Creation Wizard,
as shown in Figure 25. Similarly intuitive interfaces are provided for
managing other policies on an ongoing basis.
Figure 25. Namespace Creation Wizard
The StorageX GNS is built on the Microsoft Distributed File System
(DFS), which is intended for use as a service that assists in managing
unstructured data. This unstructured data consists of objects such as
Microsoft Office documents, spreadsheets, and PDF files.
StorageX gives an administrator a rich tool set for GNS management. It
allows an administrator to create and manage multiple namespaces
through a single, user-friendly console and provides a means to manage not only the GNS itself, but also the underlying file system.
Figure 26. StorageX administration console
Using StorageX, administrators can populate the namespace on-the-fly
from an existing share or export. StorageX can also build a namespace
based on the file security settings used in a pre-existing environment.
It also provides the administrator with the ability to monitor, scale, increase availability, audit, back up, restore, and snapshot the namespace.
The DFS substructure within StorageX requires remarkably little management in most environments. However, StorageX does give
administrators the tools to manually create:
•  Logical Folders
•  Single Links
   -  Single target
   -  Multiple targets
•  Multiple Entries
   -  Create multiple links
   -  Create folder structure
Figure 27. Creating a DFS structure
Replication Technology
Replication technologies are mature in the marketplace, and it is common to see vendors of traditional replication solutions repackaging
their products to fit the current needs of the enterprise. However, many
of the technologies were designed for specific applications and infrastructures that have evolved considerably since the introduction of the
replication solution. Brocade StorageX Byte-Level File Differential Replication (BFDR) is designed specifically to address the current data
protection priorities of the enterprise, so it is not a repackaged tool
with historical roots.
Various replication technologies are suited to specific business applications, storage platforms, and network infrastructures. For example,
databases and mission-critical data require real-time protection (that
is, continuous replication), compared with the typical files created by
users and applications that can be replicated at periodic intervals and
may not require real-time protection.
While no single replication technology fits all business objectives well,
corporations increasingly seek comprehensive solutions to avoid having to buy multiple point solutions. In general, software-based
replication is more flexible, because it does not lock the IT department
into a specific proprietary hardware platform. It is often true, however,
that hardware-based replication is more reliable and delivers higher
performance. Figure 28 shows various replication technologies and
their associated application layers.
Figure 28. Replication technologies
There are different kinds of replication technologies available, but they
all require some kind of replication engine, that is, a software or hardware product that actually moves data around. This engine can be
located on either the source or destination side, or it can be located
between the two.
Figure 29. Replication engine locations
StorageX can be deployed using any of these models; it offers two types of replication:
•  File-level replication
•  Byte-level replication
File-Level Replication
File-level replication involves moving an entire file whenever a change
is detected. The replication agent (RA) can use two different methods
to copy the changed files: “Safe Copy” and “Copy-in-Place” (see Figure 30). The former method copies the file to a temporary location, validates the copy, and then removes the old copy and renames the new one. The latter method is faster: it simply overwrites the old replicated file with the new one. However, if a disaster occurs during the copy, data can be lost. Because of this risk, “Safe Copy” is almost always the recommended method.
Figure 30. Data integrity options for replication
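The "Safe Copy" method can be sketched in a few lines. This is an illustrative pattern, not StorageX's actual implementation: write to a temporary file, validate, then atomically rename over the old replica, so a failure mid-copy never corrupts the existing copy. The size check stands in for whatever validation the real agent performs.

```python
import os
import shutil
import tempfile

def safe_copy(src, dst):
    """Copy src over dst without ever leaving dst in a half-written state."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dst) or ".")
    os.close(fd)
    try:
        shutil.copyfile(src, tmp)
        if os.path.getsize(tmp) != os.path.getsize(src):  # validation step
            raise IOError("copy validation failed")
        os.replace(tmp, dst)  # old replica survives until this atomic swap
    finally:
        if os.path.exists(tmp):
            os.remove(tmp)

# Demonstrate on a scratch directory.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "source.txt")
dst = os.path.join(workdir, "replica.txt")
with open(src, "w") as f:
    f.write("new contents")
with open(dst, "w") as f:
    f.write("old contents")
safe_copy(src, dst)
with open(dst) as f:
    result = f.read()
```

"Copy-in-Place" would be the equivalent of `shutil.copyfile(src, dst)` directly: faster, but a crash mid-copy leaves `dst` corrupted, which is exactly the risk described above.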
File-level replication is the simplest method, but not the highest performing when replicating large files that receive small updates. The
issue is that file-level replication requires moving data that has not
changed, as well as the data that has changed. Byte-level replication is
more complex, but it addresses this limitation.
Byte-Level Replication
StorageX Byte-Level File Differential Replication (BFDR) replicates
files periodically between devices using byte-level file differencing.
BFDR initially makes a full copy of the source on the target device,
including all file attributes and security settings. Following the initial
replication, StorageX automatically replicates only the compressed
byte-level changes to the file. BFDR uses a multi-stage process of analyzing, comparing, and updating files, as illustrated in Figure 31.
Figure 31. BFDR methodology
In cases where small portions of large files are changed, BFDR can
provide reductions in excess of 90% of the amount of data transferred
(and therefore bandwidth utilized). This is particularly useful when replicating data across a WAN as part of a DR solution.
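The core idea behind byte-level differencing can be sketched as a block comparison. Real BFDR is far more sophisticated (compression, rolling comparison, attribute handling); this toy version only illustrates why a small edit to a large file produces a small transfer. It assumes the new file is the same length as the old one.

```python
def block_diff(old, new, block_size=4):
    """Compare fixed-size blocks of two file images and emit only the
    changed blocks as (offset, bytes) pairs."""
    delta = []
    for offset in range(0, max(len(old), len(new)), block_size):
        o = old[offset:offset + block_size]
        n = new[offset:offset + block_size]
        if o != n:
            delta.append((offset, n))
    return delta

def apply_diff(old, delta):
    """Rebuild the new image on the target from the old image plus the delta."""
    data = bytearray(old)
    for offset, block in delta:
        data[offset:offset + len(block)] = block
    return bytes(data)

old = b"AAAABBBBCCCCDDDD"
new = b"AAAABBXBCCCCDDDD"   # a one-byte edit in a 16-byte file
delta = block_diff(old, new)
```

Here only one 4-byte block crosses the wire instead of the whole 16-byte file; scale the same ratio up to a multi-gigabyte file with a small edit and the bandwidth savings described above follow.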
Historically, getting optimal WAN replication performance required proprietary hardware. Since StorageX is an open application, it is possible
to use it to replicate data between radically different subsystems, as
shown in Figure 32.
Figure 32. Heterogeneous replication
Replication Management
Managing replication is largely automated within Brocade StorageX.
During initial setup, an administrator configures the replication
requirements by using the console. StorageX provides administrators
with control over aspects and attributes such as security, bandwidth,
scheduling, and replication protocol (file or byte-level file differencing),
among others. The overall architecture of replication management is
shown in Figure 33.
Using StorageX, administrators can:
•  Run reports detailing status and configuration
•  View reports detailing status and configuration
•  View event logs showing history of operations
Figure 33. Managing replication
The replication manager maintains configuration information in its data store and is responsible for the ongoing evaluation of storage policies. At scheduled intervals configured in the policy, the server initiates transfers and assigns them to the replication agent for processing. Based on policy configuration, the server also invokes user-provided scripts for custom processing operations upon initiating the
transfer. The StorageX replication agent carries out the transfer,
reporting back status and results to the replication manager, which
then makes this information available in the StorageX console.
In carrying out the operation, the replication agent:
•  Validates the source and destination
•  Determines the files or file changes to be sent
•  Scans for potential problems and reports if necessary
•  Performs the transfer operation as directed
•  Retries operations as needed if it encounters errors
•  Reports results and statistics to the StorageX server
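The agent's sequence listed above can be sketched as a single control loop. All of the callables here are invented stand-ins (the real agent validates shares, walks file systems, and moves bytes); the sketch only shows the validate, determine, transfer-with-retries, report flow.

```python
def run_transfer(validate, compute_changes, send, retries=3):
    """Validate endpoints, determine what to send, transfer each item
    with retries, and return a status report for the server."""
    if not validate():
        return {"status": "failed", "reason": "validation"}
    changes = compute_changes()
    attempts = 0
    for item in changes:
        for _ in range(retries):
            attempts += 1
            if send(item):
                break                     # item transferred
        else:
            return {"status": "failed", "reason": item}
    return {"status": "ok", "files": len(changes), "attempts": attempts}

# A send callable whose first attempt fails, simulating a transient error.
flaky = {"count": 0}
def flaky_send(item):
    flaky["count"] += 1
    return flaky["count"] != 1

report = run_transfer(lambda: True, lambda: ["a.txt", "b.txt"], flaky_send)
```

The returned dictionary plays the role of the status and statistics the agent reports back to the StorageX server for display in the console.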
Brocade StorageX can fail over between multiple sites within minutes,
as compared to hours or days typical with other replication products.
This is possible due to the seamless integration of BFDR with GNS.
Optimizing Replication
Replication can be optimized for performance (speed) or security (data
integrity). Table 2 shows parameters that can be modified within StorageX and their impact on replication optimization.
Table 2. Optimizing replication for performance or security

Optimize for speed:
•  Copy-in-place is enabled
•  “Retry failed file opens…” is disabled
•  “Abort if no file is successfully opened…” is disabled
•  Event Details is set to “Don’t list files”
•  “Delete orphaned files in destination folders” is disabled
•  “Preserve last access time” is disabled
•  “Allow loss of additional file streams” is enabled

Optimize for security:
•  Copy-in-place is disabled
•  “Retry failed file opens…” is set to the default when the default is not zero
•  “Abort if no file is successfully opened…” is set to the default when non-zero
•  Event Details is set to “List only files with errors”
•  “Delete orphaned files in destination folders” is enabled
•  “Preserve last access time” is enabled
•  “Allow loss of additional file streams” is disabled
•  “Copy security descriptor” is set to “Every time the file or folder is replicated”
In most environments, it is desirable to optimize for security. It is rarely
desirable to replicate a file quickly but incorrectly. However, there are
occasions when speed may be paramount, and administrators can
use these settings to speed replication processes if necessary.
Brocade StorageX can use the NTFS Change Journal to determine
which files are to be replicated rather than scanning the entire source
to detect changes. This is another way of optimizing performance, and
has no security-related downside. Of course, this works only on NTFS
file systems, but it does offer time savings when you need to replicate
large directory trees with limited changes scattered throughout the
directory hierarchy.
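The change-journal optimization can be sketched as a filter over a journal of recent changes instead of a walk of the whole source tree. The journal format below is an invented stand-in for the NTFS USN journal; the point is simply that the work is proportional to the number of changes, not the size of the tree.

```python
def changed_paths_via_journal(journal, replicated_roots):
    """Return the paths to replicate: journal entries for creates or
    modifications that fall under one of the replicated roots."""
    return sorted(
        path for op, path in journal
        if op in ("create", "modify")
        and any(path.startswith(root) for root in replicated_roots)
    )

journal = [
    ("modify", "projects/specs/fan.doc"),
    ("create", "projects/build/out.bin"),
    ("modify", "scratch/tmp.txt"),       # outside the replicated tree
]
to_replicate = changed_paths_via_journal(journal, ["projects/"])
```

A full-scan approach would instead stat every file under `projects/`, which is exactly the cost this feature avoids on large directory trees with scattered changes.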
Snapshots are another performance-enhancing feature that carries no security risk. StorageX takes a snapshot (shadow copy) of the volume in which the source data is located, and then uses the snapshot as the source. Doing this reduces the possibility of being unable to transfer source files that are held open for exclusive access by an external application at the time of replication. Without snapshots, StorageX would have to retry such files or give up on replicating them.
When copying files without snapshots, administrators can select one of the following options:
•  Retry failed file opens x times at y second intervals. Set how many times and how often (in seconds) StorageX should attempt to open a destination file.
•  Abort if file is not successfully opened in x minutes. If files cannot be opened within the specified time, automatically stop the replication.
Replication Cleanup
When a file is deleted from primary storage, there must be a mechanism to remove the file from the replicated copy. However, it is not usually desirable for this to be an automatic mechanism. For example, a network-borne virus might attack a network file system by
deleting all of its files. It would not be helpful to have a replication
engine propagate that malicious behavior by automatically deleting
the files from the replicated copy. StorageX provides the “delete
orphaned files on destination folder” option. Running this manually
ensures that the source and destination have identical data, without
allowing rogue processes to perform similar actions without administrator consent.
Figure 34. Deleting orphans
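The orphan check itself reduces to a set difference, sketched below. As the text emphasizes, acting on the result should be a deliberate, manually run step, so this illustrative function only reports orphans and never deletes anything; the file names are invented examples.

```python
def find_orphans(source_files, destination_files):
    """Files present on the replica but no longer present on the source.
    Deleting them is left to an explicit administrator action."""
    return sorted(set(destination_files) - set(source_files))

source = {"a.txt", "b.txt"}
dest = {"a.txt", "b.txt", "deleted-last-week.txt"}
orphans = find_orphans(source, dest)
```

Keeping reporting and deletion separate is what prevents a rogue process (such as the virus scenario above) from having its mass deletions silently mirrored to the replica.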
Data Migration Tasks
Brocade StorageX migration usually involves the following tasks:
1. Create a baseline copy of the source data.
2. Create incremental copies of changed data.
3. Connect the users to the migrated data on the destination servers. As long as the users are already accessing the GNS, this is accomplished by simply updating the links to point to the destination shares.
4. Back up the source data.
5. Rename all the shares on the old source servers so users cannot get to them.
6. Perform a final replication from the renamed old source server shares to the destination shares.
7. Stop sharing the shares on the old servers that have been migrated.
8. Wait for and handle any user access issues. Once confidence is high that user access is robust, delete the data under the shares on the old servers that have been migrated.
It is important to note that using StorageX for migration does not
require a GNS, though GNS migrations are always easier to perform.
To optimize performance when executing multiple simultaneous replications, note that each migration task is run as a separate thread.
Multi-core or multi-processor servers can be used as data movers for
better performance. By default, 20 threads are available, which is
more than sufficient for most applications. It is also advisable to turn
off virus scanning on the StorageX host during bulk migration tasks, if
the administrator believes it is safe to assume that the source volumes
are already “clean.” When performing multiple migrations in this manner, take care not to set up overlapping replication/migration jobs, as
this can have unpredictable results.
Data Migration Methods
Where data structures are concerned, there are two major categories
of data migration: migrations in which data structures change, and
migrations in which they do not change. While this might sound obvious, it is nevertheless important, since these two categories have
radically different challenges associated with them, as illustrated in
Figure 35.
Figure 35. Categories of migration: change or no change
This section discusses these categories and the associated operational differences in migration strategies.
Migration Procedures with Unchanging Structures
When migrating data from one volume to another and leaving share definitions largely alone, the fundamental structure of the file system is unchanged. It might be larger or smaller than before, might be in a radically different location, and might even be in a different brand of storage enclosure. However, it is possible to address the
new enclosure in a fundamentally similar way both before and after
the migration.
The major difference between this scenario and any scenario in which
data structures change is that a GNS is not, strictly speaking, required
in this scenario. There are still benefits to having a GNS in place, but
the migration can be performed in a relatively seamless manner even
without a GNS.
To perform this kind of migration without a GNS, server names need to be updated by one of these methods:
Decommission the source server, and use aliasing as a mechanism for updating the physical path to the new destination.
Decommission the source server, rename the new server with the name of the retired server, and then remove it from the domain and have Active Directory create a new security identifier for the server.
This method—not using a global namespace—is not recommended,
because it creates downtime for end users. Also, it is more than likely
that future migration tasks will not be as simple and therefore will
require a GNS. Implementing the GNS up front on a simple migration
task will make operations much easier when that day comes.
When a global namespace is used for a one-to-one migration, the process is transparent to the end user. The administrator performs the
following steps:
Create and configure a GNS if one is not in place.
Create a StorageX migration policy that copies shares from a
source server to the destination. Configure the policy to update
the GNS automatically.
Execute the policy. The policy will create new shares on the destination server, copy the contents, and update the namespace.
Once you confirm that the destination server is operational and the
environment is stable, you can decommission the source server.
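The transparency in these steps comes from namespace indirection: users resolve a logical link, and the migration simply retargets that link. The sketch below models the idea in Python; the namespace table and share names are hypothetical and are not the StorageX API:

```python
# Hypothetical GNS: logical paths map to physical shares
namespace = {
    r"\\corp\files\eng": r"\\oldserver\eng",
}

def migrate(gns, logical, new_share):
    """After the data copy completes, retarget the GNS link.
    Users keep using the logical path and never see the move."""
    gns[logical] = new_share

def resolve(gns, logical):
    # What a client effectively does when it opens the logical path
    return gns[logical]

migrate(namespace, r"\\corp\files\eng", r"\\newserver\eng")
```

After the retarget, `resolve` returns the new physical share while the logical path users see is unchanged.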
Migration Procedures with Changing Structures
Changing structures can mean going from one storage subsystem to
many, going from many to one, or radically changing the definitions of
shares even when you are going from one old subsystem to one new
subsystem. Figure 36 shows a migration with changing structures
between two arrays. The structural changes involve the way directories
and shares are laid out.
Figure 36. Example of changing structure migration
For migrations in this category, a GNS should always be used, and the process is transparent to end users:
Create a GNS if one is not already in place.
Create a StorageX Migration Policy that copies share(s) from a
source server to the destination server. Configure the policy to
restructure the data in any way required. Also configure the policy
to update the GNS automatically.
Execute the policy. The policy will create new shares on the destination server, copy the contents, and update the namespace.
As before, once you confirm that the destination server is operational and the environment is stable, you can decommission the source server.
Migration Procedures with Server Consolidation
Server consolidation is a special case of changing structure migration.
Administrators will want UNC paths to remain unchanged when the
underlying files are moved to other servers or to other paths. Paths
may be embedded in links, in applications, and in other places where
the names are difficult to change. The Brocade StorageX consolidation roots function keeps UNC paths the same when moving the
underlying files within a path. StorageX performs an aliasing function
and responds to requests for a server name for which it is aliasing. You
can use the procedure previously discussed, but in many cases, the
following procedure is preferable.
These are the steps to consolidate servers using this alternate procedure:
Disallow user access to the servers involved.
Create new DNS entries for retiring servers.
Rename the retiring servers to the name you specified in the new DNS entry. For example, rename Server01 to Server01-old.
Update the DNS entry for the original server names that each of the retiring servers were named, and point the DNS entry for that host name to the server that now hosts the server consolidation root.
Remove the computer from the Active Directory since it is now an alias and no longer an existing computer.
If applicable, update the Windows Internet Name Service (WINS) database by removing any static entries that resolve the old server names.
Using StorageX, add the retiring servers to the Other Resources folder in the Physical View.
Using StorageX, create a server consolidation root for each server that will be consolidated.
For each consolidation root, create links to the shares on the servers that you want to migrate. For example, create links to shares that you want to migrate from Server01-old. When users now access \\Server01\share, they will be redirected to \\Server01-old\share.
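The aliasing step above amounts to a lookup table: requests for the retired server name are answered by the consolidation root and redirected to the renamed host. This Python model is illustrative only; the names are the hypothetical ones from the example:

```python
# Consolidation root alias table: retired name -> renamed host
aliases = {"Server01": "Server01-old"}

def redirect(unc_path, aliases):
    """Rewrite \\Server01\share to \\Server01-old\share if an alias exists."""
    server, _, share = unc_path.lstrip("\\").partition("\\")
    target = aliases.get(server, server)   # unaliased names pass through
    return "\\\\" + target + "\\" + share

print(redirect(r"\\Server01\share", aliases))  # \\Server01-old\share
```

Paths for servers without an alias entry are returned unchanged, which is why the consolidation root can front only the retiring servers.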
It is now possible to perform a normal migration to the new server.
Access to user files is restored and you can migrate the files transparently. This process is illustrated in Figure 37.
By using a GNS and server consolidation roots, it does not matter what
you want the new data structure to be; you can rearrange the data to
suit the needs of your organization.
NOTE: Using a server consolidation root should be considered a temporary measure. Once the servers have been consolidated, the new consolidated server should be implemented into the global namespace.
Figure 37. Migration using consolidation root
StorageX and NetApp Storage Device Integration
Brocade StorageX can integrate with virtually any NAS head or file
server, but some features have been optimized to work with NetApp
storage devices (filers). For example, NetApp filers can have some of
their element management tasks performed from within StorageX.
NetApp Filer in the Physical View tree
Integrated management of NetApp filer. StorageX provides integrated
access to the FilerView management tool. When a NetApp filer is
selected in the Physical View, the Manage tab hosting the FilerView
interface is added to the right pane.
Display of NetApp-specific information. StorageX displays information
from the NetApp filer, including Data ONTAP version, filer volume
names, filer system status information, and NetBIOS aliases in use on
the filer.
NetApp vFiler in the Physical View tree
Unlike NetApp filers, vFilers do not display a Manage tab in the right
pane when they are selected in the Physical View. vFilers are managed
through their hosting filer.
NOTE: The displayed platform information for a vFiler is Data ONTAP without any revision number; this is because vFilers do not report version information.
StorageX uses a combination of RSH/SSH commands and file updates
to perform filer changes. The StorageX service account should be configured as an administrator on the NetApp appliance. RSH accounts
can be configured through the machine's Properties dialog on the Shell tab or on the Remote Shell page of Tools/Options. (See Figure 38 on
page 105.) Configuration information can be refreshed through the
NetApp filer tab and from the context menu.
StorageX can also be used to configure volumes, qtrees, and shares.
This information can be found and manipulated in the Properties dialog. (See Figure 38 and Figure 39 on page 106.) StorageX provides a
hierarchical view of the volumes and qtrees under Physical View.
Figure 38. Configuring filer management in StorageX
Figure 39. Configuring volumes on a filer
There are a number of different ways to manage Snapshot copies from StorageX. Figure 40 and Figure 41 illustrate some of the possible use
cases, and describe some of their operational parameters.
Figure 40. Snapshot schedule initiated from StorageX
Figure 41. Snapshot scheduling properties and NetApp filer
The most common problems fall into the following three categories:
Installation
GNS management
Replication
This section provides guidance on resolving common issues in each of those areas.
Installation Problems
The account used to install StorageX from media must have
administrator privileges on the local system.
The account used when running the StorageX console should be
an administrator on the local system and have the appropriate
permissions to administer the DFS roots that are managed.
The account used for the StorageX server, which is specified during installation, must have the following permissions and privileges:
Appropriate permissions to administer the DFS roots that are managed
Administrator privileges on any machines hosting either source or destination data replications
"Run as service" privilege
In general, problems arise from lack of appropriate permissions to one
or more of the following:
DFS root
DFS root share
Active Directory
Root replica hosts
Machines where StorageX components are installed
Namespace Problems
The service account needs admin permissions on the server.
Share permissions must give Full Control to the StorageX service account.
The service account needs admin permissions on DFS objects.
DNS forward and reverse lookup zones must be consistent. Incorrectly configured DNS entries can cause many problems, such as:
Incorrectly naming a server in Physical view
Failure to create links
Failure to create or manage roots
Test the DNS forward and reverse entries by using the nslookup command.
Refer to the domain-based root by its fully-qualified domain name
when attempts to access it by its NetBIOS domain name are
unsuccessful or result in inaccurate data.
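The forward/reverse consistency check that nslookup performs manually can also be scripted. This Python sketch uses the standard library resolver; the host name passed in is a placeholder for the server being verified:

```python
import socket

def dns_consistent(name):
    """Resolve name forward, then resolve the IP back, and report whether
    the reverse entry agrees with the forward one."""
    ip = socket.gethostbyname(name)                  # forward lookup
    reverse, aliases, _ = socket.gethostbyaddr(ip)   # reverse lookup
    names = {reverse.lower(), *[a.lower() for a in aliases]}
    return ip, name.lower() in names

# Example against a name every host can resolve
ip, consistent = dns_consistent("localhost")
```

A mismatch between the forward and reverse answers is exactly the kind of misconfiguration that causes the naming and link-creation failures listed above.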
Replication Problems
Check whether there are firewalls between the system hosting the
StorageX server or StorageX monitoring agent services and the
intended destination system for the replication agents that are
blocking the port where the StorageX Replication Agent service listens (TCP port 6002).
Check whether the console system or agent system is running a
personal firewall and whether the appropriate ports are open on it
to allow communication.
Check whether any process is already listening on the port the
StorageX server and StorageX replication agent services use.
Check that the system hosting the replication agent can ping the
system hosting the StorageX server using the server name.
The following are tasks to complete when the replication agent fails to deploy:
Check whether the system being deployed is up and reachable.
Check whether there are network routing and infrastructure
issues preventing network connectivity in both directions.
If the target system is running Windows NT 4, check whether
MSXML3 is installed on the system.
Check whether the normal admin$ shares are published on the
destination system.
Check whether the account under which the StorageX server service is running has privileges to write to the destination admin$
share to install the replication agent.
Check whether there are any firewalls between the system hosting the StorageX server or StorageX monitoring agent services and the intended destination system for the replication agents that are blocking the ports where the StorageX server service listens (TCP ports 6001, 6002, and 6005).
Check whether any process (such as Exchange) is already listening on the ports the StorageX server and StorageX replication agent services use.
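Whether a port is already taken, or reachable through a firewall, can be checked directly from a script. The sketch below tries to bind a port locally and to connect to a remote listener; the StorageX port numbers (6001, 6002, 6005) come from the checklist above, and the host names are placeholders:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """True if something on this machine already listens on the port
    (the bind fails), which would block the StorageX service."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True

def can_reach(host, port, timeout=3.0):
    """True if a TCP connection to host:port succeeds, i.e. no firewall
    is blocking the listener on the far side."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (6001, 6002, 6005):
    print(port, "in use" if port_in_use(port) else "free")
```

Running the bind test on the StorageX host and the connect test from the agent side covers both directions of the checklist.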
Chapter Summary
Brocade StorageX can facilitate 24×7 access to data through backups
and replicas across globally distributed locations. The StorageX GNS
can be used to seamlessly expand networked storage by making
expansion transparent to users. It increases administrator productivity
by automating data management tasks such as migration, consolidation, failover, and remote site management. It can work alone or
together with other Brocade products to simplify and accelerate DR
management in heterogeneous, distributed environments. The GNS
feature improves user productivity by simplifying access to distributed
data, and ensuring consistent views of networked storage across the
enterprise. StorageX should form the core of any FAN solution, and its
features will complement and enhance other deployed FAN products.
Chapter 6: Brocade WAFS
Brocade Wide Area File Services (WAFS) technology provides reliable
LAN-speed access to remote file servers over WANs, while consolidating enterprise-wide storage for centralized management. “Chapter 3:
Building Blocks” starting on page 31 discusses the product at a high
level. This chapter covers the following topics:
“WAFS Business Case” on page 114
“Challenges to Centralization” on page 115
“Brocade WAFS Architecture” on page 117
“Availability and Integrity Benefits” on page 122
“Other Architectural Benefits” on page 125
“Edge Office IT Services” on page 125
“Data Reduction” on page 133
“Deploying WAFS” on page 137
“Chapter Summary” on page 144
Instead of requiring a dedicated file server at every remote location,
with the associated management overhead, users of Brocade WAFS
are able to deploy all storage in a central location. This chapter provides more concrete information about the product architecture and
provides real-world best practices on deployment and management.
Historically, WAN performance and reliability issues prevented a centralized solution from being realistic. If file data were moved
from branch offices to central data centers, users at the remote sites
would no longer be able to work effectively because file access would
be too slow. WAFS centralizes primary storage while alleviating the performance penalties. A WAFS node replaces the remote file servers with
a high-speed caching system: when users request files, the WAFS
product fetches the files using a WAN-optimized protocol, and maintains a local cache of each file for high-speed save operations and
subsequent open operations.
Brocade WAFS eliminates the need for remote backup, overcoming
barriers of latency, bandwidth, and network reliability to solve a
decades-old problem in distributed data management and application
performance. WAFS also addresses some of the world’s most problematic remote office IT issues, including storage and server proliferation,
backup and restore management, business continuity, and Total Cost
of Ownership (TCO) optimization.
WAFS Business Case
Without WAFS, organizations tend to create remote “islands” of data,
with complicated and difficult replication and backup processes. The
overall objective of WAFS is to enable server consolidation, and the
objective of consolidation is to move existing fragmented server environments toward a more secure, manageable, and adaptable
architecture. In other words, move data from the islands to a single
“continent,” which enables IT to respond rapidly and cost-effectively to
current and future business challenges and opportunities.
Scenarios where consolidation is relevant include:
Regulatory compliance-driven projects
Mergers and acquisitions
Significant changes in structure or processes
An IT group may drive a consolidation project in response to declining
effectiveness of its service in remote locations. For example, IT might
feel that it is delivering unsatisfactory service levels or unreliable systems in branch offices. Similarly, there might be long lead times to
make enhancements and perform maintenance at branch offices.
Simplifying branch office infrastructure can resolve these issues.
Storage and server consolidation can provide business benefits in a number of areas:
Storage systems that are more cost effective and reliable
More consistent and appropriate IT support for corporate business processes and goals
Improved data accuracy, consistency, and accessibility across the enterprise
More effective use of IT skills and personnel
Improved IT service levels
Increased value of IT systems
Challenges to Centralization
There are four primary challenges to centralizing WAN-based file and
application access. This section discusses those challenges and their
root causes—and suggests how WAFS can meet these challenges.
WAN Latency and Protocol Design
Saving, opening, or transferring a file across the WAN can take minutes. Sometimes, a disruption prevents the operation from completing successfully. The protocols involve thousands of individual round trips,
all necessary and all very fast on a LAN. Because of WAN latency,
access times on a WAN are two to four times longer than on a LAN.
Protocols such as CIFS, NFS, and TCP were never designed to handle
file access in high-latency environments. Every round trip generated by
the CIFS or NFS protocol over the WAN incurs a high-latency delay, dramatically impacting performance.
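The cost of those round trips is simple arithmetic: total protocol time is roughly the number of round trips multiplied by the round-trip latency. The figures below are illustrative assumptions, not measurements of any particular protocol:

```python
def protocol_time_s(round_trips, rtt_ms):
    """Time spent waiting on the network for a chatty protocol exchange."""
    return round_trips * rtt_ms / 1000.0

round_trips = 2000      # assumed CIFS round trips to open one file
lan = protocol_time_s(round_trips, 0.5)   # ~0.5 ms LAN round trip
wan = protocol_time_s(round_trips, 80)    # ~80 ms WAN round trip

print(f"LAN: {lan:.1f} s, WAN: {wan:.1f} s")  # LAN: 1.0 s, WAN: 160.0 s
```

The same exchange that is invisible on a LAN becomes minutes of waiting on a WAN, which is why reducing round trips matters more than raw bandwidth.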
Lack of Bandwidth
Lack of network bandwidth may be one cause of unacceptable speed
for users accessing files and applications remotely. Branch office WAN
bandwidth from T1, DSL, or frame relay connections can be 10 to 100
times slower than LAN bandwidth. Competing for this limited bandwidth are a variety of applications—such as ERP, Voice over IP (VoIP),
Web applications, and file transfers. Adding Virtual Private Network
(VPN) and other network layers on top of the WAN can further reduce
WAN bandwidth.
Lack of Data Integrity
WANs are inherently less reliable than LAN connections and may be subject to packet loss and network outages, especially when constructed using satellite or microwave link technology. Files in the process of being saved or opened can hang or time out, resulting in
data loss. Recently changed files also run the risk of being lost in
flight. User-derived workarounds further compromise data integrity
with a proliferation of out-of-date or unprotected files.
Residual Branch Office Servers
Certain branch office services must remain at the branch office.
Server and storage consolidation eliminates remote file servers,
remote backup hardware (such as tape drives), and backup software.
However, many other branch office IT services cannot be delivered
effectively over a WAN and may force this type of server to remain at
the branches, undermining the consolidation project. For example, IT
needs to continue to provide branch offices services such as DHCP,
DNS, domain controller, and print services, which cannot effectively be
served from the data center due to WAN latency. In addition, at each
branch office, IT needs management software, server-related operating system license/upgrade protection plans, and server-related antivirus software services.
Summary: Workarounds Don’t Work
Unfortunately, the ideal consolidation and collaboration scenarios,
which involve the need to share files in real time, do not take into
account the non-trivial technical challenges involved when remote
users try to access applications or open, save, or transfer files over a
WAN. The common issues of WAN latency and reliability, as well as
insufficient bandwidth, can translate into wait times of several minutes when users attempt to open or save even a basic Microsoft Office
file. Users experience even longer wait times when they try to access
larger files, such as those found in Computer-Aided Design (CAD),
architecture, and product design environments.
If a Microsoft Word document takes three minutes to open and five
minutes to save, and the relative lack of stability in the WAN environment keeps corrupting or causing applications to crash, users will
complain—and rightfully so.
And frustrated users usually come up with creative workarounds to
avoid these issues. However, these workarounds—which often involve
gargantuan e-mail attachments or copies of files stored in multiple
locations—cause even more problems. IT may look to invest in additional WAN links in an attempt to solve the performance problem with
additional bandwidth, but all of these workarounds still do not address
the root causes of application performance problems over the WAN.
Brocade WAFS Architecture
The Brocade WAFS solution gives remote users access to centralized
applications or centrally stored data over the WAN at LAN-like speeds
and lets IT organizations consolidate servers and storage. As a result,
costs are dramatically reduced and data integrity and protection are
significantly enhanced. With WAFS, data can be stored and maintained in one place: at data centers where storage can easily be
backed up and managed, putting an end to the challenges around the
management of remote office file servers.
Figure 42 illustrates the WAFS product architecture at a high level.
Figure 42. WAFS product architecture
Node Types
There are two different types of WAFS node: data center nodes and
branch office nodes.
Data Center Nodes
The WAFS Core node in a centralized data center communicates with
the main file servers and application servers, serving as a protocol
gateway for communications with branch office WAFS nodes deployed
at remote locations. It also maintains file locks for files being opened
through WAFS nodes anywhere across the organization, ensuring the
same level of protection for users accessing common files remotely or
at the data center.
Branch Office Nodes
The WAFS Edge node in the branch office accelerates access to data
center servers for branch users through TCP optimization and intelligent caching of the active data set being used from various central
application and file servers. To branch office desktop computers, the
node looks just like a standard local server that also provides key IT
services such as domain controller, print, DNS/DHCP, Web caching,
and software and application distribution (Microsoft SMS).
Core Technology
To accelerate file-sharing traffic, Brocade WAFS provides optimization,
while maintaining the data integrity required to deliver secure, accurate, and cost-effective server/storage consolidation via a distributed
file system. By virtue of being file aware, the distributed file system
optimizes CIFS/NFS protocols and supports end-to-end security mechanisms such as SMB signing or IP Security (IPSec). It also maintains
full availability and transparency for file read/write operations in case
of WAN disruptions and outages.
Unlike other WAFS products that force a tradeoff between performance and reliability, the Brocade WAFS distributed file system technology incorporates a lock manager, journaling, and recovery. This technology delivers maximum performance while protecting remote users from stale data, write collisions, WAN outages, and system failures.
This file-aware foundation extends CIFS and NFS shares over the WAN
transparently, preserving all file-sharing semantics. Local data center
and branch office users actually use the same share/lock information
maintained by the data center file server, exactly as if all users were in
the same physical building.
System-wide coherency and consistency are guaranteed, not only
across files accessed at any WAFS node, but across the entire enterprise, including files accessed directly on the data center file server.
This is possible because the nodes are aware of the location, nature,
and status of all files. A distributed file system runs between the WAFS
Core and Edge nodes, which uses the data center file server as the
authoritative arbiter of a file’s locked status.
Figure 43. Exemplary WAFS deployment
The following description of the flow of operations explains how the
distributed file system guarantees data integrity in unreliable WAN
environments. It represents the workflow that is triggered when a file
open request is issued by a branch office user through the WAFS Edge node.
WAFS Edge. If the file is cached locally, then a signature hash of the
file is sent along with the file open request through the WAFS Transport Protocol to the data center WAFS Core node.
WAFS Core. When a signature hash is received, the differences
between the file stored on the branch WAFS node and the version on
the data center file server are computed, and if there are differences,
the entire file is prepared to be sent as a response. The response is
compressed and streamed to the branch WAFS node.
A CIFS lock request is made to the data center file server, and if successful, a lock is placed on the file on the file server. The file lock result is returned in the response to the branch WAFS node.
WAFS Edge. The changes are applied to the file on the local cache
store, and the lock result is recorded. The file is ready to be read by the
branch office user, and is given read-only or read-write access depending on the file lock result.
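The open flow above can be illustrated with a toy version of the signature check: the edge sends a hash of its cached copy, and the core returns new content only when the hashes differ. This Python sketch uses SHA-256 as a stand-in for whatever signature the product actually computes:

```python
import hashlib

def signature(data):
    return hashlib.sha256(data).hexdigest()

def core_respond(server_copy, edge_hash):
    """Core side: send the file only if the edge's cached copy is stale."""
    if signature(server_copy) == edge_hash:
        return None           # cache is current; nothing to send
    return server_copy        # stale; stream the updated file

cached = b"draft v1"          # edge's local cache
server = b"draft v2"          # authoritative copy at the data center

update = core_respond(server, signature(cached))
if update is not None:
    cached = update           # edge applies the changes locally
```

When the hashes match, only the small hash crosses the WAN, which is where the latency savings come from.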
The distributed locking infrastructure highlighted above ensures that
file updates made by one user will never conflict with updates made by
another. Updated files are always available to all users across the network. All file changes are synchronously logged to persistent storage at
the remote office, guaranteeing data integrity—even during WAN disruptions, power outages, and reboots.
Performance Architecture and Benefits
Full Metadata Caching
The distributed file system implementation for Brocade WAFS ensures
that a true on-disk hierarchy is maintained on the nodes for CIFS and
NFS optimization. Full metadata consisting of file attributes, permissions, and security Access Control Lists (ACLs) are stored along with
the path names of the file objects. This metadata caching ensures
faster directory/folder browsing experiences even for extensive hierarchies where tens of thousands of file objects are stored at a single
directory level.
The distributed file system ensures that metadata information for file
and directory objects are always kept up-to-date with the file servers at
the data center through synchronous metadata updates. A check is
made against the file server before performing an operation to create,
delete, or rename files on the WAFS Edge node.
Full Asynchronous Write-Back
When an application write request is initiated from desktops to servers
through a WAFS node, the node has to ultimately send the write to the
file server via the data center/Core node.
Depending on the architecture of the WAFS system, there are two ways
to handle write requests:
Write-through—The node sends the write to the file server, waits
for a response, and acknowledges the write to the application.
Write-back—The node locally acknowledges the write and then
sends the write in the background to the file server.
In the case of WAFS nodes, write-back is implemented by having the
node acknowledge writes to the application without first sending the
write to the file server. Writes are streamed in the background to the
server. This technique hides latency from the application, leading to
substantial improvements in performance, but introduces time windows where the application “believes” that the data is written to the
file server, but in reality the data has not been committed. Traditional
write-back inherently imposes a tradeoff between performance and a
potential compromise on data consistency and safety. In order to make
write-back implementations safe, significant enhancements, such as sophisticated logging and recovery techniques, need to be in place to
guarantee safe recovery in case of WAN disruption. This type of solution is called “asynchronous write-back.”
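A minimal model of asynchronous write-back with journaling: every write is appended to a persistent log before it is acknowledged, so pending writes survive a crash and can be replayed to the file server later. This is only a sketch of the concept, not the product's implementation:

```python
import json

class WriteBackCache:
    def __init__(self):
        self.journal = []   # stands in for the persistent on-disk log
        self.server = {}    # stands in for the data center file server

    def write(self, path, data):
        # Log first, then acknowledge: the write is recoverable even if
        # the WAN is down or the node reboots before the flush.
        self.journal.append(json.dumps({"path": path, "data": data}))
        return "ack"        # application sees a local-speed acknowledgment

    def flush(self):
        # Background task: replay journaled writes to the file server,
        # clearing each entry only after it is committed.
        while self.journal:
            entry = json.loads(self.journal.pop(0))
            self.server[entry["path"]] = entry["data"]

cache = WriteBackCache()
cache.write("/eng/spec.doc", "v2")
cache.flush()               # WAN available again: changes propagate
```

The journal is what turns risky write-back into safe asynchronous write-back: the acknowledgment is fast, but nothing is ever acknowledged that cannot be replayed.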
Brocade WAFS overcomes traditional write-back issues by implementing full write-back through the use of logging and journaling
techniques, technology that allows recovery in case of network disruptions, crashes, or reboots of the host operating systems. File-aware
WAFS systems manipulate and cache the entire file, inclusive of metadata. They maintain the identity of a file rather than just the handle, so
even if the handle is lost, the file identity and location are maintained
by the file cache, and the file is always recoverable, as illustrated in
Figure 44.
Figure 44. Write-back file locking architecture
Additionally, in Brocade WAFS, a file system-based implementation
ensures that the session context used to modify the file is always available to transfer file updates to the file server at a later time.
Furthermore, as long as there are pending writes on the branch office
node, the file remains locked on the data center, so that users cannot
make conflicting updates to the file. The system can always propagate
the modified file to the file server in the event of WAFS node reboots,
WAN disruptions, or file server unavailability.
Availability and Integrity Benefits
End-to-End Security
Networked applications often employ security at the application layer (Layer 7) to ensure complete end-to-end security between the client- and server-side components of an application. For example, the Server Message
Block (SMB) protocol provides the basis for Microsoft file and print
sharing and many other networking operations, such as remote Windows administration. To prevent man-in-the-middle (MTM) attacks,
which modify SMB packets in transit, the SMB protocol supports the
digital signing of SMB packets. (In such attacks, a hijacker can potentially extract important user credentials by snooping packets out of a
CIFS conversation. With this information, the hijacker would have the
ability to mimic a CIFS client to any CIFS server in the given domain
where the users’ credentials are valid.) SMB signing ensures that the
identity of the file server matches the credentials expected by the client and vice versa. This policy setting determines whether SMB packet
signing must be negotiated before further communication with an
SMB server is permitted.
NOTE: Windows Server 2003 comes with a number of default security
templates that are designed to help meet the needs of different computer security levels and functions. SMB Packet Signing is enabled by
default on both the Highly-Secure (Hi-Sec) and Secure templates.
SMB signing has become a necessity with the release of a readily available hacker tool called SmbRelay, which automates a man-in-the-middle attack against the SMB protocol. SMB signing protects against
SMB session hijacking by preventing active network taps from interjecting themselves into an already established session. Inline network
optimization products rely on packet interception to implement their
optimization and lose their ability to provide latency optimization when
SMB packet signing is enabled. As a result, these classes of products
force organizations to make a choice between security and performance.
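Conceptually, SMB signing attaches a keyed message authentication code to each packet so a man-in-the-middle cannot alter it undetected. The sketch below uses Python's hmac module to show the idea; it is a generic illustration, not the actual SMB signing algorithm or key schedule:

```python
import hmac
import hashlib

session_key = b"negotiated-session-key"   # established at authentication

def sign(message, key):
    """Compute a keyed MAC over the packet contents."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message, sig, key):
    # Constant-time comparison; fails if the packet was modified in transit
    return hmac.compare_digest(sign(message, key), sig)

packet = b"SMB WRITE /share/file.doc"
sig = sign(packet, session_key)

ok = verify(packet, sig, session_key)            # untouched packet passes
tampered = verify(b"SMB WRITE /share/evil", sig, session_key)
```

Because the attacker does not hold the session key, any packet modified in flight fails verification, which is why signing defeats tools like SmbRelay.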
Figure 45. WAFS security advantages
Brocade WAFS natively supports SMB packet signing. It is unique among WAFS/WAN optimization products in fully supporting CIFS optimization when SMB packet signing is enabled between the CIFS client
and servers. Enabling IPsec as a default security policy between CIFS
clients and servers is yet another example that highlights the inability
of the protocol-snooping approach to operate in such environments, whereas Brocade WAFS natively supports such security policies.
Figure 45 shows the difference between Brocade WAFS end-to-end security and competing approaches.
Full CIFS Disconnection Support
Brocade WAFS storage is tightly integrated with file-aware differencing
and dictionary-based data reduction techniques in a single architecture. The locally held copy is kept up-to-date and treated efficiently
during open and save operations. The distributed file system technology synchronizes the data center copy with the WAFS node in the
remote office to make sure it is updated with the most recent changes,
guaranteeing 100% consistency. The performance benefit to end
users, along with file coherency and consistency, ensures that users
can access files even during WAN disruptions. When the WAN connection is restored, files are synchronized automatically.
Chapter 6: Brocade WAFS
WAFS shields end users from WAN disruptions of up to several minutes, allowing disconnection support under these conditions. In
certain environments, such as wireless/satellite or very remote locations, longer-term periods of WAN disconnection can be regular
events. Additionally, even at remote offices in urban areas, WAN disconnections can occur due to operator error, service provider
scheduled maintenance, or physical disruption. The WAFS distributed
file system implementation allows branch users to work as seamlessly
as possible through periods of WAN disconnection without risking
coherency and consistency of data.
This is a guarantee that no network-level acceleration solution can
offer. Only file-aware products can provide this benefit.
In disconnected mode, the branch office node enables the following:
Allow browsing of cached directories/folders.
Allow read/write access to any cached file.
Provide the ability to continue working and save changes to a file
that was opened prior to the disruption. The changes to the file
are transparently updated to the data center file server upon
resumption of connectivity.
Allow creation of new objects (files and directories) through the
branch office node during the disconnected mode.
When WAN connectivity is restored, the new objects are created and
any file updates are transparently applied to the data center file servers. The feature provides corrective actions in case of conflict
scenarios by retaining both versions of the document in conflict and
notifying the administrator via an SNMP trap or e-mail to take the corrective action.
It is important to note that all the above operations can be carried out
transparently on the same network drive that is used to access files
prior to the disconnection. Other solutions can require remapping of
network drives to a local store on the branch office node in their
attempts to provide disconnection capabilities. This is unnecessary in
Brocade WAFS.
Other Architectural Benefits
Single Instance Storage
Brocade WAFS Single Instance Storage technology ensures that file
data is only held locally at the edge—close to branch office users—with
no data mirroring needed at headquarters. Freed from a requirement
to double-size the data center, much larger numbers of branch office
nodes can be deployed against a single node at the corporate data center.
Other WAFS solutions can require byte-for-byte matching mirrored at
the data center, leading to scalability limits and spiraling costs. Brocade WAFS provides the enterprise with flexibility in establishing its
cache storage size, supporting RAID 1 and RAID 5 High Availability (HA)
implementations. Although a WAFS node offers a storage size of
250 GB cache, it actually provides access to all available data center
storage—whether that is a 10-GB server at a single data center or 100
terabytes spread across several data centers.
The branch office node storage cache is self-managing and monitors
persistent RAID-based storage, intelligently providing real-time access
to the active set of data at the branch office.
Transparent Pre-Population
WAFS can ensure that files are ready for users to work on when they
need them by pushing out the file data (“pre-population”) to its peer
nodes at regular intervals, rather than waiting for the data to be
requested by remote users. There is no need for external agents or
applications to perform this function. IT personnel can pre-position
data at the edge—in advance of user requests—to distribute network
traffic more evenly throughout the workday and improve branch office performance.
Edge Office IT Services
With local IT services running on the WAFS Edge node, users at the
branch office will experience better response times to their print,
authorization, and network service requests. Remote office hardware
can be eliminated and IT management and administration can be simplified by consolidating multiple server functions onto a single device.
File services, print services, domain control, and authentication are
examples of functions that can be handled by the Brocade WAFS node.
Figure 46. Using WAFS to consolidate services in a branch office
Print Services
Print services provided by the WAFS Edge node handle printing locally,
without requiring print requests to traverse the WAN. Branch office IT
administration and associated costs can be eliminated by the removal
of dedicated branch office print servers.
Print service features on the Brocade WAFS node include:
Native Windows Point and Print architecture
Active Directory-enabled printer browsing
Simplified print cluster installation and administration
Web-based administration
Domain Controller Services
User logon processes, authentication, and directory searches for Windows domains are handled locally at the branch office with WAFS
Domain Controller services. Using the native functionality available on
Windows Server 2003, it has the following features:
Local processing of authentication/login requests
Support for these services during WAN outages
Network Services
WAFS network services take the headache out of administering
branch office network access while improving end-user performance.
A single WAFS Edge node can host DNS and DHCP server functions—IT
administrators can consolidate critical networking services into a single footprint. Some features include:
DNS caching for branch office name resolution.
Assignment of IP addresses via DHCP.
Web Caching Services
Faster delivery of Web pages is achieved via HTTP object caching, thus
providing rapid and seamless branch office Web access and lower
bandwidth consumption. Built on Microsoft ISA (Internet Security and
Acceleration) server technology, WAFS Web caching services meet the
performance, management, and scalability needs of high-volume
Internet traffic environments with centralized server management. Features include:
Support for secure Web sites accessed via SSL
Easily configured port numbers and cache size
Scheduled cache pre-population
Optimization for “split-tunnel” branch office environments with
direct Internet access
Management Services
Brocade WAFS management services optimize software and application distribution at the branch office. Microsoft Systems Management
Server (SMS) packages can benefit from WAFS with faster package
download and caching at the branch office. The benefits include:
Central software distribution for remote office
Faster download of software packages
No repeat transmission of upgrade packages
Seamless integration with WAFS deployments
Protocol Acceleration
WAFS Protocol Acceleration technologies are designed to overcome
the effects of latency on CIFS, NFS, FTP, HTTP, and other TCP-based
protocols. These protocols were optimized for use across LANs. They
send many acknowledgement requests and responses, and break
data into chunks to transport them on the network. These acknowledgements—which are required to send or receive any data at all—are
negatively impacted by latency. The longer the distance, the higher the
latency—which results in increasingly degraded performance.
Application flows cannot ramp up to take advantage of the available
bandwidth, and the result is longer connection setup times and
underutilized links. Ironically, the more bandwidth is available, the
more performance is wasted because only a small percentage can be
utilized. Applications just sit idle waiting for tiny packets to make
repeated round trips across the WAN, instead of filling up the available
pipeline. Brocade WAFS delivers optimization for TCP and other protocols, which greatly reduces the impact of latency.
TCP Acceleration
TCP is a reliable protocol used for transmission of data over IP networks. However, there are TCP behaviors that work against higher
latency connections. TCP utilizes a sliding window mechanism to limit
the amount of data in flight at any time. When the window becomes
full, the sender stops transmitting until it receives new acknowledgments. Over long-distance networks where acknowledgments are slow
to return, the TCP window size sets a hard limit on the maximum
throughput rate.
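The hard limit described above is simply one window of data per round trip, so maximum throughput is window size divided by RTT. A quick calculation illustrates why a window that is ample on a LAN starves a WAN link (the 64 KB window and RTT figures below are assumed example values, not measurements):

```python
def max_tcp_throughput(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on TCP throughput: at most one window per round trip."""
    return window_bytes / rtt_seconds

# The same 64 KB window on a LAN (1 ms RTT) vs. a WAN (80 ms RTT).
lan = max_tcp_throughput(64 * 1024, 0.001)
wan = max_tcp_throughput(64 * 1024, 0.080)
print(f"LAN: {lan * 8 / 1e6:.0f} Mbit/s, WAN: {wan * 8 / 1e6:.1f} Mbit/s")
```

With an 80 ms round trip, the 64 KB window caps the flow at roughly 6.6 Mbit/s no matter how much bandwidth is provisioned, which is why adding bandwidth alone does not help.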
Figure 47. TCP performance in a LAN vs. a WAN
The WAFS TCP Protocol Acceleration technology reduces the impact of
network latency on the TCP protocol, and increases window sizes to
enable maximum throughput for high-bandwidth, high-latency connections. TCP-based applications, including Web (HTTP) applications,
Microsoft SharePoint, FTP, replication, and backup applications, benefit from TCP Protocol Acceleration. It works by replacing TCP with a
protocol specifically optimized for long delay, high bit error, and asymmetric bandwidth conditions.
Each WAFS node transparently intercepts TCP connections from clients. It provides local acknowledgements to remove the effects of
latency over the WAN and then converts the data to its own WAN-optimized transport protocol for transmission over the network. The WAN-optimized transport protocol uses a much larger TCP window size,
allowing higher throughput independent of the TCP window size of the
end nodes. The WAFS node on the opposite side of the link then translates the data back to TCP for communication with the branch office
user’s computer or the server at the data center. This can have a dramatic impact on application performance. Typical case benefits are
illustrated by the bar graph in Figure 48.
Figure 48. TCP protocol acceleration impact on applications
WAFS TCP Protocol Acceleration establishes a tunnel of optimized TCP
connections between WAFS nodes. As part of application requests and
responses, connections are initiated by branch office clients to data
center servers. A pre-established connection from the tunnel is used
for each connection request, thereby eliminating time spent in the initial handshake to set up TCP connections. The pool of connections
also gives the ability to transfer multiple data blocks simultaneously, rather than using a single connection on which data block transfers must be queued up. This technique
effectively translates to a much larger window size between the client
and server, allowing higher throughput independent of the TCP window
size of the end nodes.
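The pre-established connection pool can be sketched as follows. This is a hypothetical illustration of the technique, not Brocade code: handshakes are paid once when the tunnel comes up, so later client requests borrow an already-open connection instead of waiting a WAN round trip for a new three-way handshake.

```python
from collections import deque

class TunnelPool:
    """Toy model of a tunnel that keeps pre-opened WAN connections."""

    def __init__(self, size: int):
        # All handshakes happen up front, when the tunnel is established.
        self.idle = deque(f"conn-{i}" for i in range(size))
        self.handshakes = size

    def open_connection(self) -> str:
        if self.idle:
            return self.idle.popleft()   # reuse: no new WAN handshake
        self.handshakes += 1             # pool exhausted: fall back to a new one
        return f"conn-{self.handshakes - 1}"

    def release(self, conn: str) -> None:
        self.idle.append(conn)

pool = TunnelPool(size=4)
conns = [pool.open_connection() for _ in range(4)]  # four client requests
assert pool.handshakes == 4   # still only the initial setup cost
```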
The WAN-optimized tunnel also employs transparent WAN retransmission techniques for lost packets in connections, in which selective acknowledgements for lost packets are used to trigger retransmission of TCP packets. This greatly improves on the default semantics of TCP, in which packets following a loss are retransmitted even when only a single packet is lost in transit.
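The selective-repeat idea reduces to a simple set difference: resend only the segments that were never acknowledged. A minimal sketch (illustrative, with made-up sequence numbers):

```python
def packets_to_retransmit(sent: list[int], acked: set[int]) -> list[int]:
    """Selective repeat: resend only segments that were never acknowledged."""
    return [seq for seq in sent if seq not in acked]

sent = list(range(10))
acked = {0, 1, 2, 4, 5, 6, 7, 8, 9}   # segment 3 was lost in transit
assert packets_to_retransmit(sent, acked) == [3]   # one resend, not ten
```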
WAFS Transport Protocol Acceleration
CIFS and NFS suffer from issues similar to other TCP protocols, but they also present unique problems because they employ numerous Remote Procedure Calls (RPCs) for file-sharing operations. Since CIFS and NFS were designed for reliable networks, frequent disconnections and latency were not design considerations; thus, they require special treatment when being used
across a WAN. Some operations in CIFS and NFS are more susceptible
to interruptions than others, while some others require frequent retry
operations. Additionally, the NFS protocol involves a repetitive use of a
file open/close RPC for every file read/write operation.
This design limitation of file-sharing protocols requires optimization
technologies that possess sophisticated application-level intelligence
(operating at Layer 7 in the OSI network stack) to further mitigate WAN
round trips by applying specific heuristics based on a deep understanding of CIFS/NFS protocols.
Instead of CIFS or NFS, Brocade WAFS nodes use a WAFS Transport
Protocol to communicate with each other. While being totally transparent to the CIFS or NFS servers at the data center, this WAN-optimized
file-sharing protocol leverages several different technologies to accelerate files and updates across WAN links.
To remove the transmission overhead of standard protocols, WAFS
uses a technique known as “data streaming,” which dramatically
improves file access times and reduces network overhead. Standard
file-sharing protocols such as NFS and CIFS suffer from poor performance over the WAN because they wait for a unique
acknowledgement for every 4 KB of data sent across the WAN. The
next 4-KB block of data will not be sent across until the acknowledgement for the previous 4-KB block is received. Given the high round-trip
times in a WAN, the end-to-end transfer can spend more time waiting for
acknowledgments than it spends actually transmitting data.
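The cost of the stop-and-wait pattern is easy to quantify: each 4 KB block pays a full round trip. The figures below (10 MB file, 80 ms RTT, 10 Mbit/s link) are assumed example values used only to show the shape of the problem:

```python
def cifs_transfer_seconds(file_bytes: int, block_bytes: int,
                          rtt_s: float, bandwidth_bps: float) -> float:
    """Stop-and-wait: each block costs one round trip plus its send time."""
    blocks = -(-file_bytes // block_bytes)   # ceiling division
    return blocks * (rtt_s + block_bytes / bandwidth_bps)

# 10 MB file in 4 KB blocks over an 80 ms, 10 Mbit/s WAN (assumed figures).
t = cifs_transfer_seconds(10 * 2**20, 4 * 2**10, 0.080, 10e6 / 8)
print(f"{t / 60:.1f} minutes")   # dominated by 2,560 round trips, not bandwidth
```

At these numbers the 2,560 round trips account for about 205 of the 213 seconds; the link itself is busy for only a few seconds.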
Figure 49. CIFS over a WAN without acceleration
In contrast, WAFS opens the file, compresses it, and then streams the
entire file across the WAN without waiting for per-block requests.
Acknowledgements for individual block requests are sent to the CIFS
client at LAN speeds, since all relevant file blocks are readily available
at the local WAFS edge node.
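The streaming approach can be sketched as "compress once, push everything, never wait per block." This is an illustrative sketch using zlib as a stand-in compressor; the chunk size and wire format are assumptions, not the actual WAFS protocol:

```python
import zlib

def stream_file(data: bytes, chunk: int = 64 * 1024):
    """Yield compressed chunks back-to-back; no ack is awaited per chunk."""
    comp = zlib.compressobj()
    for i in range(0, len(data), chunk):
        yield comp.compress(data[i:i + chunk])
    yield comp.flush()

def receive_stream(chunks) -> bytes:
    """Reassemble and decompress the stream on the far side."""
    decomp = zlib.decompressobj()
    out = b"".join(decomp.decompress(c) for c in chunks)
    return out + decomp.flush()

original = b"quarterly report\n" * 50_000
assert receive_stream(stream_file(original)) == original
```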
Figure 50. CIFS over a WAN with acceleration
Several additional technologies are layered into the WAFS Transport
Protocol to further improve performance over the WAN. Read-ahead
caching ensures that in response to a particular file block request, the
branch office WAFS node starts fetching the entire file, anticipating
future file access patterns. This way, data is readily available in the
cache data store, ahead of the read requests.
Data aggregation and I/O clustering techniques guarantee the efficient use of precious WAN bandwidth. Data aggregation ensures that
information regarding file updates made by several users at the
branch office is aggregated into a single message within the WAFS
Transport Protocol. It conserves multiple round trips and packs data
more effectively. I/O clustering allows updates made to various parts
of a file to be gathered over a short period of time and sent as a single
I/O update request with references to the individual changes.
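Aggregation amounts to coalescing many small writes into one WAN message. A rough sketch of the idea (the paths, message shape, and write tuples are hypothetical):

```python
def aggregate_updates(pending_writes):
    """Coalesce (path, offset, data) writes into a single WAN message."""
    message = {}
    for path, offset, data in pending_writes:
        message.setdefault(path, []).append((offset, data))
    return message

# Writes gathered from several users over a short clustering window.
writes = [
    ("/share/plan.doc", 0, b"header"),
    ("/share/plan.doc", 4096, b"body"),
    ("/share/notes.txt", 0, b"todo"),
]
msg = aggregate_updates(writes)
assert len(msg) == 2                    # one WAN message covers both files
assert len(msg["/share/plan.doc"]) == 2
```

Three writes that would each have paid a WAN round trip are shipped as one request, with the individual changes referenced inside it.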
With all the separate technologies incorporated in the WAFS Transport Protocol working in concert, typical users see 10X to 20X improvements in performance across the WAN, and in some cases more than 100X.
Figure 51. Impact of acceleration on file save time
The WAFS Transport Protocol in Brocade WAFS products is designed to
gracefully survive both short interruptions and longer network outages
while insulating the end user and ensuring the security and integrity of
the data. Sophisticated resiliency algorithms, which have been optimized for large file transmission, ensure the integrity of the data.
Data Reduction
Leveraging aggressive compression and optimization technology that
is file-aware and application-intelligent, Brocade WAFS dramatically
reduces the amount of data sent across the WAN. WAFS compression
technologies increase WAN capacity—with as much as 99 percent
reduction in bandwidth—over the same physical links, improving application performance and user response times. WAFS optimization
intelligently utilizes data and file stores held locally at the branch
office to cut back on redundant and unnecessary data transport, while
making the most of existing bandwidth.
Data Compression
The Brocade WAFS solution incorporates two compression techniques
to minimize the amount of physical data sent over the network. These
techniques significantly reduce the amount of data that must be transported, saving both time and WAN bandwidth.
TCP Flow Compression
TCP flow compression reduces data at the flow level. Individual TCP
packets are compressed as they move across the network the first
time. This stateless compression allows for data reduction on readily
compressible data streams, even on the first pass through the nodes
using well-proven algorithms similar to LZ data compression.
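Because the compression is stateless, each packet can be reduced independently on its first pass. A minimal illustration using Python's zlib as a stand-in for the LZ-family algorithm:

```python
import zlib

def compress_packet(payload: bytes) -> bytes:
    """Stateless first-pass reduction: each packet compressed on its own."""
    return zlib.compress(payload)

html = b"<tr><td>row</td></tr>" * 200   # readily compressible markup
packet = compress_packet(html)
ratio = 1 - len(packet) / len(html)
assert ratio > 0.7                      # benefit on the very first transfer
assert zlib.decompress(packet) == html  # lossless round trip
```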
For example, if an HTML document is compressible by 70 percent
using applications such as WinZip, then bandwidth reduction benefits
of 70 percent will be immediately realized when the document is
transferred for the first time through the WAFS nodes. Thereafter, for subsequent transfers of the document, a further level of data reduction is made possible by the dictionary-based compression technology.
Dictionary-Based Compression
Dictionary-based compression reduces data on the wire at the stream
level for all TCP protocols. With this technology, new data streams are
catalogued and assigned a tag on both nodes at either end of a connection. When the sender attempts to transfer the same data stream
again, the much smaller tag is transferred on the WAN instead of the
actual data stream. On the other end of the WAN link, the tag is used
to index the locally held data stream in the dictionary, and the original
data stream is sent locally to the receiver. The result is elimination of
redundant data transfer over WAN links, achieving compression ratios of up to 200:1.
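The tag-for-stream substitution can be sketched as a pair of synchronized dictionaries. This is a hypothetical model (the tag size and wire format are invented for illustration), not the actual WAFS dictionary implementation:

```python
import hashlib

class DictionaryCodec:
    """Toy model: both ends keep a dictionary of streams keyed by digest."""

    def __init__(self):
        self.dictionary = {}   # tag -> data stream

    def encode(self, stream: bytes):
        tag = hashlib.sha256(stream).digest()[:8]
        if tag in self.dictionary:
            return ("tag", tag)              # only 8 bytes cross the WAN
        self.dictionary[tag] = stream
        return ("data", tag, stream)         # first sighting: full stream

    def decode(self, message) -> bytes:
        if message[0] == "tag":
            return self.dictionary[message[1]]
        _, tag, stream = message
        self.dictionary[tag] = stream
        return stream

sender, receiver = DictionaryCodec(), DictionaryCodec()
jpeg = b"\xff\xd8" + b"image-bytes" * 1000
first = sender.encode(jpeg)                  # full transfer, once
assert receiver.decode(first) == jpeg
repeat = sender.encode(jpeg)                 # later request, any protocol
assert repeat == ("tag", first[1])           # only the tag crosses the WAN
assert receiver.decode(repeat) == jpeg
```

Because the dictionary keys on the stream contents rather than the protocol that carried them, the same mechanism yields the cross-protocol benefit described below.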
Among all modules in action for WAFS TCP optimization, the dictionary-based compression module works last in the optimization pipeline for
outbound packet flow (toward the WAN) and first for inbound packet
flow (from the WAN). Since basic LZ-based compression is applied
prior to the WAFS dictionary-based compression, this technology
brings effective data reduction even for non-compressible data types,
such as JPEG images.
Figure 52. Dictionary Compression
Dictionary compression brings two key benefits to WAFS:
First, cross-protocol data reduction is made possible by dictionary
compression’s ability to detect commonalities in data streams
being transferred across various protocols. For example, if a JPEG
image is first requested through HTTP and then later over FTP or
CIFS, then the image is transferred only once over the WAN.
Second, even within a specific protocol, if a document is transferred under different names or with some similar underlying content, data transfer for common content between the documents occurs only once. For example, a Word document can be renamed and saved as a new document, effectively utilizing only a single data transfer for the document in spite of recurring file saves.
Application-Aware Data Reduction
The dictionary-based compression provides a good level of data reduction for most TCP protocols by effectively breaking down data streams
into blocks and providing intelligent caching for the individual data
blocks. For certain application protocols, this benefit can be further
extended through application-specific caching by using in-depth knowledge of application semantics and behavior patterns.
The WAFS nodes use file-aware caching technology for CIFS/NFS protocols, which effectively caches the file in its entirety rather than
caching block-level segments of the file. Because the technology is file-aware, when a file is updated it can move only the changes to the file across the WAN rather than the entire file. This
ensures that the amount of data sent back and forth during the opening and saving of files is a fraction of what would normally be required.
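Moving only the changes reduces to comparing block digests of the cached copy against the edited copy, in the spirit of rsync-style differencing. A rough sketch under assumed parameters (4 KB blocks, whole-file digests held on both sides; not the actual WAFS algorithm):

```python
import hashlib

BLOCK = 4096

def block_digests(data: bytes):
    """Digest each fixed-size block of the file."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old: bytes, new: bytes):
    """Return (index, block) pairs that the far side is missing."""
    old_sums = block_digests(old)
    out = []
    for i, digest in enumerate(block_digests(new)):
        if i >= len(old_sums) or old_sums[i] != digest:
            out.append((i, new[i * BLOCK:(i + 1) * BLOCK]))
    return out

cached = b"A" * BLOCK * 100   # copy already held at the other end
edited = cached[:BLOCK * 50] + b"B" * BLOCK + cached[BLOCK * 51:]
delta = changed_blocks(cached, edited)
assert len(delta) == 1        # 1 of 100 blocks crosses the WAN
assert delta[0][0] == 50
```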
Proxy file locking employed by the WAFS nodes ensures that all file
write and update operations can be safely sent asynchronously to the
data center.
The file writes are logged to disk on the branch office WAFS Edge node, which sends fast local acknowledgments to user applications so they do not have to wait for this action to be completed over the WAN. File-aware optimization also dramatically improves file-opening performance by serving locally held files out of the file-aware cache store.
Even in cases where the file may have been updated at the data center, the distributed file system ensures that the locally held copy is first
differenced and brought up-to-date against the data center copy. The
differencing and updating phase involves moving just the compressed
changes across the WAN—dramatically reducing the amount of data
that needs to be transferred.
MAPI/Microsoft Exchange
Up to 90 percent of Microsoft Exchange storage is in the form of large
e-mail attachments. Without the Brocade WAFS solution, e-mail
access to Microsoft Exchange Servers via the WAN is tediously slow
and presents a major barrier to consolidation. Application-intelligent
optimization for Exchange natively integrates with Microsoft Exchange
and Outlook environments to speed the transmission of e-mail attachments over the WAN by holding and distributing e-mail attachments
from remote nodes. This ensures that e-mail attachments are delivered only once per branch office and that only file differences are
transmitted via the WAN when e-mail attachments are revised and re-sent.
The WAFS Core node in the data center extracts attachments from
users’ mailboxes and stores compressed, encrypted versions of the
attachments on the cache data store. These attachments are then
transferred via the WAFS Transport Protocol to the branch office WAFS
nodes in an efficient manner, resulting in data reduction benefits.
When branch office users open e-mail attachments, the WAFS
Exchange Outlook option installed on clients redirects the requests to
be served locally from the branch WAFS node. The attachments are
served at LAN speeds to branch users, thus providing latency optimizations. Bandwidth reduction benefits are observed through a single-instance transfer and use of the cache data store of the WAFS Edge
node for all recipients of the e-mail attachment.
With WAFS application-intelligent caching of static Web pages, Web
objects are cached on the local WAFS Edge node at the branch office
and delivered through a Web proxy. The branch WAFS Edge node maintains a cache of frequently used Internet objects that can be accessed
by any browser client and that provide the content when subsequent
users request the same data. Objects accessed from the disk cache
require significantly less processing than objects accessed from the
Internet. Using cached content improves client browser performance,
decreases user response time, and reduces bandwidth consumption
on the Internet connection. The cached data is available even when
the Internet server is inaccessible because of a downed Internet connection, or even when the Web server itself is offline.
Deploying WAFS
The following sections discuss how the WAFS solution is deployed
throughout the organization.
At the Data Center
The WAFS Core node is installed at the data center or main site to support its community of branch office WAFS devices—without needing
one-for-one mapping. Matching data capacity is not required since the
unique single instance storage technology does not require mirroring
of the distributed file system data.
At the Branch Office
In a distributed branch topology, in which storage is virtualized, there
are two options for user access:
CIFS: Mapped Drives
Windows PC users can map network drives directly using the explicit
name of the onsite WAFS node. This method has the benefit of providing a consolidated and virtual namespace to aggregate the path
names from servers located at multiple data center locations.
Figure 53. Mapping Drives with WAFS
CIFS: DFS-Based Namespace Transparency
Using Microsoft Distributed File System (DFS), clients use a common
and consistent naming convention—UNC paths—to access storage in
any location across the enterprise. Combining Microsoft and Brocade
WAFS technologies allows centralized management of storage and the
presentation of stored files to users at LAN speeds in a simple, unified,
and consistent manner. Brocade StorageX can be used in conjunction with DFS to enhance the setup and automate failover and rerouting.
In a DFS environment, the data center file servers and the WAFS Edge nodes serve as DFS targets. The CIFS shares from all WAFS Edge nodes are defined as link targets for every share that is meant to be optimized from the data center file server. The DFS server automatically routes CIFS client requests to one of its targets, the data center file server or the branch office WAFS Edge node, based on the network route to the target. For example, a client PC residing in a branch office is redirected to the local WAFS Edge node in the branch office, whereas a client PC in the data center uses direct access to the file server.
Roaming users are transparently routed to the nearest WAFS Edge node, ensuring the fastest file transfer performance. If a WAFS Edge node becomes inoperable or is taken out of service, DFS (and/or StorageX) transparently reroutes the client PC to the nearest WAFS node or to the data center file server. In this manner, a transparent namespace approach removes the need for users to remap drives and supports automatic failover.
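The target-selection logic amounts to "prefer the client's on-site Edge node; fall back to the data center if it is unavailable." A minimal sketch (the site names, UNC paths, and health set are hypothetical examples):

```python
def select_target(client_site: str, targets: dict, healthy: set) -> str:
    """Pick the local Edge node if healthy; else fall back to the data center."""
    local = targets.get(client_site)
    if local and local in healthy:
        return local
    return targets["datacenter"]

targets = {
    "branch-ny": r"\\wafs-ny\share",       # hypothetical Edge node share
    "datacenter": r"\\filer01\share",      # hypothetical data center filer
}
healthy = {r"\\wafs-ny\share", r"\\filer01\share"}
assert select_target("branch-ny", targets, healthy) == r"\\wafs-ny\share"

healthy.discard(r"\\wafs-ny\share")        # Edge node taken out of service
assert select_target("branch-ny", targets, healthy) == r"\\filer01\share"
```

The client keeps using the same UNC path throughout; only the target behind it changes.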
The benefits of transparent namespace for WAFS using Microsoft DFS
and StorageX can also be realized with network-level redirection techniques. To deploy WAFS TCP acceleration, use either the WCCP or PBR
method described below.
Web Cache Communication Protocol
Web Cache Communication Protocol (WCCP) offers seamless integration into networks and transparent Web cache redirection. This on-LAN deployment—not inline—requires router support for WCCP, which enables routing platforms to transparently redirect content requests to the WAFS node. The main benefit of WCCP-based transparent redirection is that users do not have to make any configuration changes to their desktops to leverage the benefits of WAFS optimization.
Figure 54 illustrates the deployment mode for TCP WAN optimization
with WCCP redirection:
Figure 54. WAFS plus WCCP Redirection
Policy-Based Routing
Policy-Based Routing (PBR) uses forwarding policies on routers to redirect traffic that is meant to be optimized to the WAFS node. The node is
configured and installed in a “one-armed” deployment mode with
respect to the router, and is thus not inline. The router must be configured to forward packets based on a router policy using the destination
servers and ports of interest to the WAFS node.
High Availability Deployments
Redundant configurations at the data center and branch offices provide business continuity and compliance by enabling High Availability
(HA) features. WAFS provides HA in several ways:
Node Redundancy
You can configure two WAFS nodes as a redundant cluster, so one
node takes over in case the other node fails. The redundant node synchronizes its state and data with the active node in its idle time,
staying up-to-date in the event it is called upon to take over.
WAFS failover technology keeps individual nodes highly available at
the data center and remote offices. Any critical node can be paired
with another in an active/passive configuration. The system state and
locally held data are mirrored via a private network connection
between the units. When one device fails, the other comes online and transparently takes over—and operations continue without manual intervention.
Site Redundancy
An organization can establish a DR data center at another location as
part of its business continuity strategy. WAFS supports topologies in
which a redundant node can be deployed and configured at the secondary data center, which is ready to take over when disaster strikes.
When the primary data center goes down, normal branch office operations are restored by activating the Core node at the secondary data center; this is transparent to all connected branch office nodes and branch office users. The failover leverages any locally cached data in the branch office nodes and preserves user performance and business continuity.
Hardware Redundancy
Business continuity is ensured with RAID, redundant power supplies, and cooling fans, which are recommended in the WAFS hardware configuration guidelines (and used in the WAFS appliance). In the event of a failure,
these hardware redundancies ensure seamless operation until the
replacement parts are in place.
Flexible Platform Options
Platform Operating Systems
Companies organize networks and choose operating systems (OSs)
and file-sharing protocols to best solve the issues they face. Brocade
realized this from the beginning and designed the WAFS architecture
for platform independence. Trying to “shoehorn” a Linux-based appliance into a Windows-only company or forcing a Windows-based
appliance onto a Solaris and Linux group might cause management headaches.
The WAFS architecture uses a cross-platform code base that runs on
both Windows and Linux. This means that customers can choose the
platform that best fits into their network for seamless integration into
their infrastructure without sacrificing functionality or performance.
Windows Platform
Brocade has taken the necessary steps to provide a scalable and
enterprise-ready solution that is future-proofed against protocol
changes. As protocols such as CIFS and Messaging Application Programming Interface (MAPI) change over time, competitive network
optimizer solutions will be forced to trail the introduction of new OS
releases by months or even years as they reverse-engineer the protocol, if indeed they are able to do so. Since the Brocade WAFS
architecture leverages native protocols and separates acceleration
and optimization from specific versions of these protocols, WAFS will
always remain current as technologies change.
Of all current WAFS and WAN optimization vendors, only Brocade supports all of the native Microsoft Windows features that today’s
organizations need to support their enterprise, as listed and described below.
SMB Packet Signing. Brocade WAFS natively supports SMB Packet
Signing, a technology which ensures that the identity of the file server
matches the credentials expected by the client and vice versa.
Kerberos Authentication. WAFS natively supports both Kerberos and NT LAN Manager (NTLM) authentication modes. As Microsoft phases out NTLM authentication with the Windows NT operating system, the WAFS product will continue to provide compatibility with Windows environments.
Active Directory Integration. To support enterprise best practices,
WAFS Windows-based nodes can fully participate in an Active Directory
infrastructure. In particular, the nodes appear in the domain and can
be managed as members of the domain.
Domain Controller Service. Unique among all of the WAFS and WAN
acceleration vendors, Brocade WAFS can support the installation of
the Active Directory Domain Controller on its nodes. This allows user
login processes, authentication, and directory searches for Windows
domains to be handled locally at the branch office—effectively improving end-user response times.
Active Directory Group Policies. Integration with Active Directory Group
Policies enforces certain security or management policies for a group
of Automated Deployment Services (ADS) objects such as servers.
WAFS Windows-based nodes honor group policies since they are a part
of the Active Directory domain. For example, domain administrators
can restrict or disable local administrative or other user accounts without having to configure each node individually.
Linux Platform
The WAFS solution is available for Linux-based servers to provide the
WAFS and WAN optimization benefits for all TCP protocols. The Linux-based software can provide best-in-class optimization for the NFS protocol through its unique distributed file-system-based proxy
architecture. The Linux implementation has feature sets comparable
to the Windows-based product, with the exception of the benefits specific to the native Windows platform.
WAFS Linux-based software is a good fit for collaborative customer environments in the following industry verticals, in which NFS-based files
are prevalent:
Software development
Engineering/R&D
Architectural Design
Deploying WAFS
Platform Hardware
Many enterprise customers have procurement and support agreements with IBM, HP, Dell, or other server hardware vendors. These
customers may prefer to use their own server hardware and Microsoft
Windows Server 2003 R2 OS licenses and purchase just a WAFS software-only solution from Brocade. By running WAFS software solutions
on your preferred server-class platform, the solution can fit into your
existing procurement processes, support agreements, and hardware
infrastructure. Two software-only options are available, as described in
the following sections.
Software for Qualified Servers: Full Install
Full Install effectively replicates the exact partitioning and configuration of a Brocade WAFS appliance and supports all available features.
Microsoft Windows Server 2003 R2 has to be provided by the customer and preinstalled on the server by Brocade Professional
Services. Currently this product is limited to the following qualified
server-class platforms:
IBM xSeries 336
IBM xSeries 346
HP DL380 G3
HP DL380 G4
Dell PE 1850
Dell PE 2850
Additional platforms are added periodically, and you should contact
your local Brocade sales representative for current information.
Software-Only Distribution: Branch Office Software
Branch Office Software can be installed on any machine that meets
the minimum requirements for disk size, memory, and processor.
Refer to the product documentation for up-to-date requirements. The
following features are not supported in Branch Office Software:
Brocade WAFS Exchange Services (WES) upgrade
Failover (Core or Edge)
Bandwidth throttling
NOTE: Full Install and Branch Office Software are available on CD or
via download from
Chapter Summary
With an average of 75 percent of corporate data now residing outside
a corporate data center, it makes sense that IT groups would want the
option of centrally managing, storing, and protecting important data
that exists at the periphery of their enterprise. Brocade WAFS provides
IT departments with the tools to do just that, while maintaining enterprise-class performance and availability levels at remote offices.
Companies are often disappointed when they do not reap the benefits
they expected from data consolidation and when worker productivity
plummets. On the other hand, IT managers who select solutions that
address latency challenges can keep branch office users traveling “in
the fast lane," while their companies enjoy the cost savings and control that consolidation projects can give them.
A point product may help solve part of the branch-site application performance challenge. However, its limited functionality may mean that
the IT organization still has to manage a range of other devices at the
remote site, from print to Web proxy to DNS servers—all without a dedicated IT person in residence to manage day-to-day operations. What is
called for is an integrated wide area performance solution. Brocade
WAFS is literally a “branch office in a box,” with secure, real-time data
access for remote users. With WAFS, organizations can boost the WAN
performance of applications and file services, servicing thousands of
users on a single server node. This approach enables companies to
meet their WAN optimization challenges with less redundant equipment and without requiring an upgrade to their WAN links.
Brocade MyView
Brocade MyView is discussed at a high level in “Chapter 3: Building
Blocks” starting on page 31. This chapter provides more detailed information about deployment and management in the following sections:
“System Requirements” on page 148
“Namespace Design Considerations” on page 149
“Namespace Implementation” on page 151
“Chapter Summary” on page 152
Persistent news headlines about data theft, new federal regulations,
and accounting irregularities demonstrate that these issues are prevalent today in corporate America. Using MyView to create personalized
namespaces for users can help organizations combat data theft, comply with regulations, and audit internal activities. MyView software
provides a centralized view of user access and authorization to data distributed throughout the enterprise.
MyView enables administrators to view enterprise-wide user access
and authorization from a single, unified console. It allows users to
view and access resources for which the user has been granted permissions throughout the enterprise via a personalized namespace. In
addition, it allows auditors to monitor which users have access to
which resources.
Chapter 7: Brocade MyView
The goal of MyView is to simplify access to resources through
namespace virtualization, personalization, and organization.
Virtualization. Administrators can virtualize access to resources
through the automated creation of a personalized namespace for
every user within the enterprise. A namespace enables users to
access their files without having to know the physical location of the
files. The MyView namespace permanently eliminates drive letter
dependencies, providing administrators with flexibility in expanding,
moving, and reconfiguring the underlying physical storage without
affecting users.
Personalization. An administrator creates a personalized namespace
based on the Active Directory domain authentication model, so that
users have their own “personalized view” that displays only those
resources they have authorization to access.
Organization. Personalized namespaces can be organized by categories such as owner, department, location, and permission—enabling
users to view their resources in the way that makes the most sense to
their individual organization.
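The virtualization, personalization, and organization ideas above can be illustrated with a small sketch. The share paths, group names, and department grouping below are hypothetical examples, not MyView's actual data model; MyView derives the real information from Active Directory.

```python
from collections import defaultdict

# Illustrative resource catalog: share -> (department, authorized groups).
# These names are invented for the example.
RESOURCES = {
    r"\\fs1\engineering": ("Engineering", {"eng-group"}),
    r"\\fs2\finance":     ("Finance",     {"fin-group"}),
    r"\\fs1\public":      ("Common",      {"all-users"}),
}

USER_GROUPS = {
    "alice": {"eng-group", "all-users"},
    "bob":   {"fin-group", "all-users"},
}

def personalized_view(user):
    """Return the user's namespace, organized by department, containing
    only the resources the user is authorized to access."""
    view = defaultdict(list)
    groups = USER_GROUPS.get(user, set())
    for share, (dept, allowed) in RESOURCES.items():
        if groups & allowed:  # user holds at least one authorized group
            view[dept].append(share)
    return dict(view)
```

A user with no matching groups simply sees an empty namespace, which is the essence of the "personalized view": resources the user cannot access are never displayed.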
For systems administrators, MyView provides the following benefits:
Simplifies user administration, thereby lowering the number of
support requests
Increases data control by protecting and preserving enterprise
security rules and policies
Increases infrastructure control through accurate documentation
of the security model
For users, MyView provides the following benefits:
Enables sharing of data
Eliminates drive letter issues
Simplifies access to data
Provides customized views of resources
Provides a single, logical view of the enterprise
Brocade MyView is not itself a complete standalone namespace product. Rather, it leverages the Microsoft Distributed File System (DFS) namespace (in Windows Server 2003). DFS is a storage service that can help
solve many complex problems in the Windows environment. While
MyView uses DFS, it also addresses some of its limitations.
For example, MyView can quickly create namespaces with more links
than the limits imposed by a single DFS-based root. When necessary, it
cascades the namespace across multiple roots. It consolidates links to
minimize the number of servers needed to host roots. It can also prepare the namespace for disaster recovery integration by managing the
number of targets referenced by links in a namespace, to reduce the
time that it takes to fail over during recovery.
The MyView desktop, shown in Figure 55, is where the administrator
manages personalized namespaces.
Figure 55. MyView Desktop
From the MyView console, administrators can quickly drill down into
four areas: Configuration, Namespace, Users, and Resources, listed in
the left pane of the window.
System Requirements
Before you install Brocade MyView to build a Personalized Namespace (PN), make sure you review the following checklists of installation requirements.
MyView Server and Client Requirements
The following are server requirements for Brocade MyView:
Windows XP or Windows 2003 or later
Windows Script Host 5.6 or later
The following are client requirements for Brocade MyView:
Windows 2000 SP3 or later
Microsoft .NET Framework 1.1
Internet Explorer 5.5 or later
MyView Database Requirements
The following are database requirements for Brocade MyView:
Microsoft SQL Server 2000 Desktop Engine (MSDE) SP3 or later
Microsoft SQL Server 2000 Standard or Enterprise with SP3
Microsoft Data Access Components (MDAC) 2.60 or later
MyView Disk Space Requirements
You should have enough available disk space for any of the following
components you want to install:
Brocade MyView self-extracting file: 35 MB
Client only: 55 MB
Server only: 55 MB
Client and Server: 60 MB
Server and MSDE: 150 MB
Client, Server, and MSDE: 150 MB
Namespace Design Considerations
To achieve the benefits of namespace virtualization, consider layout
and design. Well-architected namespaces are designed around an
organizing principle such as geographical locations, functional areas,
or hierarchical departments. This section discusses considerations for
designing a namespace to ensure that it simplifies management
rather than adding a layer of unnecessary complexity.
Reduce Complexity
A single namespace can be designed with a great deal of complexity.
For example, one namespace can provide many different paths to the
same data. However, you can also create multiple namespaces to
serve the needs of different audiences. For example, you could use different namespaces for Web content, software distribution, and so on.
You may find that in the long run it is easier to manage multiple
namespaces than to create a single extremely complex namespace.
Consider Reliability
Consider reliability requirements during the logical namespace design
process. It is best practice to use domain-based roots as the entry
point, because DFS replicates the configuration of the namespace
(creates root targets) to allow it to be available from multiple servers.
This eliminates the dependence on a single server and prevents the namespace from being a single point of failure.
Cascaded Namespaces
Windows Server 2003 supports creating links that point to other DFS
namespaces. This kind of link is often referred to as an “interlink” or
“cascaded namespace.” Instead of specifying a shared folder as a link
target, you can specify a DFS root or link within another namespace,
allowing you to create a hierarchy of namespaces, or a cascaded DFS.
Cascaded namespaces are often used to delegate administration
more easily. If multiple groups within an organization want to manage
their own namespaces, they can do so and still present a single cascaded DFS namespace to their users.
To increase DFS scalability, organizations can combine the availability
benefits of domain-based DFS namespaces with the scalability of
standalone DFS namespaces. For example, if you need to create
10,000 links, but do not want to divide these between two domain-based DFS namespaces, you could take the following steps:
Create a standalone DFS namespace with 10,000 links.
Create a domain-based DFS root.
Under the domain-based DFS root, create a link that points to the
standalone DFS namespace.
When linking to other namespaces, follow these guidelines to make
certain that clients can be redirected properly if a target is unavailable:
If you plan to specify a domain-based DFS namespace as a link target (either the root or a link within that namespace), you cannot specify alternate link targets. (Windows Server 2003 enforces this restriction.)
If you plan to specify a standalone DFS namespace as a link target
(either the root or a link within that namespace), you can specify
alternate link targets to other standalone DFS paths. Do not specify domain-based DFS roots or shared folders as alternate targets.
IMPORTANT: The DFS tools do not prohibit you from specifying domain-based DFS roots or shared folders as alternate targets. Although you will be allowed to do this, the resulting configuration will not function correctly.
When linking to other namespaces, review the following restrictions:
A DFS path can consist of no more than eight hops through other
DFS namespaces.
Clients running Windows 98 might not correctly access links pointing to other DFS namespaces. Windows 98-based clients can only
access the following types of links to other namespaces:
A link in a standalone DFS namespace that points to a standalone DFS root or link.
A link in a domain-based DFS namespace that points to a
standalone DFS root. This technique works only if the client
has the latest Active Directory client installed, as described in
Knowledge Base article 323466, “Availability of the Directory
Services Client Update for Windows 95 and Windows 98.”
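The alternate-target guidelines above lend themselves to a simple validity check. The following sketch encodes just the two rules stated here; the function name and "kind" labels are illustrative, and as the IMPORTANT note explains, the real DFS tools do not perform this check for you.

```python
def validate_interlink(primary_kind, alternate_kinds):
    """Check a link-to-namespace configuration against the alternate-target
    guidelines above. Kinds: 'domain' (domain-based DFS namespace),
    'standalone' (standalone DFS namespace), 'share' (shared folder).
    Returns a list of problems; an empty list means the layout is valid."""
    problems = []
    if primary_kind == "domain" and alternate_kinds:
        # Rule 1: domain-based namespace targets cannot have alternates.
        problems.append("domain-based link targets cannot have alternate targets")
    if primary_kind == "standalone":
        # Rule 2: alternates must themselves be standalone DFS paths.
        for kind in alternate_kinds:
            if kind != "standalone":
                problems.append(
                    "alternate target must be a standalone DFS path, not a %s" % kind)
    return problems
```

Running such a check before publishing a namespace catches configurations that the tools would accept but that would fail at redirection time.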
DFS Namespace Sizing
Sometimes it is necessary to estimate the size of each root and link in
the DFS Active Directory object. Each element in the namespace (root
name, link name, target path, and comments) takes up space in the
DFS Active Directory object. The amount of space can vary depending
on the number of characters used in each element.
The following formula assumes that each element is 10 characters:
Root: Approximately 300 bytes.
Each root target: Approximately 150 bytes.
Each link in the root: Approximately 320 bytes.
Each link target: Approximately 120 bytes.
For example, if you create a root (300 bytes) with three root targets
(150 bytes × 3) and then add 100 links (100 × 320 bytes) with each
link having three link targets (100 × 120 bytes × 3), the DFS Active
Directory object will be approximately 300 + (150 × 3) + (100 × 320) + (100 × 120 × 3) = 68,750 bytes, or about 67 KB, in size.
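As a quick sanity check, the arithmetic above can be expressed as a short calculation. The per-element byte figures are the approximations quoted in the text and assume roughly 10-character element names.

```python
# Approximate sizes, in bytes, of namespace elements in the DFS
# Active Directory object (assuming ~10-character elements).
ROOT_BYTES, ROOT_TARGET_BYTES = 300, 150
LINK_BYTES, LINK_TARGET_BYTES = 320, 120

def dfs_object_bytes(root_targets, links, targets_per_link):
    """Estimate the DFS Active Directory object size for a namespace."""
    return (ROOT_BYTES
            + ROOT_TARGET_BYTES * root_targets
            + LINK_BYTES * links
            + LINK_TARGET_BYTES * links * targets_per_link)

size = dfs_object_bytes(root_targets=3, links=100, targets_per_link=3)
print(size, "bytes, about", round(size / 1024), "KB")  # 68750 bytes, about 67 KB
```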
Namespace Implementation
When creating a namespace, follow a rollout procedure similar to the one outlined in the following steps.
Identify the domains that contain users
Identify the Active Directory domains that contain users who will
access their data through the namespace.
Decide on the root configuration.
Decide whether to use a standalone or domain-based root as the
entry point root to your namespace.
Decide on the organization of the namespace
Identify the shares and folders that you want to associate with the
users and groups you specified previously.
Calculate and publish the namespace
Calculate the namespace and review how the namespace will look. Make any necessary changes and then publish the namespace.
Update login scripts or Active Directory
MyView provides example login scripts. To run these, check that the system is running Windows 2000 or later. In addition, the client machine running the login VBScript or JScript must be running Windows Script Host 5.6 or later.
Chapter Summary
Brocade MyView creates personalized namespaces for users to help
organizations combat data theft, comply with federal regulations, and
audit internal activities. MyView enables administrators to view enterprise-wide user access and authorization from a single, unified
console. It allows users to view and access resources and allows
auditors to monitor which users have access to which resources.
MyView simplifies access to corporate resources through namespace
virtualization, personalization, and organization.
Well-architected namespaces are designed around an organizing principle such as geographical locations, functional areas, or hierarchical departments. You need to ensure that the design of the namespace
simplifies management rather than adding a layer of unnecessary
complexity, while taking into account reliability and scalability. When
creating a namespace, you need to identify the domains that contain
users, decide on the root configuration, and then decide on the organization of the namespace.
Brocade FLM
Brocade File Lifecycle Manager (FLM) is discussed at a high level in
“Chapter 3: Building Blocks” starting on page 31. This chapter provides more detailed information about the product architecture and
real-world advice on deployment and management, in the following sections:
“Effect of Deploying FLM” on page 155
“FLM In Action” on page 155
“FLM Policies” on page 158
“FLM and Backups” on page 159
“FLM Deployment Tips and Tricks” on page 161
“Chapter Summary” on page 182
According to Computer Technology Review, the world is currently seeing “explosive growth” in enterprise data, especially unstructured file
data. Over the past few years, businesses have experienced unprecedented growth in file data—as well as in the importance of that data—
and this trend is not showing signs of slowing. As data volume grows, it
becomes more costly to store the data and more complicated to manage it.
If corporations and governmental organizations were actively using
this data, then it might be worth the expense of maintaining it on top-tier storage platforms, which ensure the highest level of performance
and availability. However, most often a very large percentage, say 80
percent, of all stored files are not being used actively. In fact, most
have not been accessed in the last 30 days, as illustrated in Figure 56.
Storing this inactive data on primary storage devices is expensive, inefficient, and no longer necessary in Network Appliance (NetApp)
environments thanks to FLM.
Chapter 8: Brocade FLM
Figure 56. Inactive vs. active data
According to the Storage Networking Industry Association, Information
Lifecycle Management (ILM) is “the policies, processes, practices, services and tools used to align the business value of information with the
most appropriate and cost-effective infrastructure from the time information is created through its final disposition.” FLM exemplifies a
comprehensive, easy-to-use, file-optimized implementation of the ILM concept.
FLM is a data management solution that manages the lifecycle of file
data on NetApp filers, including the migration of inactive data to a variety of secondary storage systems. It provides the following key capabilities:
FLM bridges the gap between primary and secondary storage.
FLM uses automated policies to classify and manage file data.
FLM remains transparent to users and non-disruptive to applications.
FLM simplifies compliance with information retention and maintenance regulations.
FLM is highly scalable, enabling administrators to manage multiple
FLM servers from a single console.
Effect of Deploying FLM
The positive impact of deploying FLM in a typical organization cannot
easily be overstated. Figure 57 shows the utilization of primary storage
before and after running FLM in a representative mid- to large-sized organization.
Figure 57. Impact on file storage utilization of deploying Brocade FLM
FLM In Action
FLM accomplishes this by performing three principal activities:
1. Classifies data
2. Moves inactive data to secondary storage
3. Transparently restores data to primary storage on demand
The first two activities are straightforward: Administrators set policies
that define the data to be moved to secondary storage. FLM examines
the target file systems and locates matching files. These files are then
relocated to secondary storage, and small placeholder metafiles are
left behind to indicate the new location of the files that were moved.
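The classify-and-migrate flow can be sketched in a few lines. This is an illustration only: the 30-day age policy, the flat directory walk, and the JSON placeholder format are stand-ins for FLM's real policy engine and metafile format.

```python
import json
import os
import shutil
import time

AGE_LIMIT_SECONDS = 30 * 24 * 3600  # illustrative policy: idle for 30 days

def migrate_inactive(primary_dir, secondary_dir, now=None):
    """Move files whose last-access time exceeds the policy age from
    primary_dir to secondary_dir, leaving a small placeholder metafile
    behind that records the file's new location."""
    now = time.time() if now is None else now
    for name in os.listdir(primary_dir):
        path = os.path.join(primary_dir, name)
        if not os.path.isfile(path) or name.endswith(".meta"):
            continue
        if now - os.stat(path).st_atime > AGE_LIMIT_SECONDS:
            dest = os.path.join(secondary_dir, name)
            shutil.move(path, dest)  # relocate to secondary storage
            with open(path + ".meta", "w") as f:
                json.dump({"moved_to": dest}, f)  # placeholder metafile
```

The placeholder is all that remains on primary storage; a later access reads it to find and fetch the real file contents from the secondary location.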
To understand how the transparent restore operation works, refer to
Figure 58. This illustration shows a user attempting to access a file on
the primary storage after that file has been moved to secondary storage. The storage device “sees” the placeholder metafile and fetches
the actual file contents from its secondary storage location. This takes
longer than if the file had been on the primary storage, but keep in
mind that inactive data does not get accessed very often.
Figure 58. How FLM works
From the perspective of the end user, this process is truly transparent.
When users look for a file in a directory on the primary storage device,
they will see the file name that they are looking for in the place they
expect to find it. If they try to open the file, it will be delivered to them
in just the same way that they were expecting, albeit somewhat slower
than if the file had actually been in the primary storage location.
Figure 59 shows the way a user sees a directory:
Before FLM migrated inactive files to secondary storage (top)
After the migration (middle)
After one file has been restored due to user request (bottom)
Figure 59. User view of relocated files
FLM Policies
At the heart of Brocade FLM is a powerful policy creation, management, and execution engine, which is used to simulate what will
happen before executing a “live” policy change, to migrate data from
primary to secondary storage, and to configure the way in which FLM
handles restoration requests. Using FLM, you can configure automatic
deletion of files that have reached the end of their useful existence.
Figure 60 shows the top-level architecture of the FLM policy engine.
FLM policies include a number of powerful administrator-configurable parameters that optimize behavior. For example, it is possible to block excessive restores in scenarios where doing so could be useful, or even critical. Consider a user who executes a find operation that traverses a large portion of a storage device directory structure and looks inside files for matching criteria. This action would potentially cause most or even all files to be restored from secondary to primary storage. FLM can be configured to monitor this type of behavior and avoid making permanent restorations of large volumes of data in a short period of time.
Figure 60. FLM Policy Engine
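One way to picture the excessive-restore protection is as a sliding-window rate guard. The class name, threshold, and window values here are invented for the example; FLM's actual parameters and behavior are configured through its policy engine.

```python
import time
from collections import deque

class RestoreGovernor:
    """Sliding-window guard: if more than max_restores restore requests
    arrive within window_sec, deny permanent re-promotion to primary
    storage (the file can still be served from secondary storage)."""

    def __init__(self, max_restores=100, window_sec=60.0):
        self.max_restores = max_restores
        self.window_sec = window_sec
        self.events = deque()

    def allow_permanent_restore(self, now=None):
        now = time.time() if now is None else now
        self.events.append(now)
        # Drop request timestamps that have fallen out of the window.
        while self.events and now - self.events[0] > self.window_sec:
            self.events.popleft()
        return len(self.events) <= self.max_restores
```

A find operation sweeping the directory tree would quickly exceed the threshold, after which restores are no longer made permanent and the burst of traffic subsides on its own.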
FLM and Backups
A primary function of Brocade FLM is to optimize primary storage utilization, but it also has a positive impact on backups: fewer tapes are needed in each backup cycle, so fewer tapes need to be purchased. Figure 61 shows the before-and-after picture.
Figure 61. FLM impact on backups
Deploying FLM means that there is no need to repeatedly back up
inactive files. Backup software traverses the primary storage directory
structure as usual, but FLM ensures that only the metafiles are delivered to the backup server when it requests inactive files. Since
metafiles are much smaller than the corresponding real files, backups
not only use fewer tapes, but are also completed in much less time.
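Rough arithmetic shows why the savings can be large. The file count, average file size, 80 percent inactive ratio, and ~8 KB metafile size below are illustrative assumptions for the sketch, not measured figures.

```python
# Illustrative backup-volume estimate before and after FLM migration.
files = 1_000_000
avg_file_kb = 256          # assumed average file size
inactive_ratio = 0.80      # assumed share of inactive files
metafile_kb = 8            # approximate metafile size on NetApp filers

before_gb = files * avg_file_kb / 1024 / 1024
after_gb = (files * (1 - inactive_ratio) * avg_file_kb
            + files * inactive_ratio * metafile_kb) / 1024 / 1024
print(round(before_gb), "GB ->", round(after_gb), "GB per full backup")
```

Under these assumptions a full backup shrinks from roughly 244 GB to roughly 55 GB, which translates directly into fewer tapes and a shorter backup window.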
To maintain high availability of data, organizations can still choose to
back up the migrated data to tape. The frequency of backups for
migrated data can be much lower than for active data and still guarantee full availability.
The following example explains the process.
1. An IT group performs full weekly backups of their active storage
and keeps five full backup tapes.
2. On January 1, IT does a full backup before FLM does a migration.
3. On January 21, IT performs the migration using FLM, and File A is
migrated to the R200 secondary storage.
4. Subsequent full weekly backups of the active storage contain
Metafile A and not File A.
5. On January 29, the day IT rotates the oldest tape containing the
full backup, they must perform a full backup of the offline store on
the R200, where the migrated File A is located.
6. On February 5, the full backup overwrites the full backup from January 1, leaving no copy of File A on tape.
7. On February 6, migrated File A becomes unavailable (perhaps it becomes corrupt or is deleted from secondary storage, or the secondary storage device goes down).
8. Here is the recovery process: a copy of migrated File A can be
obtained from the January 29 tape, which contains the full backup
of the secondary storage.
NOTE: If the entire secondary storage system goes down, File A can be
obtained by retrieving the Metafile A and migrated File A from two
tapes: the last full backup tape of the secondary storage, which contains migrated File A, and the oldest available full weekly backup
associated with the primary storage, which contains Metafile A.
FLM Deployment Tips and Tricks
When to Deploy FLM
Brocade FLM is often considered for use in configurations in which a
storage device is almost out of space on one or more volumes. If the
file system on the volumes is badly fragmented, as is often the case
when the volume has been running at 99 percent capacity for a significant period of time, this can place an undue performance burden on
the storage device. Introducing FLM before addressing the fragmentation problems can adversely affect storage device performance. In this case, you should consult with Brocade and your Network Appliance representative about options to provide more space for problem volumes and to defragment them if needed.
In addition to disk utilization (in terms of both performance and capacity), CPU utilization should be measured prior to implementing FLM.
Storage devices running between 80 and 100 percent CPU utilization
in single-processor systems may be significantly impacted by implementing FLM without the offline attribute support. Analyze
multiprocessor systems using tools such as “perfstat” or “statit” to get
a more accurate picture of CPU utilization. Again, consult Network
Appliance to ensure that CPU resources are adequate for multiprocessor systems.
FLM is not a good fit for migrating data that users can select to make
available when offline. When users dock or undock their laptops, Windows will trigger restores of any files that FLM has migrated and that users keep an offline copy of on their laptops.
General Tips
Brocade FLM should be hosted on a minimum 2 GHz dual-processor server with 2 GB of memory. In addition, it is recommended that you use Gigabit Ethernet links between the FLM server and the active storage, and between the FLM server and the secondary storage.
To avoid undesired changes to the secondary storage by applications
or users, Brocade recommends the following configuration options:
Make the secondary/offline share a hidden share. For environments using DFS to reference secondary/offline locations, create
a hidden DFS root and place links for the offline shares there.
Set the security of the secondary/offline share to allow access by
only the FLM service account with Full Control. Alternatively, limit
the share to the local administrator group on the storage device/
server where the offline share resides, provided that the FLM service account is a member of that group.
If you are using CIFS auditing to increase scrutiny over changes to
the offline store, then set auditing on the secondary/offline store and not the primary store, as this has a lesser impact on the environment. Note that if the secondary/offline store is on a filer, then
the CIFS auditing has to be set from a Windows client via the
Advanced Security Properties tab for that folder.
Consider using the offline pooling feature introduced in FLM 3.0.
This feature enables files on the offline store to be distributed
across multiple qtrees or volumes within a single migration rule.
This can be helpful if SLAs control the size of volumes or qtrees in
your environment as a means of limiting the time needed to back
up or restore these items.
Brocade FLM uses a Microsoft SQL Server database to keep track of
configuration information and file locations. It is acceptable to either
have the SQL Server running on the same machine as the FLM server
or to have them on separate machines. However, management is simplified if SQL Server and FLM are located on the same machine. If FLM
and SQL Server are installed on separate systems, consider the connectivity between the machines and coordinated management. Any
changes to the SQL Server installation must take into consideration
the potential effects on the FLM configuration (reboots, stop/start of
the database, and so on).
Performance and Scalability Tips
Theoretically, one instance of FLM can serve any number of active
storage devices. In practice, however, for best performance it is recommended that one instance of FLM support no more than three active storage
devices (two maximum if there are more than 3,000 to 4,000 users
accessing each of those storage devices). If the storage devices are
not heavily loaded, you can increase the number of storage devices
per FLM server. With FLM 3.0 and Data ONTAP 7.0.1 or later, you should be able to go beyond these numbers if you use the offline attribute. The
offline attribute dramatically decreases the load placed on both the
storage device and the FLM server.
NOTE: You cannot use multiple FLM servers for a single active storage device in a load-balancing configuration. A given active store
must have one, and only one, instance of FLM supporting it.
The FLM server is contacted only when a file is opened, created,
renamed, or deleted on the active storage. Considering that the majority of the I/O activity on a storage device relates to read and write calls,
FLM does not appear often in the data path. Also, when FLM receives
a file open request, it does not contact its local database in order to check the file's properties. When used with Data ONTAP 7.0.1 or
greater, FLM 3.0 offers the option of being notified only for client
access to files that have the offline attribute set. This significantly lessens the traffic from the storage device to the FLM server and means
file opens occur faster in the case of non-migrated files. The tradeoff
of enabling this option, however, is disabling the FLM file blocking and
the automatic remigration features.
Another dimension to performance is the number of files that are
being migrated or restored, since these events involve the FLM server.
By tuning the migration criteria in FLM policies to accommodate file
access patterns you can minimize the number of files that are repeatedly migrated and restored. Also, schedule migration to occur during
off-peak hours so as to not cause network bottlenecks.
Another thing to consider when designing an FLM configuration is connectivity between the FLM server and storage devices. It is
recommended that a gigabit link be added to the storage device to
support the migration and restore activities of FLM. A gigabit subnet
can be used for the FLM server, active store, and the secondary storage system hosting the offline store(s). If this is done, it is
recommended that the FLM server be multi-homed such that it is
accessible from the production network. This allows the FLM server to
be easily administered should the need arise and enables the use of
Introducing File Area Networks
Chapter 8: Brocade FLM
DFS for secondary store failover. A separate subnet may not be
needed if an extra gigabit port is trunked with the existing links to the
production network. For capacity planning, FLM data transfers for
migration or restore operations typically range from 10 to 40 MB/sec.
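As a rough capacity-planning aid, the throughput range above translates directly into migration-window estimates. The following is an illustrative Python sketch; the 500 GB figure is an assumed example, not taken from the text.

```python
def migration_window_hours(data_gb: float, rate_mb_per_sec: float) -> float:
    """Wall-clock hours to move data_gb at a sustained transfer rate."""
    megabytes = data_gb * 1024
    return megabytes / rate_mb_per_sec / 3600

# A hypothetical 500 GB initial migration at the low and high ends
# of the 10 to 40 MB/sec range quoted above:
print(round(migration_window_hours(500, 10), 1))  # 14.2 (hours, worst case)
print(round(migration_window_hours(500, 40), 1))  # 3.6 (hours, best case)
```

A calculation like this helps decide whether the initial migration fits into a single off-peak window or must be staged across several nights.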
With regard to filer performance, NetApp Data ONTAP fPolicy can cause
a performance impact on the active store depending on the kind of
activity on the filer. Typically, this performance impact is small. However, fPolicy requests are sent to FLM 2.1 whenever a user or
application opens, creates, or renames any file, regardless of whether
the file has been migrated, unless the offline attribute is used. Therefore,
when filers have very high CPU utilization and also have a large number of file open, create, or rename operations, filer CPU impact should
be considered when you are designing the complete solution. Note
that neither fPolicy nor FLM is inserted into the CIFS read and
write operations that may be occurring on the filer; thus, these operations are not impacted.
With FLM 3.0 and Data ONTAP 7.0.1 or later, the offline attribute can
change the above behavior. FLM 3.0 with Data ONTAP 7.0.1+ also enables
support for NFS. NFS read and write operations are intercepted if the
file was migrated to secondary storage. NFS read and write operations on non-migrated files are not affected.
File size also plays a role in performance. FLM generally provides
greater benefits when managing larger files. FLM adds an alternate data
stream to the source file and truncates it to one block for
the stub. The resulting file is two blocks plus an additional inode. On
NetApp filers, this represents approximately 8 KB. The recommended
minimum file size for migrations therefore is 12 KB.
You should analyze the mix of file sizes to ensure that an adequate
number of files are above the minimum. If the file sizes are between
8 KB and 12 KB, careful analysis should be performed to determine
the disk space savings.
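The arithmetic behind this guidance can be sketched as follows. This is an illustrative Python estimate that assumes the approximate 8 KB stub footprint described above; the sample file mix is made up.

```python
STUB_KB = 8       # approximate metafile footprint on a NetApp filer (see above)
MIN_SIZE_KB = 12  # recommended minimum file size for migration

def net_savings_kb(file_sizes_kb):
    """Space reclaimed by migrating each qualifying file: the file shrinks
    to an ~8 KB stub, so only files at or above 12 KB are worth migrating."""
    return sum(size - STUB_KB for size in file_sizes_kb if size >= MIN_SIZE_KB)

# Illustrative mix: only the 12 KB and 100 KB files qualify.
print(net_savings_kb([4, 10, 12, 100]))  # 96 (KB reclaimed)
```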
Database growth can have an impact on FLM scalability. The impact
will be determined by factors such as the record retention settings configured in the FLM GUI for “restore file details retention,” “deleted file
details retention,” “statistics retention,” and “events retention,” as well as
the amount of migration, deletion, and restoration activity taking place
in the FLM installation. Depending on the statistics and grooming
parameters configured by the administrator, FLM can store records for
three to four million migrated files using the MSDE database, which
grows linearly (for example, if there are 10 million migrated files, the
database will need to be 5 GB to 7 GB).
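A linear extrapolation from those figures can be sketched as follows. The per-record sizing is derived from the 10-million-file example above; actual growth depends on the retention and grooming settings, and the 2 GB comparison assumes MSDE's well-known database size limit.

```python
def db_size_range_gb(migrated_files: int) -> tuple:
    """Extrapolate database size from the 5-7 GB per 10 million files figure."""
    low = migrated_files * 5 / 10_000_000
    high = migrated_files * 7 / 10_000_000
    return low, high

print(db_size_range_gb(10_000_000))  # (5.0, 7.0), matching the example above
print(db_size_range_gb(4_000_000))   # (2.0, 2.8): roughly where MSDE tops out,
                                     # consistent with the 3-4 million file guidance
```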
FLM Deployment Tips and Tricks
FLM is not dependent on the database being available in order to
restore migrated CIFS files to the active storage device. Once configured, FLM can restore files using the information from the active store
and the offline store, even if the SQL Server database is unavailable.
In the case of NFS data, the database is required to be operational in
order to restore migrated files.
NOTE: In environments where performance is a concern, implement
SQL Server on a separate machine.
Any time files are deleted or truncated on primary storage, the opportunity for fragmentation exists. If new files are larger than the
freed contiguous blocks now available, they will likely be placed in
multiple locations on the file system. Therefore, third-party utilities
may need to be run on the primary storage to defragment the
file system and improve performance.
Data ONTAP has commands such as “wafl scan” in pre-DOT 7.0 and
“reallocate” in DOT 7.0+, which can assist in defragmenting a volume.
However, consult a Network Appliance engineer for guidance in using
these commands.
In some environments, rules may aggressively move user applications
(or files with the *.exe and *.dll extensions). The time required to analyze and/or restore those files may impact the responsiveness of that
application. Since many users may be sharing access to these application binaries, excluding *.exe and *.dll files from FLM/fPolicy may
increase performance.
If there is a gigabit link between the active store and the system hosting the offline stores, it does not matter if the active store and the
secondary system are in separate locations. However, if there is considerable distance between the active store and the secondary
storage, then latency can cause significant performance degradation.
Since the FLM server is contacted regularly by the primary storage, the
FLM server should always be located near the primary storage. If the
secondary storage is in a remote location, it is important that the rules
be tuned such that file restores occur infrequently to achieve optimum
performance.
Setting the Offline Attribute
FLM 3.0 with Data ONTAP 7.0.1 or later offers the option of only receiving
fpolicy notifications from the filer in response to client access to files
whose offline attribute is set. This means that opens of normal non-migrated files will proceed straight through without triggering an fpolicy notification to the FLM server. Client access to non-migrated files is
faster and the load imposed on the filer by fpolicy is reduced. The
tradeoff is that file blocking and automatic remigration features are
disabled when offline notification is enabled.
NOTE: In mixed-protocol environments, in which FLM is handling both
CIFS and NFS data, use of the offline notification is required.
Setting the offline attribute on migrated files has additional benefits in
that this signifies to client applications that there may be a delay
before the file can be opened. It also makes Windows Explorer less
aggressive in opening or searching these files.
Auto-Remigration and Auto-Exclusion
These features help in dealing with unintended restore operations and
tuning migration criteria. Unintended restores may occur if users perform full-text searches over directories containing migrated files.
Rather than letting files that were opened only during a search
linger on the active store until the next time the migration policy runs,
the auto-remigration feature sends these files back to the offline store
more quickly. Since the data for such files typically still resides on the
offline store at the time that the automatic remigration runs, it simply
re-references the copy of the file already residing on the offline store,
rather than having to copy the entire contents of the file back to the
offline store.
Auto-exclusion helps in automatically tuning migration rules. If FLM
sees files that are frequently going back and forth between the active
store and the offline store because of the mix of migration and client
access operations, FLM automatically tunes the migration rule to
exclude the problem files from future migrations. This allows these frequently used files to remain on the active store for faster client access.
Security Tips
Where the active store and secondary storage systems are running in
different domains, the account running the FLM service will need to be
trusted and privileged in all the domains. In addition, the domain
account used by the FLM service needs to be a member of the local
administrators group on each of the following systems:
The primary storage
The secondary storage hosting the offline stores
The machine hosting the FLM service
The system hosting an external SQL Server database being used
for FLM, in installations where FLM is configured to use SQL Server
rather than MSDE
Permissions on the offline stores (shares) can be restricted so that
only administrators have access. In addition, these shares can be hidden to further protect them from unintended access. This does not
affect user-level access to the metafile on the primary storage, as long
as the account used for the FLM server service still has full access to
the secondary storage. There is no need for the users to have access
to the secondary store.
To avoid recalling files when modifying metafile ACLs, either run the
ACL change from the system hosting the FLM server or use the FLM
restore blocking features.
NOTE: Modifying ACLs requires opening a handle to the file, meaning
that changes to ACLs made from command-line tools or Windows
Explorer will cause FLM to restore files unless they are prevented by
use of FLM restore blocking capabilities.
Implementation and Management
At this time, when a client requests a file that has been migrated, the
client must wait until the entire file has been restored. Where possible,
customers should study their workflow so that large files are restored
in advance of need.
Occasionally, a storage device may run out of inodes when FLM is
implemented. When FLM migrates a file, it adds a named alternate
data stream to the file, which contains the metadata required to
restore the file. The extra alternate data stream consumes an extra
inode in the primary storage file system. When the primary storage
runs out of inodes for a file system, it returns a “disk full” message to
applications that attempt to add a file (or alternate data stream) to the
file system. Disk-full conditions can also be reported in cases where
Data ONTAP quotas are restricting the account running the FLM server.
On NetApp filers, the number of inodes available on a file system can
be increased by changing the maxfiles value for the volume/file system. Refer to the NetApp filer product documentation or contact a
Network Appliance representative for details.
In general, it is recommended that the database be backed up on a
daily basis. It is possible to back up the FLM configuration without suspending FLM operations. The flmbackup and flmrestore utilities
accept command-line arguments specifying the location where the
database resides.
This functionality is described in documents available for download.
Whether or not FLM Manager can be run on a scheduled basis
depends on the administrator’s concern about orphaned files in the
secondary storage. It also depends on how users delete metafiles. For
example, if a user selects a file in Windows Explorer and then presses
the Delete key (or selects Delete from the File menu), Explorer opens
the file as soon as the user selects it. This means that the file is
restored to the active store before it is deleted; therefore, there are no
orphan files left on the offline store. If a user selects and deletes a
directory, then the metafiles in the directory are not restored before
they are deleted. This can lead to orphan files on the offline store.
It is recommended that you run the FLM Manager periodically (typically monthly) using the option to check against the database. The run
time will increase with the number of migrated files, so it may be desirable to initiate the scan at the beginning of the off-peak period. Files
that are missing their metafile are detected and listed as errors. These
files can be displayed by selecting Missing Active Files in the Show filter. The list can be further narrowed by entering a directory name in
the name filter and clicking the Find button. Once the collection of files
is narrowed down to those that should be cleaned from the secondary
storage, select the remaining files and click Delete. The administrator
can also choose to repair the metafiles by clicking Repair or restore
the files by clicking Restore.
Depending on how actively the FLM server is migrating and restoring
files in combination with the amount of data under management, it is
recommended that the indices on the SQL Server database be defragmented if FLM performance starts to suffer.
Introducing File Area Networks
FLM Deployment Tips and Tricks
During the first (major) migration, there is a recommended workflow:
1. Scan shares with initial estimates for the rule settings. Keep a target amount of free space in mind.
2. Tune the policy such that the amount of file data selected by the
scan closely equals the amount of free space desired.
3. Some settings to consider:
At the Active storage level, consider and enter values for:
File Retention—Refer to the discussion of grooming on
page 164 and the interaction between the life span of metafiles in snapshots or tape backups and data in the offline
store.
Restores Per Minute—In general, restore throttling should be
avoided unless you specifically need it for your site. Leave this
option turned off unless the restore activities impact the network or storage device load. If this happens, then the policy
may need to be examined for rules that migrate files that are
accessed frequently. Restore throttling can be helpful to mitigate the impact of a user running full text searches across
migrated files and thereby triggering mass file restore operations. Administrators need to decide whether they want to
deny users this type of search access to their migrated files by
using restore throttling.
File Blocking—Based on the judgment of administrators, for
example, marketing user folders are allowed to contain MP3
files, but all other shares are not (assuming that marketing
has a legitimate business reason for having MP3 files, while
others do not).
Blocked Restores—Set to the IP address or user account of
machines running scheduled full virus scans or CIFS-based
backups, file indexing, reporting, or other applications that
open large numbers of files on the active store via CIFS.
If there are files that will never be migrated, such as frequently used executables stored on the storage device, you
may want to exclude files with these extensions from scanning
via the Data ONTAP fpolicy command at the storage device
command prompt. This can speed client access to these files
when they are needed. When inclusions or exclusions are
configured on the fpolicy command line, it is important that
FLM migration rules are configured so that files eliminated
from fpolicy scans are never migrated. If such files are
migrated, they will not be restored in response to client
access operations.
At the policy level, consider and enter values for:
Exclude directories—There are two key scenarios in which
directories would be excluded from a scan (for migration):
When the directory contains backup files, administrator
files, system files, and other files essential to the operation of the storage device.
Directories that contain data pertaining to live projects. It
is recommended that any temp and system directories be
excluded.
NOTE: FLM automatically excludes snapshot (~) directories
from migrations.
Migration bandwidth—It is recommended that you leave this
blank unless the migration is taking too much bandwidth or
the administrator needs to run the migration during business
hours.
Schedule—It is recommended that you leave this at the
default value of zeroes initially as the initial migration (usually
the most significant) is a planned data movement exercise.
After the initial migration, the schedule can be set to run
weekly or monthly during off-peak hours (when users or applications are not accessing the file system).
Scan Only—It is recommended that you leave this setting at
No so migrations can be performed. This policy can be set to
Yes when the administrator does not want to effect any real
data movement (migration or deletion), such as when running
“what if” scenarios to evaluate the potential effects of a
policy.
At the rule level, consider and enter values for:
Migration Text—This message text is shown to the user when
the user accesses a metafile and the file contents cannot be
restored. It is recommended that you leave this blank if prolonged outages of the offline store are not expected. This
setting can be used to create custom scripts that tell if a file
has been migrated or not based on the text in the metafile.
Action—If delete is used, it is best to also select the option to
“Copy to offline storage before deleting” as a wastebasket
approach to deletion. This covers any mistakes in configuring
the policy. See the discussion of grooming on page 164.
Disabled—It is possible to disable individual rules within a policy. When disabling individual rules, it is important to
understand that other rules in the policy will run despite a single rule being disabled. Consider a policy whose first rule
migrates foo*.doc and bar*.doc to offline store A and whose
second rule migrates *.doc files to offline store B. When both
rules are enabled, *.doc files are migrated to separate offline
stores based on whether they match foo*.doc or bar*.doc (offline
store A) or only the general *.doc pattern (offline store B). If
the first rule is disabled and the policy is run, all the *.doc
files will match the second rule and be migrated to offline
store B.
Scope—Most rules should be scoped. The rule will affect
shares that are entered. See also the discussion below on
rule setup.
High Watermark—It is recommended that you leave this at the
default value of 0 percent (off) unless a certain percentage of
usage for the volume hosting the share should be used as a
trigger for performing migrations.
Include files—Use to specify migrations based on certain types
of files.
Exclude files—Excludes database files and certain executables (for example, *.exe, *.bat, *.js, *.com).
File Age Criteria—Use the Brocade FileAges utility to obtain an
understanding of the relative age of files on a given share, to
formulate the initial estimates.
Creation Date (CIFS only)—The date the file was created.
This is the least useful indicator for administrators to
assess the suitability of a file for migration. Separately, an
irregularity could potentially occur such that the creation
date could look as if it were later than the modified date.
This would most likely be a result of the file having been
restored from another system. In that case, the file system creates the file with a new timestamp, but keeps the
timestamp for the modification date from the previous
copy.
Access Date—The date the file was last opened by a user
to read its contents. This date is also updated when certain applications and processes, such as backup or
reporting, access the file. If you use this value in setting
migration criteria, 180 days is the recommended cutoff.
Modified Date—The date the file was last written to.
Whereas the access date can be misleading (for example:
all files could show as being accessed in the last 30 days,
because an application has scanned the entire file system), the modified date can be used as one of the key
parameters for migrating data to secondary storage. If
you are using this date, 180 days is the recommended
cutoff.
File Size Criteria—For performance reasons, experiment with
the minimum and maximum file size settings with an eye
toward migrating the maximum amount of data with the fewest
files. In order to avoid long restore windows and
potential network bottlenecks, some large files should be kept
on the active store rather than being migrated to the offline
store. When determining file size criteria, consider that NFS
metafiles are 4094 bytes, that CIFS metafiles consume two
inodes (for main and alternate data streams) plus 4 KB if no
migration text is configured or 8 KB if migration text is
configured.
File Attribute Criteria—It is recommended that you set the Hidden and System options to Not Set such that no hidden or
system files are migrated.
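Taken together, the age, size, and attribute criteria above amount to a simple predicate per file. The following Python sketch is hypothetical: the function name and defaults are illustrative, not FLM's actual rule engine, but the thresholds mirror the recommendations above (180-day age, 12 KB minimum, no hidden or system files).

```python
from datetime import datetime, timedelta

def eligible(size_kb, modified, hidden=False, system=False,
             min_size_kb=12, age_days=180, now=None):
    """Mirror the rule criteria discussed above: modified at least
    age_days ago, at least min_size_kb, and neither hidden nor system."""
    now = now or datetime.now()
    old_enough = now - modified >= timedelta(days=age_days)
    return old_enough and size_kb >= min_size_kb and not hidden and not system

now = datetime(2007, 5, 1)
print(eligible(500, datetime(2006, 1, 1), now=now))               # True
print(eligible(500, datetime(2007, 4, 1), now=now))               # False: too recent
print(eligible(8, datetime(2006, 1, 1), now=now))                 # False: below 12 KB
print(eligible(500, datetime(2006, 1, 1), hidden=True, now=now))  # False: hidden
```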
Keep at least 15 percent of free space on the active store to accommodate restore operations of migrated files. In addition, Network
Appliance may recommend a certain amount of free space for performance reasons. Contact your Network Appliance representative for
additional information.
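The 15 percent guidance is easy to check programmatically. A trivial illustrative sketch (the capacity figures are made-up examples):

```python
def enough_headroom(total_gb: float, used_gb: float,
                    min_free_pct: float = 15) -> bool:
    """True if the active store keeps at least min_free_pct free for restores."""
    free_pct = (total_gb - used_gb) / total_gb * 100
    return free_pct >= min_free_pct

print(enough_headroom(1000, 900))  # False: only 10% free, restores may fail
print(enough_headroom(1000, 800))  # True: 20% free
```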
To prevent thrashing, start with one week as a check to see how many
restore actions are generated within a week of files being migrated (a
bar chart is also available for analysis of restores). If the policy is agebased, then too many restores may mean that you will want to adjust
the file age criteria, for example, increase from 120 days to 180 days
or vice versa.
Rules should be ordered from specific to general so that the specific
rule has the first opportunity to act on files before the general rule is
invoked to process the remaining files. For example, a rule to migrate
*mkt*.doc files should be earlier in the order than a rule to migrate
*.doc files in the same policy. If files matching *.doc are migrated first,
no files will remain for the *mkt*.doc pattern to match.
The same logic applies to migrating files older than one year to
offline A and files older than two years to offline B. The administrator
needs to put the files older than two years onto offline B before running the sweep for files older than one year. If you do not migrate the
files in this order, everything older than one year will be migrated to
offline A.
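Both orderings come down to first-match-wins evaluation. The pattern case can be sketched in Python; the rule sets below reuse the hypothetical foo/mkt examples from the text.

```python
from fnmatch import fnmatch

def first_matching_store(filename, rules):
    """Rules run in order; the first matching pattern claims the file."""
    for pattern, store in rules:
        if fnmatch(filename, pattern):
            return store
    return None  # unmatched files stay on the active store

specific_first = [("*mkt*.doc", "offline A"), ("*.doc", "offline B")]
general_first = [("*.doc", "offline B"), ("*mkt*.doc", "offline A")]

print(first_matching_store("q3_mkt_plan.doc", specific_first))  # offline A
print(first_matching_store("q3_mkt_plan.doc", general_first))   # offline B:
# the general *.doc rule claims the file before the specific rule is reached
```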
A subdirectory copy always restores all migrated files in the subdirectory tree. This is the desired behavior, otherwise there would be two
metafiles pointing to the same offline file. When one of those metafiles is restored and the offline file is groomed, the other metafile
would be orphaned.
A subdirectory move may or may not restore migrated files in the subdirectory tree. If the move is within the same share, metafiles can be
moved as desired. For example, renaming the subdirectory from a
command prompt (or Windows Explorer) renames the subdirectory
without attempting to copy the files, thus leaving the metafiles in
place. Similarly, using the rename command to move the subdirectory
from a command prompt (or using Windows Explorer to drag and drop
the subdirectory) within the same share preserves migrated files.
However, if one or more files and/or subdirectories within the directory
tree are in use, the move operation copies as many files as possible,
then prompts with a “cannot delete source” message. This then
results in migrated files being restored, since a copy was really performed, even though the user intended to perform a move. The easiest
way to move directories within a share without restoring migrated files
is to do so from the server where FLM is installed or to use the restore
blocking feature.
There are two ways to locate a specific file among migrated, restored,
or deleted files. In either case, the administrator can use wildcard
characters (*) as part of the search:
Use the search parameter in the various views in the FLM GUI.
Use the Migrated File Tool in the FLM Manager, in which the view
can be filtered to display only migrated files prior to the search.
If the FLM database logs fill up, you have options:
Turn off the logging of transactions in SQL Server.
In the case of disaster recovery in response to database corruption, FLM can restore migrated files using the information in the
files on the online and offline storage as long as the offline location has not changed since the files were migrated. So, you can
turn off the transaction log. Brocade advises you to take frequent
FLM configuration backups in this case.
Use frequent transaction log backups.
You can schedule a weekly or biweekly database backup and daily
transaction log backup. This will free up the space for the transaction logs, and if you lose the database for some reason, you can
recover the data by restoring the last database backup and then
restoring every log backup since the database backup.
Use checkpoints.
Checkpoints flush “dirty” data and log pages from the buffer
cache of the current database, minimizing the number of modifications that have to be rolled forward during a recovery.
Checkpoints save time in a subsequent recovery by creating a
point at which all modifications to data and log pages are guaranteed to have been written to disk.
Defragment the database.
This does not require stopping the FLM service, but take into
account that this process is typically slow. It has been known to
take more than a day to defragment some databases.
Re-index the database.
This requires the FLM service to be stopped, but is faster than
defragmentation.
It is generally acceptable to switch to the “simple recovery model,”
because a “point in time” restoration of the database is not needed. If
the database is lost, FLM can still restore the migrated files as long as
the offline store has not been changed since the files were migrated.
To restore previously configured policies and rules, FLM configuration
backups need to be taken. In the event of database loss, the only thing
that will not be restored is the list of recently migrated and restored
files, since this information is stored in the database.
Data Availability and Recovery
FLM can be used to support DR site configurations. In this case, the
DR site should have a similar configuration to the production site. For
example, both sites might have one storage device and one NearStore
device. If the production site goes down, there are steps to follow to
use FLM in the DR site. The following procedure assumes that the DR
site is a DR-only site and will be used only while the production site is
being brought back online. The DR site is not intended for normal day-to-day operations.
To prepare:
1. Install FLM on a standby system at the DR site.
2. Export only the FLM configuration information, not the complete
database, from the FLM machine at the production site to a location accessible from the standby FLM system at the DR site.
3. Start the FLM machine at the DR site and keep the FLM service
stopped.
If the production site goes down:
1. Import the FLM information from the location where it was saved
to the standby FLM machine at the DR site. In environments performing NFS migrations, the entire database must be restored at
the DR site. If only CIFS file data has been migrated, the restore
operation at the DR site need only restore the FLM configuration
information.
2. Change the name of the active store and offline store to reflect
the names of the storage device and NearStore at the DR site.
NOTE: This step is not needed if you are using vFiler DR (NetApp
MultiStore) to fail over from NetApp filers at the production site to
the filers at the DR site.
3. Enable the database lookup feature in FLM by adding a value to
the registry on the machine hosting FLM at the DR site:
Run regedit.exe.
Navigate to: HKLM\Software\NuView\FLM\Server key
Add a DWORD value to this key by right-clicking and selecting
the New-> DWORD value menu item. The value name is FPRequestLookupOffline and the value must be set to 0x1.
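For administrators who prefer to script this change rather than use regedit interactively, the same setting can be captured in a .reg file. The contents below are derived directly from the key path and value given in the steps above; apply it with standard Windows registry tooling.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\NuView\FLM\Server]
"FPRequestLookupOffline"=dword:00000001
```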
When the DR site becomes operational, the key objective of the
administrator is to ensure access to migrated data from the offline
store (at the DR site), as compared with running the file management
policies. Therefore, the standby FLM system at the DR site does not
require the complete FLM database. Only the configuration information is required so that the database lookup function in FLM returns
the correct location of the migrated file during the restore process. The
FLM backup script allows administrators to back up and restore only
the configuration information. Since the configuration information consumes very little space in the database, this removes the need for a
SQL Server instance at the DR site and allows use of the MSDE
instance that comes standard with FLM.
NOTE: In this configuration, the database at the DR site will not keep a
record of restore actions, since it does not have the detailed records
portion of the database.
At this point, for CIFS data, only the configuration portion of the database has been used at the DR site. When the production site is back
up (and the production storage device/NearStore have been synchronized with those at the DR site), some orphaned records will be
present in the FLM database. This is because some migrated files may
have been restored to the active store at the DR site, which will appear
as migrated files in the database of the FLM system at the production
site. These records can be easily cleaned up by using the Migrated File
Tool in the FLM Manager.
If the full database has been used at the DR site (as is needed for NFS
data), the only requirement at the production site is to re-import (to the
FLM system at the production site) the detailed records portion of the
database. Do not re-import the configuration portion of the database.
The configuration portion for the FLM system at the DR site includes
details for the storage device/NearStore at the DR site, which have different names from the storage device/NearStore at the production
site.
The unavailability of the FLM server has no impact on access to files
that have not been migrated off the active store. If a standalone FLM
server goes down, migrated files will not be restored to the active storage device. Users who attempt to open a metafile will see the preconfigured message in the metafile (set by the FLM administrator during the FLM setup). Access to non-migrated files continues without
interruption in the absence of the FLM server.
FLM can be installed on a Microsoft Cluster Server (MSCS) cluster for
high availability. The key factor in deciding whether clustering is
required is the duration of time for which an IT department is willing to
accept denial of access to migrated files. If denial of access to
migrated files is not an option, then it is recommended that you cluster
the FLM server. If a certain amount of downtime for migrated files is
tolerable, the solution would fall under one of the following scenarios:
Configuration 1. The FLM server and its SQL Server database are
installed on separate machines. An additional Windows machine is
available to host FLM if the primary FLM system fails.
To prepare for potential failure of the FLM server:
1. Install FLM on a standby system.
2. Configure FLM on the standby system so that it will connect to the
SQL Server database when FLM is started.
3. Stop the FLM service on the standby system and set the FLM service to either disabled or manual startup in the Windows Service
Control Manager. This ensures that the FLM service will not start
inadvertently if the host system is rebooted. Only one FLM service
can be connected to a given database instance or active storage
at a time.
If the production FLM server fails:
1. Set the FLM service on the standby system to automatic startup.
2. Start the FLM service on the standby system.
To prepare for potential failure of the FLM server host:
1. Install FLM and SQL Server on the standby system.
2. Export the FLM configuration information from the active FLM
machine to a location accessible from the standby machine.
The FLM backup script provides options allowing an administrator to back up only the configuration information (name
and settings for the active storage and offline stores), and not
other portions of the database including policy information
(details of policies and rules) and detailed records (records of
migrated, restored, and deleted files). If the standby FLM system is expected to be used as the primary system going
forward, export the full database (configuration information,
policy information, and detailed records).
Configuration 2. FLM Server A is co-hosted with SQL Server, with Storage Device A as its active store and Offline A as its offline store. FLM
Server B is co-hosted with SQL Server, with Storage Device B as its
active store and Offline B as its offline store. Both FLM Server A and
FLM Server B are at the same location.
If one of the production FLM servers fails, configure the failed server’s
active store as an additional active store in the surviving FLM server.
NOTE: Offline stores configured on the failed server must not have
been moved or renamed after files were migrated there.
The metadata stored in the metafiles created by the failed FLM server is used by the surviving FLM server to accurately restore migrated files in response to client access requests.
FLM supports DFS single active target links, which allows you to design highly available second-tier architectures with minimal human intervention. Should a DFS link target be changed, the FLM server should be restarted. A failure of the secondary storage will cause currently running migration or restore tasks to fail, and they will have to be rerun.
In this configuration, it is important to use Fully Qualified Domain Names (FQDNs) when designating active and secondary stores. Because of the inherent problems with NetBIOS name resolution, Microsoft chose DNS resolution for Active Directory. Issues can arise if you specify \\NetBIOS_Name\Share rather than \\FQDN\Share, because a NetBIOS lookup may not return accurate results. Take care to ensure that reverse name lookups return the expected results: the IP address returned by resolving the specified name should return exactly the same name when a reverse DNS lookup is performed with that IP address.
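As a quick sanity check, the forward/reverse agreement described above can be verified programmatically. The following Python sketch is illustrative only (it is not part of FLM); it assumes the machine running it uses the same DNS servers as the FLM environment:

```python
import socket

def dns_round_trip_ok(fqdn):
    """Return True when `fqdn` resolves to an address whose reverse
    (PTR) lookup yields the same fully qualified name."""
    try:
        addr = socket.gethostbyname(fqdn)                    # forward lookup
        name, _aliases, _addrs = socket.gethostbyaddr(addr)  # reverse lookup
    except (socket.gaierror, socket.herror):
        return False  # name does not resolve, or no PTR record exists
    return name.lower() == fqdn.lower()
```

Run this against the FQDN you plan to use in the active-store path (for example, `dns_round_trip_ok("filer01.example.com")` with your own filer name) before configuring the store.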
Suppose you have a configuration with a storage device and a NearStore, in which SnapMirror is used to replicate data to the NearStore
and users have failed over to the NearStore on failure of the active
store. (Assume that the SnapMirror is broken as part of the failover
process.) What is the best action to take?
Under normal circumstances, in which the NearStore is hosting the
offline store, if users try to access the replicated metafile on the SnapMirror destination, they will see the message configured by the FLM
administrator in the metafiles. Before users begin using the NearStore
as an active store, the administrator will need to configure (add) the
NearStore as an active store in the FLM GUI. When users access metafiles on the NearStore, FLM will accurately restore the migrated file
from the offline store (a share on the NearStore). When the SnapMirror
is reversed (SnapMirror resynchronization) and the metafiles on the
NearStore have been restored to the original active store, the restored
metafile will correctly reference the migrated data.
NOTE: When SnapMirror replicates a metafile from an active storage
device to any other storage device, it does not cause the underlying file
to be restored to the active store, because SnapMirror does not make
CIFS calls.
CIFS file restore actions and file blocking will continue to function when the SQL Server database goes down, as long as the FLM service has not been restarted. If the FLM service is restarted, only CIFS file restores will still function. NFS file restores require that the database be operational.
Administrators should consider how long metafiles can continue to
exist in snapshots and backup tapes when they are setting the retention period. If metafiles are restored from backups or snapshots and
are then accessed on the active store, it would be desirable to have
this access trigger a restoration of the actual file contents rather than
an error message. An administrator might want to retain migrated files
on the offline store, as long as metafiles referencing the offline content exist in any of these venues (snapshot or tape). For safety, the
grooming period in the FLM GUI on the Active Storage page should be
greater than the maximum snapshot schedule. Otherwise, when
SnapRestore restores an old snapshot containing a metafile, the result
is an orphan metafile whose migrated file might have already been
removed from the offline store.
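The safety rule above can be expressed as a one-line check. This is an illustrative sketch, not an FLM feature; the function and parameter names are invented:

```python
def grooming_period_is_safe(grooming_period_days, snapshot_schedule_days):
    """Return True when the grooming period outlives every snapshot
    schedule, so a metafile restored from a snapshot can never point
    at a migrated file that grooming has already removed."""
    return grooming_period_days > max(snapshot_schedule_days)
```

For example, a 30-day grooming period is safe against 7- and 14-day snapshot schedules, while a 7-day grooming period against a 14-day snapshot schedule risks orphan metafiles.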
The FLM server performs the database grooming. The database
grooming places no additional load on the active store or the offline
store and minimal load on the FLM server. Database grooming is run
at midnight using the grooming frequency and record retention values
configured in the FLM general settings page.
There are different ways to restore a metafile that has been accidentally deleted by a user, depending on how the user deleted the file:
1. If the user deleted an individual file, the file will have been
restored before deletion. In this case, the file has entered the
grooming period in FLM and will remain on the secondary store for
the retention period, after which it will be deleted. Refer to the
question in this section that discusses the grooming period.
2. If the user has deleted an entire directory, the files typically will be deleted without being restored to the active storage device. In this case, the migrated files will remain on the offline store until they are cleaned up using the Migrated File Tool.
There are two main options for restoring the deleted metafile to the active store, with slight differences depending on cases 1 and 2, as described above:
1. If the metafile is going to be recreated before the grooming period
expires (before the migrated file on the offline store is purged), the
administrator can use the Migrated File Tool in the FLM Manager
to locate the migrated file on the offline store. Once the migrated
file has been located, the administrator can recreate the metafile
on the active store. This is the most efficient way of restoring lost metafiles.
If the metafile has not been recreated, and the grooming period
has passed, the administrator must locate the metafile on tape in
order to restore it (as would be done in the absence of FLM). The
administrator must also locate the copy of the migrated file on
tape and restore the copy to the secondary store. Once the metafile and the corresponding migrated file have been restored from
tape, the file contents can be restored to the active store either by
using Migrated File Tool or by simply opening the metafile on the
active store (which will cause the migrated file to be restored to
the active store).
2. The administrator can also use a snapshot to restore the metafile.
To restore a metafile from a snapshot directory, the use of single
file SnapRestore is recommended because it does not use CIFS
and thus will not trigger a restore of the file. An alternate approach is to use Windows Explorer to drag and drop the metafile from its location in the snapshot to its original location. Versions prior to FLM 3.0 required that this operation be performed from the FLM server or from a restore-blocked machine or account. With FLM 3.0, it is possible to drag the metafile from its location in the snapshot to a live file system from any client machine.
It is important that the security style of the default qtree be set to NTFS on any volumes where CIFS metafiles are to be restored from snapshots. This is required in order for the storage device to make the FLM metadata stream visible to FLM on the metafiles contained in snapshots. If this setting is not present, the contents of the metafile will not be restored in response to a CIFS client access after the metafile has been copied from its location in the snapshot to the live file system.
Communicating with Users Before Rollout
The success of FLM depends on two main factors. The first is ensuring
that the right criteria are configured. The second is making administrators and end users aware of:
1. Changes in file sizes after migration such that users do not delete
files that are 0 KB in size (see Figure 59 on page 157).
2. The side effects of Windows Explorer on FLM. For example, users
need to be aware that certain actions in Windows Explorer will
cause a metafile to be restored.
The following actions in Windows Explorer will cause migrated files
to be restored:
• Selecting the file via single- or double-click.
• Displaying file properties via the context menu.
• Placing the mouse cursor over a file long enough that Explorer displays a tool tip for the file.
NOTE: Tool tips can be suppressed by deselecting “Show popup description for folder and desktop items” in the View tab of the Folder Options dialog in Windows Explorer.
• Opening files in Windows Explorer when it is displaying thumbnails of the files, for example, PowerPoint, BMP, and media files when “Enable web content in folders” is selected in the Folder Options dialog.
• Performing a text search, which results in the restoration of all files scanned. When FLM is configured to set the offline attribute on migrated files, users will have to select additional options in the Windows Explorer Search pane before the contents of these files will be searched. Depending on the version of Windows on the client machine, this option may be labeled “search tape backups” or “search slow files”.
A good practice is to ask users to not use the thumbnails view and to
turn off tool tips in Windows Explorer. This should prevent unnecessary
restoration of file contents to the primary storage. Depending on the
particular environment, it is recommended that you use Group Policies
to enforce some of these settings in Windows Explorer.
Within Active Directory, Group Policy Objects (GPO) can be defined to
prohibit Windows Explorer searches. The following policy names can
be used:
• Remove Search button from Windows Explorer
• Remove Search menu from Start menu
• Remove UI to change menu animation setting
Chapter Summary
Over the past few years, businesses have experienced unprecedented
growth in file data, and as data volume grows, it becomes more costly
to store the data and more complicated to manage it. Most often a
very large percentage of all stored files are not being used actively.
Storing this inactive data on primary storage devices is expensive, inefficient, and no longer necessary in NetApp environments if you deploy
Brocade FLM.
Brocade FLM is a data management solution that manages the lifecycle of file data on NetApp filers. It bridges the gap between primary
and secondary storage, uses automated policies to classify and manage file data, and simplifies compliance with information retention
regulations. FLM accomplishes this in three steps: classifying data,
moving inactive data to secondary storage, and transparently restoring
data to primary storage on demand. When FLM moves inactive files to
secondary storage, it creates a placeholder metafile on the primary
storage. When end users double-click the metafile, FLM restores the
file—taking just a little more time than if it were the actual file.
Brocade FLM has a powerful policy creation, management, and execution engine. For example, you can configure automatic deletion of files
that have reached the end of their useful existence. Brocade FLM has
cost-saving benefits, and one of these is that fewer tapes need to be
used for backups. FLM can also be used to support disaster recovery
(DR) site configurations; if the production site should fail you can use
FLM in the DR site.
Brocade UNCUpdate
Brocade UNCUpdate is discussed at a high level in “Chapter 3: Building Blocks” starting on page 31. This chapter provides more detailed
information about UNCUpdate in the following sections:
• “System Requirements” on page 184
• “How It Works” on page 184
• “Deploying UNCUpdate” on page 185
• “Troubleshooting” on page 189
• “Chapter Summary” on page 190
Brocade UNCUpdate enables organizations to discover, report, and fix
files containing Universal Naming Convention (UNC) references, OLE
links, and shortcuts to storage resources before and during migration
and consolidation actions.
Brocade UNCUpdate provides location-independent access to file data
with the option to update UNC references to target data that has been
moved to new storage resources. And it can report on files that contain
UNC references to other files that are being considered for migration.
It can report on UNC entries encountered in a wide variety of files and
provide the option to update UNC references to point to the referenced
files at their new storage locations. You can perform tasks using the
UNCUpdate client or a DOS command-line.
Brocade UNCUpdate addresses the following entities:
• OLE links
• UNC references
• Microsoft Word/PowerPoint/Excel files
• Text files
Chapter 9: Brocade UNCUpdate
System Requirements
• Microsoft .NET Framework 2.0
• Microsoft Internet Explorer 5.5 or later
• Microsoft Office 2003, required to update Microsoft Office documents
• Microsoft Office XP, required for updates when the new location is in a domain-based DFS root
• Windows Scripting Host 5.6
• Microsoft Windows 2000 Server with SP2 or later, Microsoft Windows XP Professional, or Microsoft Windows Server 2003
• At least 350 KB of free disk space
How It Works
Brocade UNCUpdate runs in two modes: search and modify.
• Search mode enables you to search files for UNC paths in both ASCII and UNICODE encodings.
• Modify mode enables you to search for and change UNC paths. In modify mode, UNCUpdate invokes the application that created a file to modify it.
In a typical operation, UNCUpdate runs from a workstation to scan a
network share on a server for files containing references to UNC paths.
In environments where users may have shortcuts on their Windows
desktop targeting network resources, UNCUpdate can be run on users’
machines to search for and update these shortcuts.
In a data migration scenario, Brocade UNCUpdate scans directories for
files containing UNC paths that might need to be updated to reference
items at their new location as part of a migration. UNCUpdate enables
you to locate embedded UNC paths stored in a file in clear text and to locate and modify UNC paths in text files, Microsoft Windows Explorer shortcuts, Microsoft Word, Microsoft Excel, and Microsoft PowerPoint documents.
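To make the idea concrete, here is a minimal Python sketch of a search-mode scan, covering plain ASCII text and little-endian UNICODE text by stripping NUL bytes. It is an illustration of the concept, not UNCUpdate’s actual implementation:

```python
import os
import re

# A UNC path has the form \\server\share; the character class below is a
# simplification of the names Windows actually allows.
UNC_PATTERN = re.compile(rb'\\\\[\w.$-]+\\[\w.$-]+')

def find_unc_paths(root):
    """Return a dict mapping file path -> sorted list of UNC paths found."""
    hits = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, 'rb') as f:
                data = f.read()
            # Scan the raw bytes (catches ASCII text) and the bytes with
            # UTF-16LE NULs stripped (catches little-endian UNICODE text).
            found = UNC_PATTERN.findall(data)
            found += UNC_PATTERN.findall(data.replace(b'\x00', b''))
            if found:
                hits[path] = sorted({m.decode('ascii', 'replace') for m in found})
    return hits
```

A real tool would also honor include/exclude patterns and file-type handlers; this sketch only shows how embedded clear-text UNC references can be located.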
Deploying UNCUpdate
Before you run UNCUpdate to modify files, make sure that the files you
want to modify are closed. UNCUpdate does not function correctly
when files are open while they are being updated.
Close all Microsoft Office applications before you run UNCUpdate to
modify Microsoft Office documents. UNCUpdate does not function correctly when Microsoft Office applications are open and UNCUpdate is
modifying Microsoft Office documents.
NOTE: You might find discrepancies in the number of UNC paths reported by UNCUpdate when it performs a search. For example, some applications store data in both ASCII and UNICODE. When UNCUpdate performs a search on those files, it does not distinguish between the formats and counts each UNC path found in each encoding.
Command-Line Interface
You have the option of running UNCUpdate from a Command window.
Status information appears on the Command window and a report is
generated that lists the files scanned or processed.
The Desktop
The UNCUpdate client enables you to quickly create configuration templates from the Configuration tab.
Figure 62. Brocade UNCUpdate client, Configuration tab
When you run the configuration to scan for and/or modify UNC paths
in documents, you can click the Status tab to view the operation in
progress. After the operation is completed, you can click the Report
tab to see the list of files processed. Use the Toolbar or File menu to perform operations for the active tab, such as saving a configuration template, status information, or a report.
NOTE: The UNCUpdate client can be resized for a better view of status
information and reports.
Search and Modify Modes
In search-only mode, the following is a typical task flow:
1. Create a template to search only for UNC paths.
2. Run the configuration. You can click the Status tab in the Console
to view information about the operation in progress.
3. View the output.
In modify mode, the following is a typical task flow:
1. Create a directives file.
2. Create a template to modify UNC paths.
3. Run the configuration. You can click the Status tab in the Console
to view information about the operation in progress.
4. View the output.
The Directives File
The directives file specifies the search criteria that UNCUpdate uses to
search for files. You must specify a directives file when you run UNCUpdate in modify no action or modify mode. (The file is optional when you
run UNCUpdate in search mode.) When you do not specify a directives
file in search mode, UNCUpdate searches all the files in the path you specify.
The directives file contains XML instructions detailing how UNCUpdate searches or
modifies files. These instructions include wildcard patterns for file
inclusion/exclusion, the search strings describing the paths to be
located, and scripts or programs required for file modification.
Files are selected for inclusion/exclusion based on the <Includes> and <Excludes> tag sections. The following example illustrates the use of these tags to exclude all EXE, ZIP, JPEG, MPEG, and GIF files:
<Include name="All files" pattern="*.*"/>
<Exclude name="All EXE files" pattern="*.exe"/>
<Exclude name="All JPG files" pattern="*.jpg"/>
<Exclude name="All MPEG files" pattern="*.mpeg"/>
<Exclude name="All MPG files" pattern="*.mpg"/>
<Exclude name="All GIF files" pattern="*.gif"/>
<Exclude name="All ZIP files" pattern="*.zip"/>
The exclude list overrides the include list. If a file matches both lists, it is excluded. The <Includes> section is not required; if it is not present, UNCUpdate includes all files.
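The selection semantics just described can be sketched with Python’s fnmatch wildcards. This mirrors the rules above (exclude overrides include; an absent include list means all files) but is an illustration, not UNCUpdate code:

```python
import fnmatch

def is_selected(filename, includes, excludes):
    """Apply UNCUpdate-style selection rules to a single file name.

    `includes` and `excludes` are lists of wildcard patterns such as
    "*.*" or "*.exe". Matching is case-insensitive, the exclude list
    overrides the include list, and an empty include list selects all.
    """
    name = filename.lower()
    included = (not includes) or any(
        fnmatch.fnmatch(name, p.lower()) for p in includes)
    excluded = any(fnmatch.fnmatch(name, p.lower()) for p in excludes)
    return included and not excluded
```

With the example patterns above, `is_selected("report.doc", ["*.*"], ["*.exe", "*.zip"])` is True, while `is_selected("SETUP.EXE", ["*.*"], ["*.exe"])` is False.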
Search strings are specified in the directives file through the <MatchTable> tag section. All searches are case-insensitive. The following example searches for the paths \\HOUS0023\Users and \\SF0132\Oracle:
<Match src="\\HOUS0023\Users"/>
<Match src="\\SF0132\Oracle"/>
The <Modules> tag is used to associate a modification module
with a specific type of file. This tag is only used when a modify
operation is requested. The following is an example Modules entry:
<Type ext=".txt" method="NVText" file="NVText.jhe"/>
You can create a template to search only for UNC paths or you can create a template to modify UNC paths. Before you run the template in
either of these modes, you must create a directives file that specifies
the search criteria (see above).
First you create and save a configuration template. You create a
template that will run in search mode because you only want to
know what files contain UNC paths in a location you specify. You
do not want to modify the files.
UNCUpdate provides two modify modes: Modify and Modify No
Action. Before you run the template in either of these modes, you
must create a directives file that specifies the search criteria.
• Modify No Action. UNCUpdate scans for valid UNC paths that match the search criteria specified in the directives file and outputs the list of files to the file you specify in the Output File Path text box. Use this mode when you want to test the search criteria specified in the directives file before UNCUpdate modifies files.
• Modify. After you test the directives file and make any necessary changes, you update the template to run in modify mode.
After you run the configuration template, UNCUpdate shows information about the files scanned in the Report tab of the Console. You can also see status information in the Status tab. After you review the report of files scanned and status information, you can save the information into files. Normally, UNCUpdate generates the file in XML format, but you can also generate it in CSV format.
Important Notes
Before deploying Brocade UNCUpdate, consider the following:
• UNCUpdate does not support files encoded in UCS-4 UNICODE (32-bit UNICODE, typically used only in very complex foreign character sets).
• UNCUpdate does not support Microsoft Office documents that are protected by internal passwords, document protection, or read-only options. When UNCUpdate encounters these files, it skips them and logs information about them to the XML output file.
• UNCUpdate does not support multi-byte search strings.
• Only UNICODE UCS-2 little-endian text files are supported.
• Only Latin character sets are supported.
• If you want to update shortcuts created with Windows XP or Windows Server 2003, you must run UNCUpdate on Windows XP or Windows Server 2003 to make the updates.
Troubleshooting
Start by checking the health of your UNCUpdate configuration to ensure that the following conditions are met:
• Sufficient free space on the drive where UNCUpdate is installed
• Sufficient free space on the drive where the output report file will be written
• The account used to run the UNCUpdate console has administrator privileges on the system where it is installed
The Allow Office Dialogs option enables you to stop running UNCUpdate when a Microsoft Office application opens a dialog box during the
process of opening a file.
Chapter Summary
Brocade UNCUpdate enables organizations to discover, report, and fix
files containing UNC references, OLE links, and shortcuts to storage
resources before and during migration and consolidation. It provides
location-independent access to file data with the option to update UNC
references to target data that has been moved to new storage
resources. It can also report on files that contain UNC references to
other files that are being considered for migration.
Brocade UNCUpdate scans directories for files containing UNC paths
that might need to be updated to reference items at their new location
as part of a migration. UNCUpdate enables you to locate embedded
UNC paths stored in a file in clear text. UNCUpdate enables you to
locate and modify UNC paths in text files, Microsoft Windows Explorer
shortcuts, Microsoft Word, Microsoft Excel, and Microsoft PowerPoint
documents. You can perform tasks using the UNCUpdate client or a
DOS command-line.
Appendix A: Reference
This appendix provides reference material for readers who occasionally need greater detail. It includes an overview of some of the more notable items in the hardware and software product lines, and some of the external devices that might be connected to FAN equipment.
Ethernet and IP Network Equipment
Consult the manuals for the products described below for specific
information and procedures.
Ethernet L2 Edge Switches and Hubs
It is possible to use commodity 10/100baseT hubs and/or switches to
attach to Ethernet management ports. It is not recommended that you
use hubs for data links.
IP WAN Routers
When connecting to a WAN, it is usually necessary to use one or more
IP WAN routers. These devices generally have one or more Gigabit
Ethernet LAN ports and one or more WAN interfaces, running protocols
such as SONET/SDH, frame relay, or ATM. They almost always support
one or more IP routing protocols like OSPF and RIP. Packet-by-packet
path selection decisions are made at layer 3 (IP).
Figure 63 shows an IP WAN router from Tasman Networks. There are many other vendors who supply IP WAN routers, such as Foundry Networks, whose modular router is shown in Figure 64.
Make sure that the WAN router and service are both appropriate for
the application. Two considerations to keep in mind when selecting a
WAN router are performance and reliability.
Figure 63. Tasman Networks WAN Router
Figure 64. Foundry Networks Modular Router
Finally, for redundant deployments it is strongly desirable for a WAN
router to support a method such as the IEEE standard VRRP. Such
methods can allow redundantly deployed routers to fail over to each
other and load balance WAN links while both are online.
Storage Equipment
RAID Arrays
For the most part, RAID arrays are used to form one or more logical volumes based on one or more of the following systems:
RAID 0. Multiple disks are grouped together into a non-redundant volume with a striping algorithm. This is similar to concatenating, but
striping improves performance as well as disk utilization efficiency.
The downside to striping is that the loss of any single disk in a RAID 0
group will cause the entire group to become inaccessible, and the data
on the volume will be lost. Most RAID 0 implementations use a simple
round-robin method to spread data across disks.
RAID 1. More commonly known as disk mirroring, a RAID 1 set consists of two or more physical disks that contain exact duplicates of the
same data. The RAID controller is responsible for setting up and maintaining data synchronization within a RAID 1 set.
RAID 5. This system is like RAID 0 in that data is striped across multiple physical disks, but RAID 5 adds a parity element for redundancy.
There are a number of different algorithms for laying out RAID 5 volumes, but the upshot is that such a volume can lose any single disk
without causing an outage and data loss for the entire volume.
Instead, performance is degraded until the failed disk is replaced. Losing a second disk would cause an outage, so there is exposure until
the replacement is completed. RAID 5 volumes tend to run slower than
other options even when they are not in “degraded” mode because of
the extra overhead associated with calculating and writing parity data
with each operation.
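The parity mechanism behind RAID 5’s single-disk tolerance is XOR arithmetic. The sketch below is a simplified illustration (one parity block per stripe; real arrays rotate parity across disks and do this in hardware):

```python
from functools import reduce

def parity(blocks):
    """XOR equal-sized data blocks (bytes objects) column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(surviving_blocks):
    """Reconstruct the block lost with a failed disk: the XOR of all
    surviving blocks in a stripe (data + parity) equals the lost one."""
    return parity(surviving_blocks)
```

For a two-data-disk stripe d0 and d1, the array stores p = parity([d0, d1]); if the disk holding d1 fails, rebuild([d0, p]) recovers it, at the cost of the extra parity computation on every write that the text describes.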
RAID 1+0; 5+1. Many arrays allow combinations of RAID levels. For
example, it is possible to configure physical disks in mirrored pairs
(RAID 1), then stripe mirrors together (RAID 0). This combines the performance of RAID 0 with the reliability of RAID 1. Depending on how it
is done, this is known as RAID 0+1 or RAID 1+0. These solutions are
more expensive than RAID 5 because they use more physical disk
space per usable block of data, but they are also faster and more reliable.
It is also possible to mirror entire arrays together. In this case, each
array may be presenting one or more RAID volumes, which are then
mirrored or replicated between arrays. This is a popular approach for
high-end DR solutions, since each array may be located in different cities for redundancy in the event of a major disaster. In this case, it is
also important that each site still has internal redundancy to handle
most events without triggering a DR site-to-site failover. [1+0]+1 and
[5]+1 are common approaches for high end DR.
Namespace Requirements
There are a number of requirements for using the DFS Namespace
with Brocade MyView.
• The dynamic root must be hosted on Microsoft Windows Server 2003, Enterprise Edition or Datacenter Edition. Dynamic domain-based roots can be hosted on Microsoft Windows Server 2003, Standard Edition if you apply the update available through Microsoft. The update enables the Standard Edition to host multiple domain-based DFS roots. See the Microsoft Web site for more information about this update.
• Users and Groups must be in an Active Directory-based domain.
• Administering a standalone DFS root requires Local Administrator privileges on the server hosting the root.
• If there are shares in your environment that allow their contents to be made available offline, and these shares are going to be available through the namespace, it is important to check that the client operating systems are supported. The supported clients are listed on Microsoft’s Web site; look for the “Distributed Filesystem Frequently Asked Questions.”
• Microsoft DFS server software (the Distributed Filesystem Service, “dfssvc.exe”) must be running on the server that hosts the DFS root. Some Windows Server releases have this service stopped and disabled by default.
• You must have administrator permissions for the server on which you create the standalone or domain-based DFS root.
• When creating a domain-based root, Microsoft Active Directory must already be operational. The account used for MyView must have administrator permissions for the domain. The permissions to the Active Directory subtree containing DFS information can be delegated if it is impossible for the MyView server to run under an account with domain administrator privileges.
• When creating domain-based roots, it is strongly recommended that you run MyView on a system that is a member of the domain that hosts the root. Name resolution problems and permission problems often manifest themselves in scenarios where the administrative machine resides in a Windows NT4 domain and is attempting to create a domain-based root in a separate Active Directory domain.
WAFS Sizing Guidelines
This appendix discusses TPA connections and Brocade WAFS CIFS sizing guidelines.
Supported TPA Connections with WAFS 3.4
Core Node
The maximum number of optimized TCP connections allowed per Core is 2500. The maximum number of concurrent connections does not limit the number of supported Edge WAFS nodes: the incoming connections can come from any number of Edge nodes, as long as the maximum number of supported connections is not exceeded. Below are some representative calculations for the number of supported connections:
• 500 Edge users @ 5 connections per user = 2500 connections
• 250 Edge users @ 10 connections per user = 2500 connections
• 100 Edge users @ 25 connections per user = 2500 connections
NOTE: When the number of connections reaches the limit, new connections are passed through but not optimized.
Edge Node
The maximum number of optimized TCP connections per Edge is 500.
Since some applications use more than one connection for client/
server communication, the number of users supported does not
always equal the number of connections. Below are some representative calculations for the number of supported users based on the
number of connections used by an application:
• 500 users @ 1 connection per user = 500 connections
• 100 users @ 5 connections per user = 500 connections
• 50 users @ 10 connections per user = 500 connections
NOTE: Admission control for inbound TCP traffic to the Edge is limited
to 500 connections. When the number of connections reaches the
limit, new connections are passed through but not optimized. The
Edge node optimizes new incoming connections after the number of
concurrent connections drops below 490 connections.
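The user-count arithmetic in these calculations reduces to integer division against a node’s connection limit. The helper below is illustrative (the names are invented; the limits come from the figures above):

```python
CORE_MAX_CONNECTIONS = 2500  # optimized TCP connections per Core (WAFS 3.4)
EDGE_MAX_CONNECTIONS = 500   # optimized TCP connections per Edge

def supported_users(connections_per_user, node_limit):
    """Users a node can serve before additional connections are
    passed through unoptimized."""
    return node_limit // connections_per_user
```

For example, supported_users(5, CORE_MAX_CONNECTIONS) reproduces the first Core calculation above, and supported_users(10, EDGE_MAX_CONNECTIONS) the last Edge one.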
WAFS Sizing Guidelines for CIFS
Following are sizing guidelines for Common Internet File System (CIFS), a standard remote file-system access protocol for use over the Internet.
Core Node
• A single Core for every 60 Edge nodes, based on a Microsoft environment with Microsoft Office applications.
• If you use TPA, a single Core for every 50 Edge nodes.
• A maximum of 5000 concurrent sessions per Core. Although a Core node can support a maximum of 5000 concurrent sessions, the number of Edge nodes connected to a Core node is limited to 60.
Edge Node
• 250 concurrent sessions per Edge node. A concurrent session maps to 2 or 3 branch office users; this number can vary by plus or minus 30 percent.
• Includes Print, DC, and DNS/DHCP services (4 GB of memory).
• A maximum of 8 printers for each Edge node.
• CIFS connections are established per workstation, per mapped drive, but are removed after 15 minutes of inactivity.
• CIFS sessions assume a typical user mix of directory browsing, file fetches, and “think time.”
WAN Throughput
• 8 Mb/sec for raw WAN throughput
• 24 Mb/sec for effective WAN throughput
Glossary
ANSI American National Standards Institute is the governing body for standards in the United States.
API Application Programming Interfaces provide a layer of abstraction
between complex lower-level processes and upper-level application
development. They facilitate building complex applications by providing building blocks for programmers to work with.
ASIC Application-Specific Integrated Circuits are fixed-gate microchips
designed to perform specific functions very well.
ATM Asynchronous Transfer Mode is a cell-switching transport used
for transmitting data over CANs, MANs, and WANs. ATM transmits
short fixed-length units of data. Characterized by relatively high performance and reliability vs. switched IP solutions.
Bandwidth Transmission capacity of a link or system.
Broadcast Transmitting to all nodes on a network.
Bridge Connects segments of a single network.
Brocade Founded in 1995, Brocade rapidly became the leading provider of Fibre Channel switches. At the time of this writing, the
company carries switches, directors, multiprotocol routers, and a suite
of File Area Networking products.
CAN Campus Area Networks tend to be under a kilometer or so in size.
They are distinguished from LANs in that those tend to be in the ~100
meter range, but more importantly CANs cross between buildings. This
characteristic tends to imply thinner cabling, potentially higher speeds
running over that cabling, and higher locality.
CLI Command line interfaces are text-oriented methods of managing
network devices.
COS Class Of Service represents connection quality: a profile for
attributes such as latency and data-rate.
CRC Cyclic Redundancy Check is a self-test for error detection and
correction. All Brocade ASICs perform CRC checks on all frames to
ensure data integrity.
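As an illustration of the CRC concept (using Python's standard CRC-32; a sketch of the general technique, not of Brocade ASIC behavior):

```python
import zlib

# Sender computes a CRC over the frame payload and appends it.
payload = b"file area network frame payload"
crc = zlib.crc32(payload)

# Receiver recomputes the CRC; a mismatch reveals corruption in flight.
assert zlib.crc32(payload) == crc

corrupted = b"file area network frame payl0ad"  # single-character error
assert zlib.crc32(corrupted) != crc
print("payload CRC-32:", hex(crc))
```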
CSMA/CD Carrier Sense Multiple Access with Collision Detection
defines how Ethernet NICs behave when two or more attempt to use a
shared segment at the same time.
CWDM Coarse Wave Division Multiplexer. See also WDM and DWDM.
Dark Fiber A leased fiber optic cable running between sites characterized by not having a service provided on the wire by the leasing
company. All services are provided by the customer.
DAS Direct Attached Storage is the method of connecting a single
storage device directly to one and only one host. In the enterprise data
center, DAS is obsolete and has been replaced by storage networking.
DFS Microsoft Distributed File System consists of software residing on
network servers and clients that transparently links shared folders on
different file servers into a single namespace for improved load balancing and data availability. NOTE: This should not be confused with
the Transarc DFS product, which is a separate distributed file system
that is not interoperable with the Microsoft product.
DFS link Located under the DFS root. The DFS link forms a connection
to one or more shared folders, DFS links, or another DFS root.
DFS namespace The logical view of shared file resources seen by
users and the roots and links which make up the physical view. See
also Logical namespace and Physical namespace.
DFS root The share at the top of the DFS topology. The starting point
for the DFS links and shared folders that make up the namespace.
Directives file In Brocade UNCUpdate, specifies the search criteria
used to search for files; must be specified when you run UNCUpdate in
modify no action or modify mode. (The file is optional when you run
UNCUpdate in search mode.)
Display name The default value or name for a user that appears in the
display name field in Active Directory or the default name of a folder or
share and the DNS name of a machine. When you want the
namespace to show the display name of an item, select the appropriate item, for example, Grantor_Display Name.
Display name rule Criteria you specify to change the display name of a
user, share, folder, or machine.
DoS Denial of Service attacks may be launched deliberately by a
hacker or virus, or may happen accidentally. Either way, the result is
downtime. The best approach for preventing DoS attacks in a SAN is to
follow security best-practices, and to use redundant (A/B) fabrics.
Domain-based root A DFS root that has its configuration information
stored in Active Directory. The root can have multiple root targets,
which offers fault tolerance and load sharing at the root level.
DWDM Dense Wave Division Multiplexer. Allows more wavelengths
than a CWDM. See also WDM and CWDM.
Dynamic root Roots created by MyView on the host servers you specify. MyView creates the roots to optimize the namespace by reducing
the number of links in the entry point root and consolidating repeated
link patterns. For example, if a namespace will contain more links in
the entry point root than DFS allows, MyView automatically creates
another root to balance the links between the entry point root and the
dynamically created root.
Entry point root The main root of your DFS namespace. The root can
be standalone or domain-based. It can be hosted on Windows 2000,
Windows Server 2003, or an NT4 server. The root must be empty.
Roots dynamically created by MyView to scale the namespace are referenced by this root.
Ethernet The basis for the widely implemented IEEE 802.3 standard.
Ethernet is a LAN protocol that supports data transfer rates of 10 Mb/
sec. It uses CSMA/CD to handle simultaneous access to shared
media. Fast Ethernet supports data transfer rates of 100 Mb/sec,
Gigabit Ethernet supports 1 Gb/sec, and there is also an emerging 10
Gb/sec standard.
FC Fibre Channel is the protocol of choice for building SANs. Unlike IP
and Ethernet, FC was designed from the ground up to support storage
devices of all types.
FPGA Field Programmable Gate Arrays are similar to ASICs, except
that their hardware logic is not fixed. It is possible to reprogram an
FPGA in the field. Generally more expensive and possibly slower than
an ASIC, but more flexible.
Frame Data unit containing a Start-of-Frame (SoF) delimiter, header,
payload, CRC and an End-of-Frame (EoF) delimiter. The payload can be
from 0 to 2112 bytes, and the CRC is 4 bytes. When operating across
EX_Ports, the maximum payload is 2048 bytes.
FRS The File Replication service (FRS) is a multi-threaded, multi-master replication engine that replaces the LMREPL (LanMan Replication)
service in the 3.x/4.0 versions of Microsoft Windows NT. Windows
2000 domain controllers and servers use FRS to replicate system policy and login scripts for Windows 2000 and down-level clients. FRS can
also replicate content between Windows 2000 servers hosting the
same fault-tolerant Distributed File System (DFS) roots or child node
replicas.
Hot Swappable Component that can be replaced while the system is
under power.
IEEE Institute of Electrical and Electronics Engineers defines standards used in the computer industry.
IETF Internet Engineering Task Force is the group that develops protocols for the Internet.
ILM Information Lifecycle Management is the concept that information can be matched to resources most appropriate for its value at any
point in its lifetime.
IP Internet Protocol is the addressing part of TCP/IP.
IPSec Internet Protocol Security is a set of protocols that provide network layer security. This is often used to create VPNs. It may be used to
authenticate nodes, encrypt data, or both.
LAN Local Area Network; a network where transmissions are typically
under 5km.
Latency The period of time that a frame is held by a network device
before it is forwarded, and the time that it sits on a cable between
devices. (The latter is usually only significant on long distance links.)
Logical namespace The view of the namespace that users see when
they access their data through the namespace.
LUN Logical Unit Numbers are used to identify different SCSI devices
or volumes that all have the same SCSI ID. In Fibre Channel, LUNs differentiate devices or volumes that share a WWN/PID address.
MAN Metropolitan Area Networks typically cover longer distances than
LANs, but shorter distances than WANs. MANs may connect different
campuses within a city, or between cities in a closely linked group. The
size tends to be that of a metropolitan region: they can be tens of
miles in radius, but cannot generally be more than a hundred miles or
so. They tend to be provided by a single carrier from end-to-end,
whereas WANs may involve different carriers.
MMF Multimode Fiber is a fiber-optic cabling specification that allows
up to 500-meter distances between devices. MMF cables can have
either 50 or 62.5 micron optical cores. Generally used with SWL
media.
MTBF Mean Time Between Failures is the average time between the
failure of any component in a system. This equates to how often service needs to be performed, and does not necessarily refer to how
often availability is impacted.
MTTR Mean Time To Repair is the average amount of time it takes to
repair a failed component.
Multicast Transmitting to a set of nodes on a fabric. More than one
(which would be unicast) and less than all (which would be broadcast).
This is often used in video applications.
Multiprotocol A device capable of using more than one protocol. For
example, a router that has both Ethernet and Fibre Channel interfaces
would be a multiprotocol router.
Namespace See DFS namespace.
Namespace path The location of an item in the namespace. The following example is the location of “LinkA” in the logical folder “L” in the
root “MyViewRoot” hosted on the server “”
NAS Network Attached Storage is a common name for network file
system (usually CIFS and/or NFS) servers that are specially optimized
for that task. Often the only difference between a NAS storage device
and a UNIX NFS server, for example, is packaging.
NIC Network Interface Cards connect a host’s bus to a network.
OEM Original Equipment Manufacturers buy Brocade products, integrate them with other storage products such as disk drives, tape
libraries, and hosts, and resell them under their own brand names.
Physical namespace The view of a namespace that shows the configuration of the namespace, such as roots, links, and logical folders.
Publish The process by which MyView generates a namespace. The
first time a namespace is generated, MyView creates all the roots and
links that make up the namespace. Subsequent changes to the
namespace are incremental. Only the roots and links that change are
updated in the namespace.
QoS Quality of Service is a somewhat generic term that can refer to a
mechanism that can guarantee priority, bandwidth, latency, error rate,
and similar characteristics for the network path between nodes.
RAID Redundant Array of Independent (formerly Inexpensive) Disks. A
set of disks that looks like a single volume. There are several RAID levels used to solve different storage performance, scalability, and
availability problems. Most are fault-tolerant and/or high performance.
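To make the capacity trade-offs between RAID levels concrete, here is a small sketch of usable capacity (general RAID arithmetic, not specific to any product in this book; the function name is ours):

```python
# Usable capacity for common RAID levels, given identical disks.
def usable_tb(level, disks, size_tb):
    if level == "RAID 0":    # striping only, no redundancy
        return disks * size_tb
    if level == "RAID 1":    # mirroring: half the raw capacity
        return disks * size_tb / 2
    if level == "RAID 5":    # striping plus one disk's worth of parity
        return (disks - 1) * size_tb
    if level == "RAID 6":    # dual parity: two disks' worth of overhead
        return (disks - 2) * size_tb
    raise ValueError("unknown RAID level: " + level)

# Eight 1 TB disks yield very different usable totals per level:
for level in ("RAID 0", "RAID 1", "RAID 5", "RAID 6"):
    print(level, usable_tb(level, 8, 1.0), "TB usable")
```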
RAS Reliability Availability and Serviceability collectively refers to the
overall quality of a component, device, or network. Factors that influence RAS include things like the MTBF and MTTR of components,
software architecture, and redundant deployment strategies.
Redundancy Having multiple occurrences of a component to maintain
high availability
Relationship The association based on permissions between users/
groups and the shares and folders to which the users/groups have
access. A view must specify one relationship for MyView to calculate
the namespace.
RETMA Radio Electronics Television Manufacturers Association in the
context of storage networks is a standard specification for data center
racks. A typical rack-mountable network device is designed to fit into a
standard 19 inch RETMA rack, and its height is usually referred to in
terms of RETMA “rack units.” (Each unit is about 1.75 inches for
obscure historical reasons.)
Root replica See Root target.
Root target The share that contains the configuration of a root.
Domain-based roots can contain multiple root targets. The configuration of a domain-based root is replicated on all its root targets for fault
tolerance. A standalone root contains only one root target.
Router Device for interconnecting at least two different networks into
an internetwork.
SAN Storage Area Networks link computing devices to disk or tape
arrays. Almost all SANs at the time of this writing are Fibre Channel.
SCSI Small Computer Systems Interface as originally defined was a
family of protocols for transmitting large blocks of data over distances
of up to 15-25 meters. SCSI-2 and SCSI-3 are updated versions of this.
As the direct attachment of storage moved to a network model, SCSI
has been mapped to protocols such as FC and IP.
SCSI Inquiry SCSI command that generally causes a target to respond
with a string telling the requester information such as its make, model,
and firmware version. Used by the SNS to further identify Fibre Channel devices. The iSCSI Gateway Service inserts IP and IQN strings using
this SNS field.
Server pool Contains the servers that will host dynamic roots. When a
pool contains multiple servers, the root configuration of each dynamic
root hosted on a server is replicated on all the other servers in the
pool, providing fault tolerance.
Shares and folder collection Shows all the shares and/or folders in a
machine or domain you specify.
SilkWorm Registered trademark for the Brocade family of switches,
directors, and routers. Use of this term has been discontinued; for
example, the SilkWorm 48000 Director is now simply referred to as
the Brocade 48000 Director.
SMF Single Mode Fiber is a cabling specification that allows 10km or
even greater distances. SMF cables have nine micron optical cores.
Generally used with either LWL or ELWL media.
SONET/SDH Synchronous Optical Networks are used in MANs and
WANs. FC can be mapped to SONET/SDH. Characterized by high performance and reliability. The analogous service is called SDH in other
parts of the world.
Standalone root A DFS root that has its configuration information
stored locally on the host server. The root has a single root target.
When the root target is unavailable, the namespace is inaccessible.
Data targeted by the namespace is still reachable either by virtue of
cached DFS referrals or direct access to the storage resources.
Storage Device used to store data, such as disk or tape.
Storage subsystem See Subsystem.
Storage virtualization See Virtualization.
Subsystem Synonym for storage device. Often external. On a SAN,
may be shared between many compute nodes.
SWL Short Wavelength Laser transceivers based on 850nm lasers are
designed to transmit short distances. This is the most common type of
media.
T11 ANSI committee chartered with creating standards for data movement to/from central computers.
Tapestry Trademark for a Brocade family of upper-layer products,
which included FAN products. Use of this term has been discontinued.
Target Disk array or a tape port on a SAN.
TCP/IP Transmission Control Protocol over Internet Protocol is the communication method for the Internet.
TCP Transmission Control Protocol is a connection-oriented protocol
responsible for dividing a message into packets, passing them to IP,
and reassembling them into the original message at the other end. It
detects errors or lost data and triggers retransmission as needed.
TCP Port Addressing component that allows nodes to access a specific service for a given address. There are many well-known ports that
allow standard upper-layer protocols like HTTP to work: Web servers
know to listen on the same port, and Web clients know to attach to
that port.
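A minimal loopback sketch of the idea: a service listens on a port, and a client attaches to that same port to reach it (the names and message are illustrative):

```python
import socket
import threading

# A server binds a TCP port; port 0 asks the OS for any free port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()
    conn.sendall(b"hello from port %d" % port)
    conn.close()

t = threading.Thread(target=serve)
t.start()

# The client knows which port to attach to in order to reach the service.
client = socket.create_connection(("127.0.0.1", port))
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply.decode())
```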
Topology The physical, logical, or phantom arrangement of devices in
a networked configuration.
Transceiver Device that converts one form of signaling to another for
transmission and reception. Fiber-optic transceivers convert from optical to electrical.
Tunneling Technique for making networks interact where the source
and destination are on the same type of network, but there is a different network in between.
ULP Upper-Level Protocols run on top of FC through the FC-4 layer.
Examples include SCSI, IP, and VI.
Unicast Sending a frame between just two endpoints. Distinguished
from broadcast and multicast, where one transmitter has multiple
receivers.
Virtualization The manipulation and abstraction of devices such as
disks and tapes. Generally performs functions above and beyond
those performed by traditional RAID applications. Examples include
LUNs that span multiple subsystems, LUN over-provisioning where the
size of a presented LUN is larger than the underlying storage capacity,
and application-invisible data replication and migration. FANs virtualize file systems.
VLAN Virtual Local Area Networks allow physical network devices to be
carved into smaller logical segments. This is done in IP/Ethernet networks to prevent broadcast storms from creating instability.
VPN Virtual Private Networks use encryption to create tunnels through
public networks, so devices on either end appear to be located on a
physically isolated network.
VRRP Virtual Router Redundancy Protocol allows one router to take
over the duties of another in the event of a failure. It can be thought of
as a router clustering method.
WAN Wide Area Networks span cities, states, countries, and even continents. They tend to have higher delays due to their longer distances.
WANs are often used by storage networks in disaster tolerance or
recovery solutions.
WAFS Wide Area File Services is a product designed to optimize cost,
manageability, reliability, and performance of storage volumes in
branch offices.
WDM Wavelength Division Multiplexers allow multiple wavelengths to
be combined on a single optical cable.
xWDM See DWDM and CWDM.