Business Continuity with Microsoft® Windows Server® 2008
Updated Edition
Compliments of Acronis

“Our understanding of the status quo – how other backup utility companies made their products work – was reversed when Acronis came to our attention. Although they have an imaging product, they didn’t say they were in business for imaging, but instead said they offered a qualified enterprise solution for backup that can compete against the big boys. They always give us a timely response when we contact them, even though we’re a smaller company.”
— Brian Bouchard, IT manager of Tocci Building Corp., one of New England’s largest building contractors

• Easily back up, restore, and migrate physical or virtual servers
• Learn data protection strategies to lower TCO
• Create your cost-effective disaster recovery strategy with Acronis® Backup & Recovery® 10

Danielle Ruest
Nelson Ruest
Microsoft® Windows Server® 2008: The Complete Reference
Second Edition
Danielle Ruest
Nelson Ruest
New York Chicago San Francisco Lisbon London Madrid Mexico City Milan New Delhi San Juan Seoul Singapore Sydney Toronto
McGraw-Hill books are available at special quantity discounts to use as premiums and sales
promotions, or for use in corporate training programs. To contact a representative, please e-mail
us at bulksales@mcgraw-hill.com.
Microsoft® Windows Server® 2008: The Complete Reference, Second Edition
Copyright © 2010 by The McGraw-Hill Companies. All rights reserved. Printed in the United
States of America. Except as permitted under the Copyright Act of 1976, no part of this
publication may be reproduced or distributed in any form or by any means, or stored in a
database or retrieval system, without the prior written permission of the publisher, with the
exception that the program listings may be entered, stored, and executed in a computer
system, but they may not be reproduced for publication.
All trademarks or copyrights mentioned herein are the possession of their respective owners and
McGraw-Hill makes no claim of ownership by the mention of products that contain these marks.
Copyright © 2000-2009 Acronis, Inc. All rights reserved. “Acronis,” “Acronis Compute with
Confidence,” “Acronis Backup & Recovery,” “Universal Restore,” “Secure Zone,” “Active
Restore” and the Acronis logo are trademarks of Acronis, Inc. Windows is a registered trademark
of Microsoft Corporation. Other mentioned names may be trademarks or registered trademarks
of their respective owners and should be regarded as such. Technical changes and differences
from the illustrations are reserved; errors are excepted. 2009-07
e-ISBN: 978-0-07-175012-7
e-MHID: 0-07-175012-6
Sponsoring Editor: Megg Morin
Technical Editor: Bob Kelly
Composition: Glyph International
Editorial Supervisor: Patty Mon
Copy Editor: Lisa McCoy
Project Manager: Madhu Bhardwaj, Glyph International
Proofreaders: Bhavna Guptna, Susie Eklind
Illustration: Danielle Ruest, Resolutions Enterprises Ltd. and Glyph International
Acquisitions Coordinator: Jennifer Housh
Production Supervisor: Jean Bodeaux
Art Director, Cover: Jeff Weeks
Information has been obtained by McGraw-Hill from sources believed to be reliable. However, because of the possibility of
human or mechanical error by our sources, McGraw-Hill, or others, McGraw-Hill does not guarantee the accuracy, adequacy,
or completeness of any information and is not responsible for any errors or omissions or the results obtained from the use of
such information.
About the Authors
Danielle Ruest is a workflow architect and author focused on people
and organizational issues for large IT organizations. During her
20-year career, she has led change management processes, developed
and delivered training, and managed communications programs
during a variety of projects. She has extensive experience in all aspects
of information technology and related business processes. Her focus is
meeting user and business needs with enterprise technologies. She
has been involved in virtualization technologies and projects for more
than 10 years. She is very familiar with enterprise technologies such
as directories, messaging systems, data storage, and collaboration
systems. Ms. Ruest is also involved as a freelance writer for several IT
publications.
Nelson Ruest is a senior enterprise architect and futurist. He has
extensive experience in migration planning and architectural design.
He was one of Canada’s first Microsoft Certified Systems Engineers
and Microsoft Certified Trainers. Mr. Ruest’s focus is on infrastructure
planning, including systems design, administration, and management
of complex networks and network or service architectures. He has
been instrumental in the creation of networks that provide back-end
services to organizations of all sizes. Mr. Ruest has been involved
in virtualization for more than 10 years and is recognized as an
authority on the subject. He has experience with both the private and
the public sectors. He is very familiar with network management,
intra- and extranet configurations and use, electronic mail, and data
systems, as well as office automation. Mr. Ruest is also involved as a
freelance writer for several publications, as well as a public speaker
for conferences.
Together, the Ruests are authors of more than a dozen books on
topics such as virtualization, Microsoft Hyper-V, Active Directory,
Exchange, Windows Server, Windows Vista, and more. Find out about
their published titles at www.reso-net.com/livreliste.asp?m=10.
Danielle and Nelson work for Resolutions Enterprises, Ltd.
(www.Reso-Net.com), a consulting firm focused on helping people
make more of IT technologies. With more than 20 years’ experience
in the computing industry, Resolutions Enterprises has developed a
series of procedures that help organizations master their IT systems.
This forms a methodology that covers everything from best practices
in systems administration, to establishing strategic alliances,
managing migration projects, and implementing an evolutionary
vision for an organization’s information systems.
Contents
Introduction

Build for Business Continuity
    Plan for System Redundancy
        Protect the Resource Pool
        Protect the Virtual Service Offerings
    Prepare for Potential Disasters
        Use Windows Server Redundancy Services
        Cluster Services for Resource Pools
        Cluster Services for Virtual Service Offerings
    Network Load Balancing
        Multicast Versus Unicast Modes
        Single Affinity Versus No Affinity
        Install and Configure NLB Clusters
    Windows Server Failover Clustering
        Cluster Compatibility List
        Server Cluster Concepts
        Cluster Configurations
        Geographically Dispersed Clusters
        Cluster Shared Volumes
        Resource Pool Failover Cluster Considerations
    Further Server Consolidation
    Recovery Planning for Your Network
        Recovery Strategies for Windows Server 2008 R2
        System Recovery Strategies
        Troubleshooting Techniques
        Data Protection Strategies for Resource Pools
        Data Protection Strategies for Virtual Service Offerings
        Select a Third-Party Backup Tool
    Finalize Your Resiliency Strategy

Acronis and Virtualization
    The Virtues of Virtualization
        Why Virtualize?
        Physical to Virtual
        Virtual to Virtual
        Virtual to Physical
    Using Storage Virtualization
    Virtualization in an SMB Environment
        Planning for Virtualization
    The Acronis Approach
        What Is Acronis Backup & Recovery 10?
This eBook reflects the contents of Chapter 11 as presented in Microsoft® Windows Server®
2008: The Complete Reference in addition to an appendix by Acronis® on proven best
practices in backup and disaster recovery. Acronis makes it possible for your organization
to fully embrace virtualization by lowering administration overhead and simplifying the
job of protecting your digital assets.
Introduction
Microsoft has parlayed its Windows operating system (OS) into the most popular
operating system on the planet, despite the best efforts of its competitors. This
applies as much to the desktop as to the server operating system. Now, with the
release of an updated version of its flagship server OS, Windows Server 2008 Release 2
(WS08R2), Microsoft introduces a new benchmark in ease of use, integrated management
capabilities, complete security, and simplicity of deployment, as well as interaction with
other operating systems, such as UNIX and Linux. Make no mistake: Microsoft has invested
heavily in WS08R2 and has delivered a rock-solid foundation for any network, physical
or virtual.
One significant area Microsoft invested in is business continuity strategies, especially for
the virtualization engine included in the operating system, Hyper-V. With significant disasters occurring everywhere on the planet—hurricanes, earthquakes, tidal waves, and more—organizations are increasingly concerned about how they can ensure that their operations continue despite potential disasters.
This is the purpose of this eBook: to help your organization enhance its disaster-prevention
strategies when moving to Windows Server 2008 R2. In fact, this eBook builds upon three
key pillars of business continuity:
1. The source of the content contained in this book is Microsoft Windows Server 2008:
The Complete Reference, and it outlines several system protection strategies. It guides
organizations in the design of their network services based on Windows Server
2008, and it has been updated to include WS08R2 content.
2. The content included herein is entirely focused on system protection strategies,
both through the high availability and the backup and restore features included in
Windows Server 2008 R2.
3. The appendix provided by Acronis outlines proven best practices you can rely on to
further protect all of your systems with Acronis technology.
By employing these three strategies, you will be ready to meet the challenges of disaster
recovery so that your operations can continue. In addition, you will be taking advantage
of the most advanced IT strategies to build a secure and highly available network services
infrastructure.
The Source of This eBook
This eBook, “Build for Business Continuity,” is an extract from Microsoft Windows Server
2008: The Complete Reference, published by McGraw-Hill and available in bookstores
everywhere. In fact, it is based on the content of Chapter 11 from this book. Unlike other
books on Windows Server 2008, The Complete Reference does not expound on each and every
feature of Microsoft’s new server OS. Since WS08 is a server operating system, this book is
structured around the strategy you would use to build a network from the ground up,
relying on the latest and greatest features offered by the new OS. As such, it is divided into
seven parts, each focused on one aspect of the implementation of a new server OS or a
migration from an existing OS to the new OS. The seven parts include:
• Part I: Tour Windows Server 2008, which covers the new feature set of Windows
Server 2008, as well as the interface changes built into the OS.
• Part II: Plan and Prepare, which helps you plan your network migration and begin
the server preparation process through a description of the new imaging and staging
capabilities in WS08.
• Part III: Design Server Roles, which provides guidelines for the elaboration of
network services, such as Active Directory, Internet, and remote connectivity, as
well as outlining how you put these core services in place.
• Part IV: Manage Objects with Windows Server 2008, which outlines the management
strategies you should use with WS08 to maintain and offer services to computers,
users, and services within your network.
• Part V: Secure Windows Server 2008, which focuses on the critical security elements
each network must put in place to protect the assets it contains. Even though this
section deals specifically with security, standard network security concepts are used
throughout the book.
• Part VI: Migrate to Windows Server 2008, which focuses on how to migrate existing
network components to a WS08-based infrastructure.
• Part VII: Administer Windows Server 2008, which provides a comprehensive set of
tasks for daily, weekly, and monthly administration of a WS08-based network.
Preparing a network is a complex process—even more so now that Windows is in its third
post-NT edition. With Windows NT, decisions were relatively simple because the choices
were limited. But with Windows Server 2008, this is no longer the case. It’s not surprising,
since the network has evolved from a loosely coupled series of servers and computers to
an integrated infrastructure providing and supporting the organization’s mission. This
evolutionary process is not unlike that of the telephone. At first, telephone systems were
loosely coupled. Today, worldwide telecommunications systems have converged with
Internet-based systems and are now much more complex and complete.
Similarly, networks are now mission-critical. The new organizational network has
become a secure, stable, redundant infrastructure that is completely oriented towards the
delivery of information technology services to its client base. These services can range from
simple file and print systems to complex authentication systems, collaboration environments,
or application services. In addition, these services can be made available to two differing
communities of users: internal users, over whom you have complete control of the PC, and
external users, over whom you have little or no control.
That’s why moving or migrating to Windows Server 2008 is much more of a network
infrastructure design project than one dealing simply with upgrading to a new technology.
Each time you change a technology that is as critical as the OS of your network, it is
important, if not essential, to use a complete and structured process, one that is outlined in
The Complete Reference.
To do this, The Complete Reference relies on some key processes:
• First, it outlines how to build a new network and then migrate from your existing
environment to the newly updated network infrastructure. This “parallel network”
approach simplifies the migration process and ensures that all of the technologies
you implement with Windows Server 2008 are implemented in their native mode,
taking full advantage of the new operating system’s capabilities.
• Second, it provides you with a structure for the implementation of 64-bit hardware.
Microsoft has made it official: This release of Windows Server 2008 is the first release
that only supports 64-bit hardware. This means you must prepare your move carefully
and learn to rely exclusively on 64-bit hardware.
• Third, it highlights how Windows Server 2008 R2 can work with legacy client
operating systems, but also points out where you can gain if you rely on Windows 7
as the client OS of choice. It is true: Windows 7 and Windows Server 2008 R2 work
better together, and this book highlights where and how.
• Finally, this book lets you move to the dynamic datacenter. With the release of
Windows Server 2008, Microsoft introduced its flagship virtualization technology:
Hyper-V. The integration of a virtualization layer, or hypervisor—the engine that
runs virtualization services—directly in the operating system has revolutionized
and popularized the use of server virtualization technologies in Windows shops
around the world. Such an important change must be an integral part of the new,
parallel network you build with WS08. Hyper-V has been enhanced significantly
since its original release, and especially with the release of Windows Server 2008 R2.
This latter aspect of Windows Server 2008 R2 has the most significant impact on IT datacenters and may be the major reason why organizations will choose to move to this new OS.
Build the Dynamic Datacenter
A dynamic datacenter is one where all resources are divided into two categories:
• Resource pools consist of the hardware resources in your datacenter. These hardware
resources are made up of the server hardware, the network switches, and the power
and cooling systems that make the hardware run.
• Virtual service offerings consist of the workloads that each hardware resource
supports. Workloads are virtual machines that run on top of a hypervisor—a code
component that exposes hardware resources to virtualized instances of operating
systems.
In this datacenter, the hardware resources in the resource pool are host systems that can run
between 10 and 20 guest virtual machines that make up the virtual service offerings.
This approach addresses resource fragmentation. Today, datacenters that are not running virtual machines most often have utilization ratios that range from 5 to perhaps 15 percent of their actual resources. This means that each physical instance of a server
is wasting more than 80 percent of its resources while still consuming power, generating heat,
and taking up space. In today’s green datacenters, you can no longer afford to take this
approach. Each server you remove from your datacenter will save up to 650,000 kilowatt-hours per year. By turning your hardware resources into host systems, you can now recover
those wasted resources and move to between 65 and 85 percent utilization. In addition, the
dynamic datacenter will provide you with the following benefits:
• High availability Virtual workloads can be moved from one physical host to
another when needed, ensuring that the virtual service offering is always available
to end users.
• Resource optimization By working with virtual workloads, you can ensure that
you make the most of the hardware resources in your datacenter. If one virtual
offering does not have sufficient resources, fire up another hardware host and move
it to that one, providing the required resources when the workload demands it.
• Scalability Virtualization provides a new model for scalability. When your
workloads increase, you can add the required physical resources and control growth
in an entirely new manner.
• Serviceability Because of built-in virtualization features, your hosts can move one
virtual workload from one host to another with little or no disruption to end users.
This provides new serviceability models where you can manage and maintain
systems without causing service disruptions.
• Cost savings By moving to virtualization, you will earn savings in hardware
reductions, power reductions, and license cost reductions.
The result is less hardware to manage and a leaner, greener datacenter.
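To make the consolidation arithmetic above concrete, here is a minimal sizing sketch in Python (the function name and sample numbers are illustrative assumptions, and utilization is treated as the only constraint; in practice, RAM, I/O, and failover headroom matter just as much):

```python
import math

def hosts_needed(physical_servers: int,
                 current_utilization: float,
                 target_utilization: float) -> int:
    # Total useful work = servers x current utilization;
    # repack that work onto hosts driven at the target level.
    total_load = physical_servers * current_utilization
    return math.ceil(total_load / target_utilization)

# 100 servers idling at ~10 percent, repacked onto hosts at ~75 percent:
print(hosts_needed(100, 0.10, 0.75))  # -> 14 hosts
```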
Run Physical or Virtual Machines
With the release of Windows Server 2008 R2 and its embedded virtualization technology, you
need to rethink the way you provide resources and build the datacenter. With the advent of
powerful new 64-bit servers running either WS08R2 Enterprise or Datacenter edition, it has
now become possible to virtualize almost every server role, with little or no difference in
performance, especially if you base your host server builds on the Server Core installation of
the OS. Users do not see any difference in operation, whether they are on a virtual or physical
machine. And with the advent of the hypervisor built into WS08R2, the virtual-versus-physical process becomes completely transparent. That’s because unlike previous Microsoft
virtualization technologies, which actually resided over the top of the operating system, the
new hypervisor resides below the operating system level (see Figure I-1).

FIGURE I-1 The two different virtualization models

To further support the move to the dynamic datacenter, Microsoft has changed the licensing mode for virtual instances of Windows Server. This change was first initiated with Windows Server 2003 (WS03) R2. In WS03R2, running an Enterprise edition version on the host system automatically grants four free virtual machine (VM) licenses of WS03R2 Enterprise edition (EE). Add another WS03R2 EE license, and you can build four more VMs.
On average, organizations will run up to 16 virtual machines on a host server, requiring only
four actual licenses of WS03R2 EE.
Microsoft continues this licensing model with WS08 and WS08R2. The first Enterprise edition license grants one license for the host and four licenses for VMs. Each additional Enterprise edition license grants four more VM licenses. If you purchase the WS08R2 Datacenter edition,
you can run an unlimited number of VMs on that host. Remember also that the licenses
for VMs support any version of Windows Server. This means you can run Windows NT,
Windows 2000, or Windows Server 2003, as well as WS08.
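As a quick sketch of this licensing arithmetic (based on the four-VMs-per-Enterprise-license rule described above; the function is illustrative, and you should verify current Microsoft licensing terms before applying it):

```python
import math

def enterprise_licenses_needed(vm_count: int) -> int:
    # Each WS08R2 Enterprise edition license covers four VM instances;
    # the first license also covers the host installation itself.
    return max(1, math.ceil(vm_count / 4))

print(enterprise_licenses_needed(16))  # -> 4 licenses, as noted above
```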
Virtualization provides great savings and decreases general server provisioning timelines,
as well as reducing management overhead. For example, one system administrator can
manage well over 100 virtualized servers, as well as the hosts required to run them.
TIP For a complete step-by-step outline on how to install and deploy Hyper-V into your
organization, refer to Configuring Windows Server Virtualization by Ruest and Ruest.
Find more information at http://www.reso-net.com/livre.asp?p=main&b=Hyper-V.
Push the Virtualization Envelope
WS08R2 is designed from the ground up to support virtualization. This means that you have
the opportunity to change the way you manage servers and services. With the WS08R2
hypervisor, Hyper-V, there is little difference between a machine running physically on a
system and a machine running in a virtual instance. That’s because the hypervisor does the
same thing as a physical installation would by exposing hardware to VMs. The real difference
between a physical installation and a VM running on the hypervisor is access to system
resources. That’s why we propose the following:
• The only installation that should be physical is the hypervisor, or the Windows
Server Hyper-V role. Everything else should be virtualized.
• Instead of debating whether service offerings—the services that interact with end
users—should be physical versus virtual installations, make all of these installations
virtual.
• The only installation that is not a VM is the host server installation. It is easy to keep
track of since this one single installation is different from others.
• It takes about 20 minutes or less to provision a VM-based new server installation,
which is much quicker than that of a physical installation.
• Creating a source VM is easier than creating a source physical installation because
you only have to copy the files that make up the VM.
• The difference between a traditional “physical” installation and a virtual installation
is the amount of resources you provide the VM running on top of the hypervisor.
• All backups are the same—each machine is just a collection of files, after all.
In addition, you can take advantage of the Volume Shadow Copy Service to
protect each VM.
• All service-offering operations are the same because each machine is a VM.
• Because all machines are virtual, they are transportable and can be moved easily
from one host to another.
• Because VMs are based on a set of files, you can replicate them to other servers, providing a quick and easy means of recovery in the event of a disaster (see the sketch following this list).
• You can segregate the physical and virtual environments, giving them different
security contexts and making sure they are protected at all times.
• You can monitor each instance of your “physical” installations, and if you see that
one is not using all of the resources you’ve allocated to it, you can quickly recalibrate
it and make better use of your physical resources.
• Every single new feature can be tested in VMs in a lab before it is put into production.
If the quality assurance process is detailed enough, you can even move the lab’s VM
into production instead of rebuilding the service altogether.
• You are running the ultimate virtual datacenter because all systems are virtual and
host systems are nothing but resource pools.
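Because a VM is nothing more than a set of files, even its replication can be sketched in a few lines (a trivial illustration with hypothetical paths, not a production replication mechanism; real deployments would use backup or replication tools):

```python
import shutil
from pathlib import Path

def replicate_vm(vm_folder: str, recovery_host_share: str) -> None:
    # Copying the folder that holds the VHD and configuration files
    # gives a recovery host a usable copy of the machine.
    source = Path(vm_folder)
    target = Path(recovery_host_share) / source.name
    shutil.copytree(source, target, dirs_exist_ok=True)

# Hypothetical paths, for illustration only:
replicate_vm(r"D:\VMs\FileServer01", r"\\RecoveryHost\VMReplicas")
```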
With this in mind, The Complete Reference divides the tasks of preparing your new network
into resource pool and virtual service offering tasks. To facilitate this, the inside front cover
includes a Resource Pool table of contents that lets you move directly to the content that will
help you prepare hardware resources. This facilitates your move to the dynamic datacenter.
TIP Microsoft offers the Microsoft Assessment and Planning Solution (MAPS), which will
automatically scan your network to propose migration and virtualization strategies. Look for
“Microsoft Assessment and Planning” on www.microsoft.com/downloads.
Change Business Continuity Approaches
Running a dynamic datacenter lets every organization—from small to large—finally
address business continuity strategies that were formerly beyond their reach. Windows
Server 2008 R2 includes a multitude of features that are designed to support the dynamic
datacenter and provide full support for the business continuity strategies inherent in such
an environment. When your service offerings are virtualized, they become nothing more
than a set of files on a folder somewhere. This means you can rely on backup or replication
technologies to create secured copies of all your virtual machines in a remote location.
FIGURE I-2 Updated datacenter architectures based on virtualization technologies
You can also rely on WS08R2’s failover clustering features to provide guaranteed service
availability within the local or even remote datacenters. This lets you create layers of
services in your new datacenter (see Figure I-2).
But service availability alone is not enough. In order to have a complete protection strategy,
you must supplement WS08R2’s built-in features with additional tools—tools that will provide
support for other necessary operations in the dynamic datacenter. This is where Acronis comes
in. Through its high-availability toolset, Acronis provides you with the ability to manage
backups from any source—physical or virtual—and transform these backups into the core
elements that drive the dynamic datacenter. With the Acronis toolset, your backups can be
stored in a single location, which can then become the source of the transformation of your
datacenter. Acronis backups can be captured from either physical or virtual sources, and can be
restored to any target, once again, either physical or virtual. This transforms your backup tool
into a physical to virtual (P2V) (and vice versa) conversion engine. Such P2V, V2V, or V2P
engines form an essential part of tomorrow’s dynamic datacenter.
Together, Microsoft Windows Server 2008: The Complete Reference, the “Build for Business
Continuity” eBook, and the Acronis toolset can help you move to a new IT infrastructure—
one that can give you control over the availability of the services you deliver, once and for all.
Build for Business Continuity
A significant element of security is system resiliency: ensuring that your services will
not fail, even in the event of a disaster or a security breach. Several elements of
system resiliency have already been covered:
• Active Directory Domain Services Resiliency here is created through the
distribution of domain controllers throughout your network. It is also based on
the multimaster replication system and the creation of an appropriate replication
topology. And, with the Active Directory Recycle Bin, you can easily recover lost
information.
• DNS By integrating the Domain Name System (DNS) service with the directory,
you ensure that your network name resolution service will always function because
it has the same resiliency as the directory service.
• DHCP Your address allocation infrastructure also has resilience built in because
of the way you structured it with redundant scopes. In addition, if you place your
Dynamic Host Configuration Protocol (DHCP) servers in different sites, you have
a solution that would continue to work in the event of a disaster.
• Windows Internet Naming Service (WINS) If you’ve decided to use them, your
legacy name resolution servers are also redundant, since the service is offered by the
same servers as the DHCP service.
• Object management infrastructure Your object management structure is also
resilient, since it is based on the organizational unit (OU) structure in the directory
and the directory service offers system resilience.
• Domain Distributed File System (DFS) roots Your file shares are resilient because
they are distributed through the directory, making them available in multiple sites.
They also include automatic failover—that is, if the service fails in one site, it
automatically fails user connections over to the other site. DFS replication ensures
that DFS namespace targets are synchronized at all times.
• Volume shadow copies Your shared files, shared databases, Exchange stores,
and other shared information repositories are also protected through the Volume Shadow Copy feature, which takes system snapshots on a regular basis and even allows users to recover files themselves.
• Remote Desktop Services The Remote Desktop Services (formerly Terminal
Services) servers you deployed offer resilience through the Connection Broker
(formerly Session Broker), which is, in turn, protected through Connection Broker
load balancing.
Despite the fact that several of your systems are resilient, there remain areas that could have
significant impact on your operations if they failed. Remember, one of the main hacker
attacks is Distributed Denial of Service, or DDoS. This type of attack can succeed for two
reasons. First, the server hosting the service is not secured, and second, the service is hosted
by a single server; that is, there is no failover service. You secure your systems through the Castle Defense System. Now you need to add further resiliency to the network through two strategies: system redundancy and system recovery.
Plan for System Redundancy
System redundancy relies on the implementation of methods and measures that ensure
that if a component fails, its function will immediately be taken over by another or, at the
very least, the procedure to put the component back online is well documented and well
known by system operators. Some of the most common administrator headaches are
network security and disaster recovery. It’s not surprising. We’ve all faced disasters, such
as 9/11 and Hurricane Katrina, and we all know just how damaging these events can be to
businesses of any size. In fact, the vast majority of businesses that face a major disaster without a business continuity plan in place go under because they cannot recover from such catastrophic events. These issues are at the very core of any network design. No
matter what you do, you must ensure that your systems are protected at all times.
Once again, the Castle Defense System can help because it is a defense-in-depth
strategy. Layer 1, the data protection layer, helps you identify risk levels because it helps
you determine the value of an information asset. Risk is determined by identifying value
(the importance of an asset) and multiplying it by the risk factor that is associated with it.
The formula looks like this:
Risk = asset value * risk factor
For example, an asset that is valued at $1 million with a risk factor of .2 has a risk value of
$200,000. This means that you can invest up to $200,000 to protect that asset and reduce its
risk factor.
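Expressed as a minimal sketch (hypothetical asset names and values; the same Risk = asset value * risk factor rule as above):

```python
# Risk = asset value * risk factor; the result caps what is worth
# spending to protect each asset.
assets = {
    "customer database": (1_000_000, 0.20),  # (value in $, risk factor)
    "public web content": (50_000, 0.05),
}

for name, (value, factor) in assets.items():
    print(f"{name}: invest up to ${value * factor:,.0f} in protection")
# customer database: invest up to $200,000 in protection
# public web content: invest up to $2,500 in protection
```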
While these calculations can be esoteric in nature, what remains important is to invest the most in the protection of your most valued assets. This is one reason why it is so important to know what you have. Figure 1 is a good reminder of this principle.

FIGURE 1 Information asset categories

Protect the Resource Pool
By focusing on physical protection, or protection of the resource pool, Layer 2, the physical protection layer, also helps you plan for system redundancy. Redundant arrays of inexpensive disks (RAID) and redundant arrays of independent nodes (RAIN), for example, provide direct,
hardware-level protection for your systems. It is also important to include uninterruptible
power supply (UPS) systems at this level. This can either be individual Universal Serial Bus
(USB)-connected UPS devices (for regional servers) or power management infrastructures that
protect entire computer rooms (usually at central sites).
Resource pools also need redundancy. Each of the physical servers playing host to a
virtual service offering (VSO) must have some form of redundancy built in. If a host server
is running 10 to 15 or more VSOs, it must be able to fail these VSOs over to another physical
host in the event of a failure at the hardware level. This means the physical hosts must be
clustered—sharing the VSO workload so that VSOs are available to users at all times. This is
one reason why it is so important to host VSOs on shared storage. Because they are hosted
on a storage structure that each host server has access to, VSOs can be moved from host to
host with little or no impact on users. Such movements can be performed either during
maintenance operations on host servers (live migration) or during host server failures
(cluster failover). Note that the difference between a live migration—a movement of VSOs
from one host to another initiated by an administrator—and a cluster failover—a restart of a
VSO on another host in the cluster—is the impact on end users. In the case of cluster
failovers, the VSOs are restarted because they actually failed on the original host and,
therefore, users experience a minor loss in productivity. Live migrations do not have an
impact on end users because the in-memory contents of a VSO are moved from one host
server to another in the cluster; therefore, user operations are not interrupted. Clustered
host servers provide site-level redundancy (see Figure 2).
Site-level redundancy is also necessary at the regional level. This is why the ideal
regional server will be an all-in-one box that includes at least two physical host servers,
shared storage, and wide area network (WAN) connectivity. By including two host servers,
you can make sure the regional VSOs this infrastructure hosts will always be available (see
Figure 3).
FIGURE 2 Rely on shared storage and connected hosts to provide site-level redundancy.

As mentioned before, site-level redundancy is no longer sufficient. Too many organizations literally lose everything when disaster strikes in an unprotected site. You don’t want all your eggs in the same basket. Fortunately, the advent of virtualization makes it much easier to
provide multisite redundancy. First, you need to build a second datacenter, if it isn’t already
available. This secondary datacenter does not need to host the same resources as your
production environment (see Figure 4). It just needs a modicum of resources—just enough, in
fact, to help you get back on your feet in the case of an emergency. This means it requires a few
host servers attached to shared storage. It also needs central power protection devices and
WAN connectivity.
FIGURE 3 Use all-in-one boxes to provide regional site-level redundancy.

FIGURE 4 Providing multisite redundancy
Service level agreements (SLAs) for disaster recovery are not the same as those for normal
production. This means you can run a reduced infrastructure in the disaster recovery site. You
can rely on a formula to help you determine just how many physical resources your disaster
recovery center will require. The formula looks like this:
Production Resources/Recovery Time = Disaster Recovery Resources
For example, if you are running your infrastructure on 15 physical hosts and you expect
your recovery time to be three hours, you can run the disaster recovery center with five
physical hosts. The lower the recovery time, the more resources you will need to populate
your recovery center.
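The same rule of thumb can be written out as a short sketch (illustrative only; it applies the Production Resources / Recovery Time formula exactly as stated above):

```python
import math

def dr_hosts(production_hosts: int, recovery_time_hours: float) -> int:
    # Production Resources / Recovery Time = Disaster Recovery Resources
    return math.ceil(production_hosts / recovery_time_hours)

print(dr_hosts(15, 3))  # -> 5 hosts, as in the example above
print(dr_hosts(15, 1))  # -> 15 hosts: faster recovery needs more resources
```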
Balance the number of physical resources in your recovery center with the need to reload
critical services. In the event of a disaster, your recovery will require essential services first—
for example, Active Directory Domain Services and DNS—then load secondary services,
DHCP, file and print servers, and so on. Using a graduated approach for the reloading of
services for users will let you bring everything back online in stages and will reduce the
overhead cost of the secondary datacenter. In most cases, the recovery of your virtual service
offerings can follow the structure of the implementation of VSOs in this book.
TIP In many cases, you don’t need to have your own recovery datacenter. Some vendors, such as
SunGard (www.sungard.com), will offer hosting services for secondary datacenters. In most
cases, it is much cheaper to use a hosted recovery center than to build your own.
Protect the Virtual Service Offerings
The redundancy you build into your physical protection layer is only part of the solution.
You’ll need to ensure that you also have service redundancy. That can be accomplished
through service clustering, either at the network level or the server level. Finally, you’ll need
to provide data redundancy. This is done through the elaboration and implementation of
the backup and recovery systems. Here, it will be important to choose the right type of
backup solution, since you need to protect data that is stored not only in the file system, but
also within databases such as the Active Directory Domain Services.
Building redundancy in your systems is valuable only if you know it works. It’s not
enough to be prepared; you need to know that your preparation has value. To do so, you’ll
need to test and retest every single redundancy level you implement in your network. Too
many organizations have made the fatal error of backing up data for years without testing the
recovery process, only to find out that this recovery didn’t work. This is not a myth. It actually
happens. Don’t let it happen to you. Test all your systems and document your procedures. In
fact, this is an excellent opportunity for you to write standard operating procedures.
Prepare for Potential Disasters
There are two types of disasters: natural and manmade. Natural disasters include earthquakes,
tornados, fires, floods, hurricanes, landslides, and more. They are hard to predict and even
harder, but not impossible, to protect against. The best way to protect against these disasters is
to have redundant sites: Your core servers and services are available at more than one site. If
your main datacenter is impaired for any reason, your other site takes over. This is also where
the concept of the failsafe server comes into play. This server is a standby server that is dormant,
but can be activated quickly if required. In the dynamic datacenter, this means providing
redundant resource pool servers and saving copies of each of the VSOs you run in production.
There are also manmade disasters: terrorist attacks, power failures, application failures,
hardware failures, security attacks, and internal sabotage. These attacks are also hard to
predict. Some require the same type of protection as for natural disasters. Others, such as
application and hardware failures and security attacks, can be avoided through the Castle
Defense System.
To determine the level of service protection you need to apply, you can use a service
categorization that is similar to the Layer 1 categorization for data:
• Mission-critical systems These are systems that require the most protection
because interruption of service is unacceptable.
• Mission-support systems These require less protection than mission-critical
systems, but interruptions should be minimized as much as possible.
• Business-critical systems These are systems where short service interruptions are
acceptable.
• Extraneous systems These are deemed noncritical and can have longer-lasting
interruptions.
What many people fail to realize is that the basic infrastructure of your network is, in many cases, part of the mission-critical level, because if it does not work, nothing works.
Use Windows Server Redundancy Services
One of the areas that can add service resiliency is service redundancy. Redundancy services
are, in fact, one of the major improvement areas for Windows Server 2008 Release 2
(WS08R2). Windows Server redundancy services support two types of protection models:
• Network Load Balancing (NLB) This service provides high availability and
scalability for Internet Protocol (IP) services—both Transmission Control Protocol
(TCP) and User Datagram Protocol (UDP)—and applications by combining up to 32
servers in a single cluster. Clients access an NLB cluster or group of servers by
accessing a single IP address for the entire group. NLB services automatically
redirect the client to a working server. This feature is available in all versions of
WS08R2.
• Windows Server Failover Clustering (WSFC) This service provides resiliency
through resource failover: If a resource fails on one cluster member, the resource
is automatically transferred to another member in the cluster and clients are
automatically redirected to that cluster member. Server clusters can be composed of
between two and sixteen nodes. This feature is not available in the Standard and
Web editions of WS08R2.
These redundancy services work together to provide a complete service protection structure
(see Figure 5).
FIGURE 5 Combining protection strategies to improve redundancy
When you build your resiliency solutions based on these two technologies, keep the
following in mind:
• When protecting stateless systems or systems that provide read-only services, rely
on network load balancing.
• When protecting stateful systems or systems that provide read-write services, rely
on failover clustering.
It’s a simple formula: Systems that do not persist data rely on NLB, and systems that persist
data rely on failover clustering.
Cluster Services for Resource Pools
The host servers in the resource pool are designed to persist data by default. That’s because
they store the virtual hard drives (VHDs) that make up the VSOs. Since the VHDs are data files
that are written to as the virtual machine operates, it is imperative that the high-availability
solution be designed to protect this data at all times. We’ve already discussed several ways this
data can be protected, including volume shadow copies and content replication. Now you need
to consider how the service that lets the VSOs run can be rendered highly available.
Since they persist data, host servers must run the Failover Clustering service to provide
support for high availability of the VSOs. This means that when one host machine fails, the
VSOs it runs are automatically failed over to another host server that is part of the cluster.
For this, you must build your host servers accordingly. Take, for example, the following
configuration:
• Host server 1 has 16 gigabytes (GB) of random access memory (RAM). It runs eight
VSOs at 1GB of RAM each.
• Host server 2 has 16GB of RAM. It runs eight VSOs at 1GB of RAM each.
In a clustered configuration, each of the host servers must reserve a spare portion of RAM
in the event of a failover. When the failover occurs, the host that takes on the failed server’s
workload must have enough available RAM to run all of the services required for the
failover to work. This is why each of the host servers in this scenario reserves 8GB of RAM.
Clustered server configurations must, therefore, be planned carefully because they will
need to support each of the servers in the cluster configuration. Also, because the host
servers are physical host servers, they will need special hardware components to connect to
shared storage:
• Small Computer System Interface (SCSI) connectors will let two host servers
connect to shared storage. They may be appropriate for regional office host
server configurations because only two host servers are required. Note that cluster
configurations based on SCSI connectors are becoming rare.
• Fibre channel connectors through host bus adapters (HBA) are appropriate for
clusters of up to 16 nodes.
• Internet Small Computer System Interface (iSCSI) connectors are appropriate
for clusters of up to 16 nodes. In fact, iSCSI connectors, because they are network
connectors and are simpler to implement, are often the preferred connectors
for clustered hosts. That’s because it is easier and cheaper to add multiple network
cards into a host server to provide redundant paths for iSCSI traffic.
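Returning to the two-host RAM example above, a minimal capacity check (a sketch using the scenario’s numbers; the function and parameter names are assumptions) shows why each 16GB host must hold 8GB in reserve:

```python
def can_absorb_failover(host_ram_gb: int, vso_count: int,
                        ram_per_vso_gb: int, hosts: int) -> bool:
    # Each surviving host must carry its own VSOs plus its share of a
    # failed host's VSOs, so that much RAM must stay unallocated.
    own_load = vso_count * ram_per_vso_gb
    failover_share = own_load / (hosts - 1)  # failed host's load, spread out
    return own_load + failover_share <= host_ram_gb

# Two 16GB hosts, each running eight 1GB VSOs: the survivor must be
# able to run all sixteen VSOs, so each host reserves 8GB of RAM.
print(can_absorb_failover(16, 8, 1, 2))  # -> True
```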
Because you are clustering host servers, you need to make sure that you are using the
appropriate edition of Windows Server 2008 R2. This means you need to use either the
Enterprise or Datacenter edition.
NOTE Both the Enterprise and the Datacenter editions include licensing for VSOs; therefore, you
should be working with one of these two editions anyway. Remember that Enterprise is priced
per server and Datacenter is priced per physical processor.
Cluster Services for Virtual Service Offerings
Virtual service offerings are also affected by clustering services because the virtualization
layer of Windows Server Hyper-V fully supports the emulation of shared hardware. NLB
clusters do not need any particular hardware since they rely on network interface cards to
work. Failover clusters, however, rely on iSCSI connectors. As mentioned earlier, you can,
therefore, create clusters with up to 16 nodes with these technologies. Because the Hyper-V
virtualization layer simulates the network interface cards required for either NLB or failover
clustering in virtual machines, you’ll find that clustering in VSOs will be more
comprehensive than in resource pools.
Table 1 outlines the features and supported services for each clustering mode for VSOs.
TABLE 1 WS08R2 Clustering Choices for VSOs

Network Load Balancing
• WS08R2 edition: Web, Standard, Enterprise, Datacenter
• Number of nodes: up to 32
• Resources: minimum of two network adapters
• Server role: application servers (stateless), dedicated Web servers, collaboration servers (front end)
• Applications: Web farms, Internet Security and Acceleration Server (ISA), virtual private network (VPN) servers, streaming media servers, unified communications servers

Failover Clusters
• WS08R2 edition: Enterprise, Datacenter
• Number of nodes: up to 16
• Resources: iSCSI disk connectors
• Server role: identity management (domain controllers), application servers (stateful), file and print servers, collaboration servers (storage), network infrastructure servers
• Applications: SQL servers, Exchange servers, message queuing servers

As you can see, NLB and failover clusters are rather complementary. In fact, it is not recommended to activate both services on the same server; that is, a failover cluster should not also be a member of an NLB cluster. In addition, NLB clusters are designed to support more static connections. This means that an NLB cluster is not designed to provide the same type of failover as a server cluster. In the latter, if a user is editing a file and the server stops
responding, the failover component will automatically be activated and the user will continue
to perform his or her work without being aware of the failure (there may be a slight delay in
response time). This is because the server cluster is designed to provide a mirrored system
to the user. But an NLB cluster will not provide the same type of user experience. Its main
purpose is to redirect demand to available resources. As such, these resources must be static
in nature, since NLB does not include any capability for mirroring information deposits.
Both redundancy services offer the ability to support four service-offering requirements:
• Availability By providing service offerings through a cluster, it is possible to
ensure that they are available during the time periods the organization has decreed
they should be.
• Reliability With a cluster, it is possible to ensure that users can depend on the
service offering, because if a component fails, it is automatically replaced by another
working component.
• Scalability With a cluster, it is possible to increase the number of servers
providing the service offering without affecting the service being delivered to users.
• Maintenance A cluster allows IT personnel to upgrade, modify, and apply service
packs, and otherwise maintain cluster components individually without affecting
the service level of service offerings delivered by the cluster.
An advantage that failover clusters have over NLB clusters is the ability to share data.
Failover cluster resources must be tied to the same data storage resource, ensuring the
transparency of the failover process.
Clusters do have disadvantages. They are more complex to stage and manage than
stand-alone servers, and services that are assigned to clusters must be cluster-aware in order
to take advantage of the clustering feature.
Network Load Balancing
The basis of the NLB cluster is a virtual IP address: Client systems connect to the virtual IP
address, and the NLB service redirects the client to a cluster member. If a cluster member
fails or is taken offline, the NLB service automatically redirects requests to the other cluster
members. When the member comes back online, it automatically rejoins the cluster and
requests can be redirected to it. In most cases, the failover process—the process of redirecting
clients to other cluster resources when a member fails—takes less than 10 seconds.
NLB cluster members do not share components. They are independent servers that host
the same applications and identical local copies of the data client systems access. This is
why NLB is best suited to stateless applications—applications that provide access to data
mostly in read-only mode. NLB servers normally use two network interface cards. The first
is dedicated to cluster network traffic, and the second is for communications with clients
and other normal network communications. Cluster network traffic from the member is
mostly in the form of a heartbeat signal that is emitted every second and sent to the other
members of the cluster. If a member does not send a heartbeat within a time span of five
seconds, the other members automatically perform a convergence operation to remove the
failed member from the cluster and eliminate it from client request redirections.
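The failure-detection timing just described (a heartbeat every second, removal after five silent seconds) can be sketched as follows; this illustrates the timing rules only and is not the actual NLB implementation:

```python
import time

HEARTBEAT_INTERVAL = 1.0  # each member emits a heartbeat every second
FAILURE_TIMEOUT = 5.0     # five silent seconds trigger convergence

last_heartbeat: dict[str, float] = {}  # member -> last heartbeat time seen

def record_heartbeat(member: str) -> None:
    last_heartbeat[member] = time.monotonic()

def members_to_converge_out(now: float) -> list[str]:
    # Members whose heartbeats have gone silent are removed from the
    # cluster and eliminated from client request redirection.
    return [m for m, t in last_heartbeat.items()
            if now - t > FAILURE_TIMEOUT]
```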
TIP You can add more than one network interface card (NIC) for client access, but two NICs are the minimum configuration you should run.
NOTE If you decide to build hardware-based service offerings for some reason, consider this in the
selection of the hardware configuration for the servers. Since each cluster member uses identical
data, it is often useful to optimize the server hardware to support fast read operations. For this
reason, many organizations planning to use hardware-based NLB clusters do not implement
RAID disk subsystems because redundancy is provided by cluster members. Disk access is
optimized because there is no RAID overhead during read operations. It is essential, however,
to ensure that all systems are fully synchronized at all times. Whether you decide to construct
NLB servers without RAID protection is a decision you will make when designing your NLB
architecture. It will depend mostly on your data synchronization strategy, the type of service you
intend to host on the server, and the number of servers you intend to place in your NLB cluster.
Since the NLB servers are VSOs, you do not need to make any special hardware
considerations, though you should create identical virtual machines to provide the service
offering.
The core of the NLB service is the wlbs.sys driver. This driver sits between
the network interface card and network traffic. It filters all NLB communications and allows
the member server to respond only to the requests that have been directed to it.
NLB is similar to round-robin DNS, but it provides better fault tolerance. Round-robin
DNS relies on multiple DNS entries for a service. When clients require a connection, the
DNS service provides the first address, then the second, then the third, and so on. It cannot
check whether the host at a given address is actually available. This is one reason why NLB is
better: it always checks destination addresses to ensure the server is available before redirecting
clients. And, since the NLB service is hosted by every cluster member, there is no single point
of failure. There is also immediate and automatic failover of cluster members.
Multicast Versus Unicast Modes
NLB clusters operate in either multicast or unicast mode. The default mode is unicast. In this
mode, the NLB cluster automatically reassigns the Media Access Control (MAC) address for
each cluster member on the NIC that is enabled in cluster mode. If each member has only one
NIC, member-to-member communications are not possible in this mode. This is one reason
why it is best to install two NICs in each server.
When using the multicast mode, NLB assigns two multicast addresses to the cluster
adapter. This mode ensures that all cluster members can automatically communicate with each
other because there are no changes to the original MAC addresses. There are disadvantages to
this mode, though, especially if you use Cisco routers. The Address Resolution Protocol (ARP)
response sent out by a cluster host is rejected by these routers. If you use multicast mode in an
NLB cluster with Cisco routers, you must manually reconfigure the routers with ARP entries,
mapping the cluster IP address to its MAC address (the sketch below shows how to obtain that
MAC address).
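To find the MAC address the routers need, you can query the NLB driver itself. The following
line is a minimal sketch; it assumes NLB.EXE on WS08R2 still supports the ip2mac option
carried over from the older WLBS.EXE, and the cluster IP address shown is hypothetical:

nlb ip2mac 192.168.1.100

The command displays the unicast, multicast, and IGMP multicast MAC addresses derived
from that cluster IP address; the multicast value is the one to place in the router's static
ARP entry.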
Whether you use one mode or the other, you should use at least two NICs on each
member. One advantage of doing so is that it allows you to configure one card to receive
incoming traffic and the other to send outgoing traffic, making your cluster members even
more responsive. You can also ensure that if your NLB cluster is only the front end of a
complex cluster structure, such as the one illustrated in Figure 5, all back-end communications
are handled by the nonclustered NIC.
Finally, if your NLB members are expected to handle extremely high traffic loads, you
can add more NICs in each member and bind the NLB service to each one, improving the
overall response time for each member.
Single Affinity Versus No Affinity
NLB clusters work in affinity modes; affinity refers to the way NLB load-balances traffic. Single
affinity refers to load balancing based on the source IP address of the incoming connection. It
automatically redirects all requests from the same address to the same cluster member. No
affinity refers to load balancing based on both the incoming IP address and its port number.
Class C affinity is less granular than single affinity: it balances on the Class C portion of the
source address, which ensures that clients using multiple proxy servers to communicate with
the cluster are redirected to the same cluster member at all times. No affinity is useful when
supporting calls from networks using network address translation (NAT) for IPv4 transmissions,
because these networks present only a single IP address to the cluster. If you use single affinity
mode and you receive a lot of requests from NAT networks, these clients will not profit from the
cluster experience, since all of their requests will be redirected to the same server. IPv6
connections can run in any affinity mode.
However, if you use an NLB cluster to provide VPN connections using either Layer 2
Tunneling Protocol/Internet Protocol Security (L2TP/IPSec) or Point-to-Point Tunneling
Protocol (PPTP) sessions, you must configure your cluster in single affinity mode to ensure
that client requests are always redirected to the same host. Single affinity should also be
used for any application that uses sessions lasting over multiple TCP connections to ensure
that the entire session is mapped to the same server. Finally, single affinity must be used if
your client sessions use the Secure Sockets Layer (SSL) to connect to NLB servers.
Single affinity does not give the same load-balancing results as no affinity. Consider the
type of requests your cluster will handle before deciding on your cluster architecture.
Install and Configure NLB Clusters
NLB cluster installation is fairly straightforward. Each member server should have enough
disk space to host the application, and each should have at least two network interface cards.
You will also need to have some information on hand before you begin the installation. This
includes:
• The cluster’s Internet name: the DNS name you intend to use for the cluster
• The cluster’s virtual IP address and the appropriate subnet mask: the address that
will be linked to the DNS name
• The IP mode you intend to use: IPv4, IPv6, or both
• The current IP addresses and subnet masks for each cluster member
• The cluster casting mode you want to use: unicast or multicast. If you use multicast,
you will also want to use Internet Group Management Protocol (IGMP) multicast to
reduce the number of ports used to address cluster administration traffic and
restrict it to the standard IPv4 class D range; that is, 224.0.0.0 to 239.255.255.255.
• The cluster affinity mode you want to use: single affinity, Class C, or no affinity
• Whether you want to enable remote control of the cluster using the NLB.EXE
application
CAUTION It is highly recommended not to enable this feature because it creates a security risk.
Any user with access to the NLB.EXE application can control a cluster (sample commands
appear after this list). It is best to use the Network Load Balancing Manager console to
administer your NLB clusters. Access to this console can be controlled better than access
to NLB.EXE.
• The unique IDs you want to assign to each cluster member
• The TCP and UDP ports for which you want NLB to handle traffic
• The load weight or handling priority you will apply to the cluster. Load weight is
used when you filter traffic to multiple cluster members. Handling priority is used
when traffic is filtered only to a single cluster member.
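In addition to the Network Load Balancing Manager console mentioned in the caution above,
NLB ships with a command-line interface that runs locally on a member without the remote
control feature. The following lines are a minimal sketch, assuming the NLB.EXE verbs carried
over from WLBS.EXE:

rem Display the current state and convergence status of the local host:
nlb query
rem Finish serving existing connections, then take the local host out of the cluster:
nlb drainstop

Because a command such as drainstop removes a host from service, remotely accessible
control over these operations is exactly what the preceding caution warns against.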
Now, you’re ready to set up your NLB cluster. Keep in mind that you will always need to
perform two tasks to create an NLB cluster:
• First, you need to add the NLB feature. You might even consider making this part of
the Sysprepped virtual machine you use to seed NLB cluster members.
• Then, you configure the NLB service on the cluster members.
Proceed as follows:
1. Use the Server Manager | Features node to select Add Features in the action pane.
2. Select the Network Load Balancing feature. Click Next.
3. Click Install, and then click Close once the feature is installed.
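If you script your server builds, the feature can also be added from the command line. This is
a minimal sketch; it assumes the Server Manager command-line tool present in WS08R2 and
its NLB feature ID:

servermanagercmd -install nlb

On Server Core installations, where Server Manager is not available, features are added with
the ocsetup tool instead, as shown for failover clustering later in this chapter.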
You’re ready to build the first node of the cluster.
1. Launch the Network Load Balancing Manager. Move to the Start menu, select
Administrative Tools, and click Network Load Balancing Manager.
2. This opens the NLB Manager Microsoft Management Console (MMC). To create a
new cluster, right-click Network Load Balancing Clusters in the left pane, and select
New Cluster.
3. Use the values in Table 2 to fill out this wizard.
New Cluster: Connect
  Enter the host name of the first member in the cluster, and click Connect. Select the
  interface to connect to.

New Cluster: Host Parameters
  Select the priority for the host; the default priority number is acceptable. Select the
  dedicated IP address for the host. Set the default state for the host to Started. Select the
  Retain Suspended State After Computer Restarts option; this ensures the system does not
  rejoin the cluster during maintenance operations.

New Cluster: Cluster IP Addresses
  Add the IP address(es) for the cluster. Multiple IP addresses can be added here, but
  remember that a DNS entry is required for each address you assign. Select an IPv4 or IPv6
  address, and make sure you create the corresponding DNS entries.

New Cluster: Cluster Parameters
  Add the DNS name for the cluster. Select Unicast with Unicast Interhost Communication,
  Multicast, or IGMP Multicast. If your network supports multicast mode, select IGMP
  Multicast; when you do so, WS08R2 displays a warning message. Click OK to close it.

New Cluster: Port Rules
  Define port rules for the cluster and the affinity mode for each rule. By default, all cluster
  members handle all TCP and UDP ports in single affinity mode. To modify this rule, click
  the Edit button. To add new rules, click the Add button.

TABLE 2 Settings for the NLB Cluster Creation Wizard
Now you can add cluster members. Right-click the cluster name to select Add Host To
Cluster. Type the member’s DNS name, and click Connect. Once again, use the values in
Table 2 to prepare this host. When you complete this addition, the NLB service will perform
a convergence to bring all the cluster members online.
You’re done. From now on, you can manage the cluster—adding, deleting, and
configuring members—through this console (see Figure 6). Note that this interface displays
the cluster members in the tree pane, the status of each node in the details pane, and the
details of the NLB operation in the information pane below.
NLB clusters will be useful for load-balancing streaming media, unified communications,
Web applications, and virtual private network servers within your network.
NOTE Many organizations decide to rely on hardware load balancers for this task. They provide
exactly the same service as NLB, but they often also include WAN traffic acceleration
capabilities. Vendors such as CAI Networks (www.cainetworks.com) and F5 Networks
(www.f5.com) offer products in this category. These devices work well with VSOs.
FIGURE 6 The NLB Manager interface
Windows Server Failover Clustering
WS08R2 failover clusters offer a similar type of availability service to NLB clusters, but they
rely on a different model. Whereas NLB cluster members do not have to be identically
configured, the purpose of the failover cluster is to make identical servers redundant by
allowing immediate failover of hosted applications or services. As illustrated in Figure 5,
Windows Server 2008 R2 supports clusters with up to 16 nodes.
WSFCs can include several configurations. You can design the cluster to run in a passive
manner so that each node will perform different tasks but will be ready to fail over any of
the other nodes’ services and applications. Or you can design the cluster in an active
manner so that applications operate at the same time on each of the nodes. For example,
you could design a four-node financial database cluster so that the first node managed
order entry, the second order processing, the third payment services, and the fourth the
other accounting activities. To do so, your application must be fully cluster-aware—
completely compliant with all of the WSFC features. Not all applications, or even WS08R2
services, are fully cluster-aware.
Cluster Compatibility List
Even in Microsoft’s own product offering, there are some particularities in terms of
clustering compatibility. Cluster compatibility can fall into one of four categories:
• WSFC-aware is a product or internal WS08R2 service that can take full advantage of
the cluster service. It can communicate with the cluster application programming
interface (API) to receive status and notification from the server cluster. It can react
to cluster events.
• WSFC-independent (or unaware) is a product or internal WS08R2 service that is
not aware of the presence of the cluster but can be installed on a cluster and will
behave as if it was on a single server. It responds only to the most basic cluster
events.
• WSFC-incompatible is a product or internal WS08R2 service that does not behave
well in the context of a cluster and should not be installed on a server cluster.
• NLB-compatible refers to products that are well suited to NLB clusters. NLB and
WSFC are often incompatible with each other.
Table 3 categorizes Microsoft’s Windows Server System and WS08R2 functions in terms of
cluster compatibility.
The information in Table 3 is subject to change as each of the products evolves, but it
serves as a good starting point in determining how you can configure high availability for
your services.
Server Cluster Concepts
The nodes in a server cluster can be configured in either active or passive mode. An active
node is a node that is actively rendering services. A passive node is a node that is in standby
mode, waiting to respond upon service failure. It goes without saying that like the eighth
server role presented earlier in the book, the failsafe server, the passive node is a more
expensive solution in terms of resources, because the server hardware or the virtual
machine is just waiting for failures before it becomes useful. But if your risk calculations
indicate that your critical business services require passive nodes, you should implement
them, because they provide extremely high availability in certain scenarios.
• Active Directory Domain Services (ADDS): Clustering is not recommended; availability
is provided through multimaster replication.
• Active Directory Lightweight Directory Services (ADLDS): Clustering is not
recommended; availability is provided through ADLDS replication.
• BizTalk Server: The BizTalk state server and message box are cluster-aware. Messaging
and processing servers are cluster-independent. All other services should use a network
load balancing cluster. BizTalk can also take advantage of a clustered SQL Server back-end.
• COM+: Component load balancing clusters preferred.
• Commerce Server: Component load balancing clusters preferred.
• DFS: WSFC-aware for stand-alone DFS namespaces only. Domain DFS namespaces use
redundancy provided by ADDS.
• DHCP-WINS: WSFC-aware; fully compliant.
• Distributed Transaction Coordinator: WSFC-aware; fully compliant.
• DNS: Redundancy is provided by ADDS when the service is integrated with the directory.
• Exchange 2000 and later: WSFC-aware; fully compliant. In Exchange 2007/2010, different
server roles can take advantage of different modes.
• File sharing: WSFC-aware; fully compliant.
• Hyper-V: WSFC-aware; fully compliant, but focused on cluster shared volumes instead
of unshared disk volumes.
• IIS: NLB clusters are preferred.
• ISA Server: NLB clusters are preferred.
• Microsoft Identity Lifecycle Manager: WSFC-aware; fully compliant.
• Microsoft Message Queuing: WSFC-aware; fully compliant.
• Office Live Communications Server (LCS): LCS is incompatible with WSFC. Use an
NLB cluster for front-end servers. Use a WSFC for SQL Server back-ends.
• Office Project Server: WSFC for the SQL Server portion only.
• Office SharePoint Portal Server: WSFC for the SQL Server portion only. The Internet
Information Services (IIS) portion should use NLB.
• Print services: WSFC-aware; fully compliant.
• SQL Server 2000 and later: WSFC-aware; fully compliant.
• System Center Configuration Manager: SQL Server back-ends can be clustered under
special conditions.
• System Center Operations Manager: WSFC is not supported.
• Remote Desktop Services: The Remote Desktop Connection Broker relies on WSFC
for protection.
• Windows Deployment Services: WSFC is not supported.
• Windows SharePoint Services: WSFC for the SQL Server portion only. The IIS portion
should use NLB.
• Windows Streaming Media: NLB clusters are preferred.

TABLE 3 Cluster Compatibility List
Most organizations use the active-active cluster mode. In fact, the most popular
implementation of WSFC is the two-node active-active cluster. This is called a cluster pack
because the cluster nodes share data. This either can be configured to run exactly the same
services at the same time—for example, Microsoft Exchange Server running on both
nodes—or it can be configured to run different services on each node. In this configuration,
each node is configured to run the same applications and services, but half are activated on
the first node and half are activated on the other node. This way, if a service fails, the other
node can provide immediate failover because it can run the service temporarily until the
failed node can be repaired.
In active-active scenarios that run the same applications on all nodes, the applications
must be fully cluster-aware. This means that they can run multiple instances of the application
and share the same data. Many applications include their own internal capabilities for
supporting this operating mode. Applications that are not fully compliant—that are only
cluster-independent—should run in single instances; that is, on only one node.
Remember that the servers you choose to create your server cluster should be sized so that
they can take on the additional load node failures will cause. Properly sizing servers is
essential to support application failover (see Figure 7). For example, in a four-node cluster
that is designed to support up to three node failures, each node of the cluster must be able to
absorb the load of the failed nodes until a single node is left. This is, of course, a worst-case
scenario, but it demonstrates that in a cluster, system resources must be reserved for failures;
otherwise, the cluster will not be able to provide high availability.
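A quick capacity check shows what this sizing means, under the simplifying assumption that
workloads divide evenly. If four nodes must keep running after three failures, the last surviving
node carries all four workloads, so each node must run at no more than roughly 25 percent of its
capacity during normal operations. By contrast, a four-node cluster sized to survive only a single
failure can let each node run at up to roughly 75 percent, since four workloads are then spread
across three survivors.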
You can configure your server clusters in many ways. In addition, on multiple node
clusters, you can use a mix and match of multiple-instance services or applications with
single-instance functions. If the application is mission-critical and cannot fail under any
circumstances, you can configure it as a multiple instance on some nodes and host it in
passive mode on other nodes to have the best possible availability for the application.
FIGURE 7 Node failover in a four-node cluster
Finally, be careful with your failover policies. A two- to four-node cluster can easily use
random failover policies—the failed service is randomly distributed to the other available
nodes—because the possible combination of resources is relatively small. But if you have
more than four nodes in the cluster, it is a good idea to specify failover policies, because the
possible combination of resources will be too great and nodes may become overloaded
during failover. The illustration in Figure 7 is an example of a random failover policy.
NOTE Single-instance applications are best suited for two-node clusters, where one node runs the
service and the other hosts the service in standby mode. That way, if the service fails on the
running node, the second node can take it over.
Cluster Configurations
Your cluster configuration will also require the ability to share information about
itself between the nodes. This is performed through a quorum resource or a witness disk. By
default, there is a single quorum resource or witness disk per cluster to store configuration
information. Each node of the cluster can access the configuration information and know
the state of the cluster. The quorum or witness helps the cluster determine how it should
continue to run when a certain number of nodes fail. WS08R2 supports four different
protection modes for configuration information:
• Node Majority When clusters have an odd number of nodes, use this configuration.
For example, a cluster of one node would require a Node Majority quorum. An
odd-numbered configuration can then support the failure of just under half its nodes.
In a five-node cluster, for example, only two nodes can fail before the cluster fails.
• Node and Disk Majority This configuration is recommended when your cluster
has an even number of nodes. It consists of a quorum disk plus the node votes. So
long as the quorum disk stays online, the cluster can lose up to half its nodes and
continue running. If the quorum disk fails as well, the cluster behaves like a
Node Majority configuration, tolerating the failure of only half its nodes minus one.
• Node and File Share Majority This configuration is normally recommended for
geographically dispersed clusters. This configuration is the same as Node and Disk
Majority, except that the quorum disk is replaced by a witness file share.
• No Majority: Disk Only This mode was the standard configuration for clusters
prior to WS08 and WS08R2. Microsoft does not recommend it anymore because the
disk provides a single point of failure since the quorum is the only witness for the
cluster operation. Note, however, that this configuration can fail down to a single
node before the cluster fails.
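A simple vote count clarifies these rules, under the usual assumption that each node and the
witness disk carry one vote and that a majority of votes must remain online. A six-node Node
and Disk Majority cluster has seven votes, so it survives as long as four votes remain: with the
witness disk online, it can lose three nodes, because the three remaining node votes plus the
disk vote still form a majority; if the disk is lost as well, it can lose only two nodes.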
As you can see, there are several potential cluster configurations. Since most people create
two-node clusters, Node and Disk Majority is the recommended configuration for the
protection of cluster configuration information. In a two-node configuration, the cluster can
run with only one node available so long as the witness disk is available.
NOTE The Node and Disk Majority configuration is the same as the No Majority: Disk Only
configuration when it comes to a two-node cluster. If the quorum disk fails, the entire cluster
fails. But it does provide better protection for the cluster as you add additional nodes.
As mentioned earlier, WSFCs require a shared storage system. Shared storage systems
can be connected through SCSI, Fibre Channel, or iSCSI. SCSI systems are only supported in
two-node clustering and, because of this, are quickly being deprecated. Arbitrated loop Fibre
Channel is also only supported for two-node clustering, but it provides better scalability than
SCSI systems because it can host up to 126 devices. Note that WS08R2 does not support
clustering through parallel SCSI interfaces.
Switched fabric fibre channel and iSCSI can be used for clusters that include up to
16 nodes. Here, devices are connected in a many-to-many topology that supports the high
availability and complex configuration requirements of multiple-node server clusters. Each
service running in a multinode cluster will require access to its own protected shared
storage area. This is because while running a service, the cluster node requires exclusive
access to the storage area the service uses to persist data. When a failover is initiated, the
storage connection is released by the failing node and picked up by the failover node, and
the service continues to work. This is called a shared nothing cluster service, because a node
running a given service requires exclusive access to the storage linked to the service. This
means that when you configure your clusters, you must configure a reserved storage area
for each service you intend to run on it (see Figure 8). Each time a node is running a given
service, it is considered an active node. Each time a node has reserved space for a service
FIGURE 8 Assigning storage resources to clusters and relying on the Node and Disk Majority configuration
but is not running it, it is considered a passive node. Nodes can be active and yet reserve
space in passive mode for other service failovers.
As in the NLB cluster, server cluster nodes should have at least two NICs: one for
communication within the cluster and one for communication with client systems and other
network resources.
Geographically Dispersed Clusters
Windows Server 2008 R2 supports the dispersion of clusters over multiple physical sites.
This means that in addition to application or service resiliency, you can add disaster recovery
through the WSFC service. If an entire site fails for some unforeseen reason, the cluster will
continue to provide services to its client base because failover will occur in the other site or sites
that contain cluster nodes. Geographically dispersed clusters are trickier to configure than
same-site clusters because of the added difficulty of maintaining cluster consistency.
In fact, if you want to create a multisite cluster, you need to ensure that your WAN
connection latency is as low as possible, though WS08R2 introduces the concept of the
Witness File Share (WFS). A WFS is a separate file share that is often located in an
independent site to provide an additional voting node during the failover process. During
this process, each of the nodes wants to gain control of a service because it thinks the other
node is not available. Through the WFS, the two nodes can determine whether the other
has actually failed. Without the WFS, both nodes often try to load the service, sometimes
with catastrophic results.
In addition, you need to configure a virtual local area network (VLAN) that regroups
the multisite nodes, including the WFS. If you can’t build low-latency WAN connections
and you can’t create a VLAN, including each site hosting a node, then you should not
design multisite clusters.
When configuring multisite clusters, you need to use majority node sets (MNS). Majority
node sets are required because the multisite cluster cannot share data
sets like the single-site cluster, since the nodes are not located in the same physical site.
Therefore, the cluster service must be able to maintain and update cluster configuration data
on each storage unit of the cluster. This is the function of the majority node set (see Figure 9).
FIGURE 9 Single- versus multisite cluster configurations
Cluster Shared Volumes
While the WSFC service runs in a shared nothing cluster model for most of the services it
supports, it no longer does so for host servers. In fact, with WS08R2, Microsoft introduces
its very first shared everything cluster model through Cluster Shared Volumes (CSV). CSVs
support the shared everything model because all of the cluster nodes that are attached to
the CSV share the ability to write information to the disk at the same time. This change is
necessary to support VSO live migration or the movement of a VSO from one host server
node in a cluster to another host server node without shutting down or pausing the VSO
first. This provides uninterrupted service continuity to end users. The introduction of CSVs
transforms the way in which the failover cluster service supports VSOs in several ways:
• LUN Attachment Before CSVs, you often had to dedicate a logical unit number
(LUN) of storage to a single VSO because the LUN was the unit of failover. With CSVs, you
can run more than one VSO per LUN and VSO failovers will no longer affect other
virtual machines on the same LUN.
• Disk Space Savings The virtual hard disk (VHD) files that make up VSOs can
now share the same disk without impact. This reduces the disk space required for
VSOs in host server clusters.
• VHD File Paths VHD file paths are easier to track since you can now use file
paths instead of drive letters (there are only 26 potential drive letters). Paths now
appear under the \ClusterStorage folder of the system drive of a host cluster node.
• Rapid Validation Cluster validation is now faster since CSVs require fewer LUNs
than previous host server cluster configurations.
• No Custom Requirements CSVs do not require any special hardware other than
the hardware required for a Windows Server Failover Cluster.
• Increased Resiliency Cluster resiliency is increased because the cluster service can
easily re-route CSV traffic through either storage area network (SAN) or network
connectivity paths in the event of a loss of communication with the CSV store.
CSVs are a boon for host server or resource pool cluster configurations and should be the
base configuration of any host server cluster. In a CSV configuration, host server nodes are
attached to the shared volume and the CSV store can include multiple virtual machines
(see Figure 10).
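For example, once CSVs are enabled, the first shared volume and a virtual machine stored on
it might appear under paths such as the following; the volume and machine names are
hypothetical:

C:\ClusterStorage\Volume1
C:\ClusterStorage\Volume1\VM01\VM01.vhd

Every node in the cluster sees the same path, which is what lets a VSO move between hosts
without any change to its storage configuration.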
Resource Pool Failover Cluster Considerations
Cluster server installation and deployment is not a simple task. It requires special hardware—
hardware that is qualified to support Windows Server 2008 R2 server clusters. For this reason,
it will be essential for you to verify with the Windows Hardware Compatibility List
(www.microsoft.com/whdc/hcl/default.mspx) that your cluster hardware is fully compatible
with WS08R2. Remember that host servers must be x64 systems. Then proceed with caution to
ensure that your clusters are properly constructed. Ask for support from your hardware
manufacturer. This will ensure that your server clusters take full advantage of both the
hardware’s and WS08R2’s high-availability and reliability features.
FIGURE 10 CSV cluster configurations
In addition, you should take the following considerations into account:
• Majority node clustering WS08R2 supports only two-site majority node clustering.
The WS08R2 majority node feature does not manage data replication for applications;
this function must be available within the application itself. For example, Exchange
Server includes the ability to run a geographic cluster and replicate data between the
sites. It is also important to note that majority node sets cannot survive for long
periods with a single node; they need to have a majority of the nodes available to
continue operating. Single quorum clusters can, on the other hand, survive with just
one node because the quorum data is stored in a single location.
• Clustering identity servers It is not recommended to cluster domain controllers
because of the nature of this service. For example, the Flexible Single Master Operations
(FSMO) roles cannot be failed over and may cause service outages if the hosting
node fails. In addition, it is possible for the domain controller (DC) to become so
busy that it will not respond to cluster requests. In this situation, the cluster will fail
the DC because it will think it is no longer working. Do not cluster DCs.
• Clustering resource pools Resource pools can take advantage of both single
quorum clusters and majority node sets. But since majority node sets are complicated
to build and require highly specialized hardware as well as a third-party replication
engine, you may be better off running single site clusters to provide high availability
for the VSOs running on the host servers and using a nonclustered offline replication
strategy. Then, if a total failure occurs at one site, you can bring up the secondary
site’s VSOs through a series of procedures, or even automate it through scripts.
Everything depends on your service level agreements for total site failures. Also,
remember that you should always use Clustered Shared Volumes for resource pools.
These are not the only considerations to take into account when creating and installing
server clusters, but they provide a good reference and foundation before you begin. The
best thing to do is to determine where failover clusters will help you most. Use the details
in Tables 1 and 3 to help you make the appropriate clustering decisions.
The WS08R2 Failover Cluster Build Process
As with the creation of NLB clusters, creating a WSFC must be performed in several steps.
The process is no longer the same for resource pools and virtual service offerings, since
VSOs cannot take advantage of Cluster Shared Volumes; neither can any other
cluster-aware application.
Node preparation for the cluster construction process works as follows:
1. Install the OS, secure it, and finalize the default setup process. Also, join it to an
Active Directory domain.
NOTE Cluster members must belong to an ADDS domain. Otherwise, you will not be able to build
the cluster.
2. Configure the network interface cards. One card will link users to the cluster, and
another will provide heartbeat traffic for the cluster. Configure each appropriately.
You can also name them “Public” for the public NIC and “Private” for the private
NIC.
3. Next, add the WSFC feature and shut down the server. Do this for each of the
cluster nodes.
4. Prepare the shared storage units.
TIP Quorum or witness disks are often labeled as “Quorum,” and data disks are often labeled as
“SharedDisk” in two-node clusters. Correspondingly, the quorum drive is assigned letter Q:,
and shared data disks are assigned letter S:. Use your judgment to properly name quorums and
shared disks in your multinode clusters.
5. Repeat steps 1 through 4 for any other node in your cluster.
You are now ready to begin the cluster construction process.
TIP When building a cluster, you will need to assign a name and an IP address to it. The name
should follow your standard server-naming convention, and the IP address should be a public
static address in either IPv4 or IPv6 format.
Build a WS08R2 Failover Cluster for Resource Pools
For resource pools, the activities required to install and create a WSFC are performed
through the command line, since host servers run Server Core only. Use the steps provided
earlier to build your WS08R2 failover cluster.
1. Begin by creating your Server Core installation. Join the domain you created for
your resource pool. Repeat for each node in the cluster.
2. Next, install the WSFC feature. Use the following command line to do so:
start /w ocsetup FailoverCluster-Core
You use the /w switch to tell Start to wait until the process is complete before returning
the prompt. Once the prompt is returned, you can verify that the feature is installed
by using this command:
oclist
3. Prepare the storage units in your shared storage system. You will need to prepare at
least three volumes in the storage unit. A small one, with about 1GB of space, is for
the quorum or witness disk. The other will be a massive volume for storing virtual
machine files. If you are going to host ten virtual service offerings on this unit,
allocate at least 200GB per VSO—use the server-sizing exercise to determine exactly
how much space you need. It is better to overallocate than to have to resize the
volume right away. Next, prepare a third volume for the storage of volume shadow
copies or storage snapshots. Repeat the creation of the last two volumes for each
active node you intend to have in the cluster. Connect each volume to each node in
the cluster. Ideally, you will use iSCSI targets to create these volumes.
TIP Make sure your system volumes all use drive C:. This will make it easier to access the Cluster
Shared Volumes you configure in the cluster since all CSVs appear under the \ClusterStorage
folder of the system volume of cluster nodes.
4. Return to the first node of the cluster, and make sure you are logged on with
domain administrator credentials. Configure the network interface cards.
Commands depend on whether you are using IPv4 or IPv6. Remember, one card
will use a public address for user communications, and another will use a private
address for internal cluster communications. Begin by finding the ID of each
interface and then assign appropriate addresses to them. Start with the public NIC:
netsh interface ipv4 show interfaces
netsh interface ipv4 set address name=ID source=static
address=staticIPAddress mask=SubnetMask gateway=DefaultGateway
netsh interface ipv4 add dnsserver name=ID address=FirstDNSIPAddress
index=1
netsh interface ipv4 add dnsserver name=ID address=SecondDNSIPAddress
index=2
Next, configure the private NIC. Use the ID number discovered previously:
netsh interface ipv4 set address name=ID source=static
address=staticIPAddress mask=SubnetMask
NOTE The private NIC does not use DNS servers or a gateway. Use an address from a private IPv4
range for this NIC.
If you want to configure IPv6 addresses, use the same command. Begin by finding
out the interface IDs and then assign addresses:
netsh interface ipv6 show interfaces
netsh interface ipv6 set address interface=ID address=IPv6Address
netsh interface ipv6 set dnsserver name=ID source=static address=
FirstDNSIPAddress register=both
netsh interface ipv6 set dnsserver name=ID source=static address=
SecondDNSIPAddress register=both
NOTE Use an address from a private IPv6 range for the private NIC.
Repeat for each public interface you want to configure.
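To make the syntax concrete, here is a minimal sketch of the public NIC commands from this
step, assuming a hypothetical interface ID of 2 and private-lab addresses:

netsh interface ipv4 set address name=2 source=static address=192.168.1.10
mask=255.255.255.0 gateway=192.168.1.1
netsh interface ipv4 add dnsserver name=2 address=192.168.1.5 index=1
netsh interface ipv4 add dnsserver name=2 address=192.168.1.6 index=2

Substitute the interface ID reported by the show interfaces command and the addresses from
your own addressing plan.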
5. Return to the second node, and make sure you are logged on with domain
administrator credentials and configure the NICs.
NOTE If you have more nodes to add, repeat this operation for each node that is left.
6. Create the cluster. You can do this on any of the nodes or even on a remote computer.
Ideally, you will use a computer using the Full Installation and the Failover
Clustering Administration Console. This console provides a wizard for the creation
of the cluster, which makes it much easier to do. When you create a cluster, you need
to name it and assign its IP address. If you are using Server Core, use the
following command:
cluster /cluster:ClusterName /create /node:NodeName /ipaddr:IPAddress
where ClusterName is the name of your cluster, NodeName is the name of the
computer (usually in short notation format, for example, ServerOne), and
IPAddress is the public IP address of the cluster in either IPv4
(xxx.xxx.xxx.xxx/yyy.yyy.yyy.yyy, where the second element is the subnet mask)
or IPv6 (xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx) format.
If you are using the graphical interface, use the Create A Cluster command in the
action pane, and then follow the instructions in Table 4.
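To make the Server Core command shown in this step concrete, here is a minimal sketch with
hypothetical names and a private-lab address:

cluster /cluster:ClusterOne /create /node:ServerOne /ipaddr:192.168.1.110/255.255.255.0

Additional nodes can then be joined to the cluster with the /add option, for example,
cluster /cluster:ClusterOne /add /node:ServerTwo.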
7. Now verify the quorum disk. If it is not right, change it. To do so, right-click the
cluster name in the tree pane and select More Actions | Configure Cluster Quorum
Settings. The best selection is Node and Disk Majority, unless you are constructing a
geographically dispersed cluster (see Figure 11). Run through the wizard, selecting
your prepared configuration disk as the witness disk. Your cluster is ready for
operation.
8. Enable Cluster Shared Volumes. Click the cluster name and select Enable Cluster
Shared Volumes from the center pane. Accept the notification that CSVs are only for
use with Hyper-V. CSVs are now enabled and a new node appears below the cluster
name in the left pane.
Select Servers
  Enter the host name of the first member in the cluster, and click Add. Repeat for all other
  nodes you want to add.

Validation Warning
  Select Yes, and then click Next. You should run a cluster configuration validation before
  you create the cluster. Follow the prompts to run all tests, and click Finish to return to the
  Cluster Creation Wizard.

Access Point for Administering the Cluster
  Name the cluster. Select the network to use for the cluster IP address, and type the
  address to use.

Confirmation
  Confirm your changes.

Creating New Cluster
  Cluster creation will configure storage, networks, and cluster nodes.

Summary
  View the report once the cluster has been created.

TABLE 4 Running through the Cluster Creation Wizard
FIGURE 11 Verifying the quorum configuration
Select Service or Application
  Choose Virtual Machine.

Select Virtual Machine
  Check the virtual machines you want to make highly available.

Confirmation
  Review your choices.

Configure High Availability
  The wizard executes your choices.

Summary
  View the report once the VM has been created.

TABLE 5 Running through the High Availability Wizard
9. Add storage to your CSV. Select the CSV node and click Add Storage in the action
pane. Select the available disks you want to add from the dialog box and click OK.
Your Hyper-V cluster storage is ready. You can now copy the virtual hard disks that
make up your VSOs to these disks.
10. Now you can configure services or applications to run on the cluster. In this particular
case, you want to run virtualization support. Right-click the Services And
Applications node, and select Configure A Service Or Application. In this case,
you need to choose Virtual Machine and run through the wizard steps. Use the
values presented in Table 5 to do so. Your VSOs will appear under the Services
And Applications node in the left pane once the operation is complete.
Test failover for your new clustered service. To do so, right-click the Virtual Machine resource
and select Failover. The resources should be moved to another node in the cluster. You can
also perform a live migration of your virtual machines. This moves the VSO from one node
to the other while it is in operation. Your Server Core cluster is ready.
Build a WS08R2 Failover Cluster for Virtual Service Offerings
The advantage of working with VSOs is that you can more easily rely on the graphical
interface to build a failover cluster. It is easier and less prone to error.
1. Begin by creating your VSO installation. Remember, this should be based on the full
installation of WS08R2. Join the production VSO domain. Repeat for each node in
the cluster. Next, add the WSFC feature. Use the Add Feature command in the
Server Manager | Features section. Once the operation is complete, verify that the
feature is installed in the same section of Server Manager. Repeat for each node.
2. Prepare the virtual disks that will be required for the cluster. You will need to
prepare at least three volumes. One small one, with about 1GB of space, is for the
quorum or witness disk. The other will be a larger volume for storing data files.
This should be the largest of the three disks. If you are going to run a service, such
as file shares, print shares, or databases, create a third disk to store log files or
shadow copies. It is always better to overallocate than to have to resize the volumes
right away. Repeat the creation of the last two volumes for each active node you
intend to have in the cluster. Connect each volume to each node in the cluster.
3. Log on with domain administrator credentials and configure the network interface
cards. Use Control Panel to go to the Network and Sharing Center, and modify the
NICs. Remember, one card will use a public address for user communications, and
another will use a private address for internal cluster communications. These
addresses can be IPv4 or IPv6.
NOTE The private NIC does not use DNS servers or a gateway. Use an address from a private IPv4
or IPv6 range for this NIC.
Only one private NIC is required, but you can run multiple public NICs in the
cluster. If you do so, configure each one with a public address.
4. Repeat the network configuration on each node in the cluster.
5. Create the cluster. This saves you at least one boot operation. Here you need to create
the cluster, name it, and assign its IP address, as well as select the quorum disk.
a. Use the Start menu | Administrative Tools | Failover Cluster Manager console
to do so.
b. Click Create A Cluster in the details or action pane.
c. Use the values in Table 4 above to run through this wizard.
6. Next, verify the quorum disk. If it is not right, change it. To do so, right-click the
cluster name in the tree pane, and select More Actions | Configure Cluster Quorum
Settings. The best selection is Node and Disk Majority, unless you are constructing a
geographically dispersed cluster. Run through the wizard, selecting your prepared
quorum disk as the witness disk.
7. Now you can configure services or applications to run on the cluster. All services
and applications will require a physical disk resource. Services are configured
through the Services and Applications section of the tree pane, through the
Configure A Service Or Application command. Applications such as Exchange or
SQL Server usually require an actual installation onto cluster nodes once the cluster
is configured. Use the appropriate approach to configure the service or application
you want to run on this cluster.
8. Test failover for your new clustered service or application. To do so, right-click the
service or application, and select Failover. The resources should be moved to
another node in the cluster.
Make sure you document the configuration and purpose of this cluster, as you will most
likely have several VSO clusters. Your cluster is ready.
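If you prefer to script the failover test, cluster.exe can move a group between nodes. The
following is a minimal sketch; the group and node names are placeholders for your own:

rem Move the clustered group to another node, then confirm its new owner:
cluster group "FileServices" /move:ServerTwo
cluster group "FileServices" /status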
Further Server Consolidation
With its built-in virtualization capabilities for both hosts and guests, Windows Server 2008
R2 offers some exceptional opportunities for server consolidation. This leads to fewer servers
to manage and fewer resources to secure. These servers, though, have a more complex
structure because they include more services than the single-purpose server model used in
the NT world.
But server consolidation does not necessarily mean a more complex server structure; it
can just mean more with less. For example, Microsoft has tested two-node server clusters
that manage upwards of 3,000 printers. This means that you could greatly reduce the
number of print servers in your organization, especially in large office situations, where
network connections are high-speed and printer management can be centralized. And
especially with Windows 7 clients, where print rendering is performed locally before being
sent to the server, server workloads are even lower.
The same goes for file servers. A single WS08R2 server can manage up to 5,000 domain
DFS roots, and a server cluster can manage up to 50,000 stand-alone DFS roots, another
opportunity for massive server consolidation.
Internet Information Services (IIS) also offers great consolidation opportunities because of
its architecture. Microsoft introduced Worker Process Isolation in IIS 6, meaning that any
hosted website could operate completely independently from all of the others. Though it
was available, this isolation process—a process dependent on the use of special application
pools for each service—was not very user-friendly. Now this process is automatic. When a
new site is created in IIS, a corresponding application pool of the same name is created
automatically. By default, each application pool will be isolated from all other application
pools, letting you run many more websites on the same IIS server than ever before.
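As an illustration of this per-site isolation, the following sequence creates a site with its own
dedicated application pool from the command line. It is a minimal sketch using the IIS
appcmd utility, with hypothetical site names, bindings, and paths:

rem Create a dedicated application pool, create the site, and bind the two together:
%windir%\system32\inetsrv\appcmd add apppool /name:"MySite"
%windir%\system32\inetsrv\appcmd add site /name:"MySite" /bindings:http/*:80:www.example.com /physicalPath:C:\inetpub\mysite
%windir%\system32\inetsrv\appcmd set app "MySite/" /applicationPool:"MySite"

The IIS Manager interface performs the pool creation automatically when you add a site; the
commands simply make the isolation explicit.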
This isolation allows the creation and operation of web gardens, special affinity groups
that can be assigned to specific server resources, such as central processing units (CPUs)
and memory. The web garden concept ensures that critical websites get the resources they
need even if they share server hardware. This again provides an excellent opportunity for
consolidation.
This means that you can look to further server consolidation by sharing workloads on
each of the VSOs you run. When thinking of creating a new server, first look to the possibility
of sharing that workload on an existing server. This will greatly reduce the number of servers
you need to manage in the long term.
Recovery Planning for Your Network
Even though you have done your best to ensure high availability for your servers and
services, disasters can always happen and servers can always go down. This is why it is
important to prepare for system recovery. No system is perfect, but the more protection
levels you apply to your systems, the less chance you have of losing data and experiencing
downtime. Therefore, you need to implement additional data protection strategies.
Backing up and restoring WS08R2 data is a complex process, but it has been greatly
simplified by new WS08R2 features, such as the Volume Shadow Copy. In fact, the built-in
Backup tool automatically initiates a shadow copy before taking a backup. Backups are an
important part of the recovery operation, but they are not its only component. WS08R2
offers several different recovery strategies. Some of these will be familiar to you if you’ve
worked with previous versions of Windows, but WS08R2 also includes new features that
are specific to this operating system.
Recovering systems is never an easy task. The best way to avoid having to recover
systems is by using a multilayered protection strategy. But if you do get to the stage where a
recovery operation is required, you must have a detailed strategy to follow. Like every other
operation in the VSO network, recoveries must be planned. Your recovery strategy must
begin with an understanding of the operating system’s own recovery capabilities. Next, once
you’re familiar with the tools the operating system offers to help you recover systems, you
can outline or adjust your recovery strategy. Finally, you can integrate your troubleshooting
strategy with the new or updated recovery strategy.
Recovery Strategies for Windows Server 2008 R2
Recovery strategies for WS08R2 depend, of course, on the type of problem you encounter,
but they include:
• Driver rollback If you install an unstable driver on your system, you can use the
driver rollback feature to restore the previous version of a driver, so long as you can
still log into your system. This is done by viewing the device properties in the
Device Manager (Server Manager | Diagnostics | Device Manager), moving to the
Driver tab, and selecting Roll Back Driver.
• Disabling devices You can also disable devices that are not operating properly.
Once again, this is done by moving to the Device Manager, locating the device,
right-clicking it, and selecting Disable from the context menu.
• Last Known Good Configuration Just like previous versions of Windows,
WS08R2 includes a Last Known Good Configuration startup choice. This reverts to
the last configuration saved in the registry before you applied changes. You can
access this option by pressing the F8 key during system startup. This will also give
you access to a number of different startup modes: Safe Mode, Safe Mode with
Networking, and so on. These are also operation modes you can use to repair
WS08R2 installations.
• Windows Recovery Environment (WinRE) This console allows you to perform
recovery operations, such as disabling services, copying device drivers or other files
to the system, and otherwise repairing an installation. Installing the console saves
you from requiring the Windows Server 2008 R2 original installation media to
perform a repair because it is listed as an operating system in your startup choices.
• Windows PE Use Windows PE to create a bootable device that will boot into a
character-based Windows environment. This is also an excellent recovery tool,
because Windows PE will give you access to both network drives and local New
Technology File System (NTFS) drives during your repair process.
• Volume Shadow Copy Users and administrators can restore any data file that is
still available within the shadow copy store through the Previous Versions tab of the
file’s properties. Administrators can even use this feature to recover entire VSOs.
• DFS replication VSOs are especially easy to recover because they are replicated to
other locations during their operation. If one fails, you simply launch the other copy.
• Windows Server Backup (WSB) Using the default backup tool included within
Windows Server 2008 R2, you can back up and restore data to removable media
or to spare disk drives. You can also back up entire systems to virtual hard drive
images for complete system protection (a sample command line follows this list).
• Third-party backup and restore tools If you find that Windows Server Backup is
not enough, there are a number of different third-party tools you can choose from.
When selecting a third-party product, there are three key elements you must
consider: integration with the Volume Shadow Copy APIs to take advantage of this
feature, complete system recovery from bootable media, and integration with Active
Directory Domain Services.
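As a complement to the Windows Server Backup entry above, backups can also be scripted.
The following is a minimal sketch using the wbadmin command line included with WS08R2;
the target drive letter is hypothetical:

rem Run a one-time backup of the system volume and all critical components to drive E:
wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet
rem List completed backups and the shadow copies on the system:
wbadmin get versions
vssadmin list shadows

Restores are then driven either from the Windows Server Backup console or with the
wbadmin start recovery command.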
Several of these strategies have already been covered; the rest are discussed in the sections
that follow.
System Recovery Strategies
A recovery strategy is based on the following activities:
• Service interruption is detected.
• Interruption has been categorized through a standard troubleshooting strategy.
• Risk has been evaluated and has identified the required level of response.
• The recovery plan for this level of risk is put into action.
• There is always a “Plan B” in case the main recovery plan does not work for some
reason.
• The results of recovery actions are fully tested to ensure everything is back to normal.
• Secondary recovery actions are performed; for example, broken servers that were
taken offline are repaired, or users are notified that their files are back online.
• The incident is documented and procedures are updated, if required.
It is important to detail the actual recovery plan for each type of situation. This is one reason
why risk evaluation is so important. You may not have time to document recovery processes
for every single disaster situation, but if you have taken the time to evaluate risks, you can
ensure that the most critical situations are documented. In the end, you will have multiple
recovery plans that will “plug into” your recovery strategy. All of these should be standard
operating procedures (SOPs).
In order to support your recovery plan, you’ll also need:
• An offsite copy of the plan to protect the plan itself
• Spare hardware components onsite
• Reliable and tested data backups
• Distanced, offsite storage for rotated backup media
• Available resources to perform systems recovery
In addition, you need to have either the eighth server role—the failsafe server—or a hot
site—a separate site that mirrors your production site and that can take over in the case of a
disaster. As you know, with resource pools, this secondary site is really easy to set up.
Troubleshooting Techniques
The final element of the system recovery process is a sound troubleshooting strategy. This
is the strategy your operations staff will use to identify the type of disaster they are facing.
It is essential that this strategy be clear and, once again, be standard because it is critical to
the recovery process. If the issue you are facing is wrongly identified, it may cause a worse
disaster. This strategy applies to both resource pools and virtual service offerings.
In general, help requests and problem reports should be dealt with through an organized,
scientific approach that treats system errors as always being causal; that is, problems don’t
just happen—they are deviations from a norm that have distinct, identifiable causes. The
troubleshooting technician’s job is to logically deduce causes of problems based on his or
her knowledge of how the system works. The best way to do this is to use a standard
procedure.
These steps outline a standard troubleshooting procedure:
1. Document appropriate information: for example, the time, date, machine, and user
information.
2. Document all relevant information concerning the problem. Refer to baseline
system operation information if necessary.
3. Create an itemized problem description. Answer these questions:
a. Is the problem reliably reproducible or random?
b. Is it related to the time of day?
c. Is the problem user-specific?
d. Is it platform-specific?
e. Is it version-specific?
f. Is it related to hard disk free space?
g. Is it network traffic-related?
h. What is it not?
4. Research similar occurrences in your internal troubleshooting databases. Review
the Windows Server 2008 R2 Help system, if it is available. Also, review external
troubleshooting databases such as Microsoft TechNet (http://technet.microsoft
.com) and the Microsoft Knowledge Base (http://support.microsoft.com). It is also
a good idea to draw on the expertise of your coworkers.
5. Create a reasonable hypothesis based on all of the available information.
6. Test the hypothesis and document the results.
7. If the test successfully cures the problem, document and close the case. If unsuccessful,
modify the hypothesis or, if necessary, create a new hypothesis. Repeat the hypothesize-then-test cycle until the issue is resolved.
Note that complex problems (more than one cause-effect relationship) may require several
iterations of steps 2 through 7.
Categorize Issues for Resource Pools and VSOs
One of the important aspects of troubleshooting is problem classification. It is often helpful
to categorize errors according to the circumstances surrounding the occurrence. Table 6
includes a nonexhaustive list of problem classes.
Problem Classes                 Key Characteristics

Resource Pools Only
  Peripherals                   Keyboard, video display, hardware components, drivers
  Network                       Adapter configuration, traffic, cabling, transmission devices

Resource Pools and Virtual Service Offerings
  Installation                  Procedure, media, hardware/software requirements, network errors
  Bootstrap                     Missing files, hardware failures, boot menu
  Security                      File encryption, access rights, permissions
  Service Startup               Dependent services, configuration
  Application                   Application-specific errors

Virtual Service Offerings Only
  Logon                         User accounts, validating server, registry configuration, network access
  User Configuration            Redirected folders, user profiles, group memberships
  Procedural                    User education, control

TABLE 6  Sample Problem Classifications
As you can see, your troubleshooting procedure is not only used in disasters. It can be
used in all troubleshooting situations. But for disasters, the key to the troubleshooting and
recovery strategy is the quality of your backups. This is why the backup strategy is one of
the most important elements of your system resiliency design.
Data Protection Strategies for Resource Pools
Backing up host servers means backing up three different types of objects:
• Operating system The partition that makes up drive C: and runs the host server
• Data partitions The data drive that contains the virtual service offerings
• Virtual machine contents The contents of the virtual service offerings must also
be backed up. This is discussed more in the next section.
Host servers are the simplest kind of server because they only run one major role:
virtualization. If you set up your infrastructure right, backing these machines up will be
relatively easy. As discussed previously, the ideal infrastructure for host servers is that of
a blade server connected to shared storage. Ideally, each and every drive that makes up
the server will be hosted within the shared storage infrastructure. This provides several
levels of defense against data or system loss:
• Each partition can either rely on the Volume Shadow Copy Service or the internal
snapshot tool provided with the storage unit to provide a first line of defense.
• The second line of defense is provided by the volume shadow copy of the virtual
machines located on the data drive.
• A third line of defense is provided through replication of the files that make up each
of the VSOs.
• A fourth line of defense is provided through failover clustering.
• The last line of defense is provided through backups of the disks that make up each
host system.
C AUTION You will need to add another disk partition to each host server in order to perform
backups through Windows Server Backup. WSB does not support backup to tape.
TIP Our recommendation: Obtain a third-party backup tool, such as Acronis Backup & Recovery 10,
since you will want comprehensive backup support for the host servers. Such tools offer a much
more complete set of features than Windows Server Backup.
Set up your schedules to protect systems on an ongoing basis. Ideally, perform full
backups once a week and then follow with differential backups every day, if your product
supports them; WSB does not. Differential backups take up more space
than incremental backups, but they are easier to recover from because they include all of the
changes since the full backup. In a recovery situation, you only need the most recent full
backup and the most recent differential to restore the system.
If you do decide to perform host system backups from WSB, use the following command
line. It performs a full system backup once a day at 6:00 P.M. to a disk partition:
wbadmin enable backup -addtarget:DiskID -schedule:18:00 -user:Username -password:Password
where DiskID is the ID number of the disk to back up to; use the DISKPART command to
identify the disk ID. Username and Password should belong to a service account with local
administration rights on the host server.
NOTE Destination drives should be reserved exclusively for backup purposes because all other data
will be erased.
System State Restores
In Windows Server 2008 R2, you can now perform a system state backup as well as perform
a system state restore, repairing a broken server. There are nine potential elements to a
system state restore. Some are always backed up, and others depend on the type of server
you are backing up. They are identified as follows:
• The system registry
• The COM+ class registry database
• Boot and system files
• Windows file protection system files
• Active Directory database (on domain controllers)
• SYSVOL directory (on DCs as well)
• Certificate services database (on certificate servers)
• Cluster service configuration information (on failover clusters)
• IIS metadirectory (on web application servers)
System state data is always restored as a whole; it cannot be segregated. To perform a system
state recovery from the command line, rely on the following command:
wbadmin start systemstaterecovery
Typing the command will provide you with available options. Most often, you will want
to map a network drive prior to the restore in order to link to a working backup.
Data Protection Strategies for Virtual Service Offerings
Backing up your virtual service offerings will mean backing up several types of information:
user data, corporate data, databases, documents, system state information for your servers,
and Active Directory data. As mentioned earlier, you can use either Windows Server Backup
or a third-party backup tool, like Acronis Backup & Recovery 10, to perform these backups.
Whichever one you use, make sure you apply a standard backup strategy, creating
backup sets of specific data types—for example, only user data in one backup set and
only system data in another. This will simplify the restoration process.
Data backups are rather straightforward: Select the data drive and back it up. Remember
that WS08R2 will automatically create a shadow copy before backing up the data. In fact, the
backup set is created from shadow copy data. This avoids issues with open files. The shadow
copy service also has special APIs that enable it to work with databases such as SQL Server
and Exchange Server, making the snapshots valid even for databases.
Basically, you should back up data and operating systems on a daily basis. Perform a
full backup once a week and then rely on differentials.
NOTE Windows Server Backup is not compatible with NTBackup, the backup tool from previous
versions of Windows. If you have data stored in an NTBackup and you want to restore it to a
WS08R2 server, download NTBackup from Microsoft at http://go.microsoft.com/fwlink/
?LinkId=82917.
You need to support your backup strategy with both a remote storage solution and
offsite media storage. Remember that you will need a third-party backup tool if you want to
back up to tape. You will need to ensure that you have a safe offline storage space for media.
You should rotate offsite media on a regular basis. For example, every second complete
backup should be stored offsite in a controlled environment.
A common schedule relies on a four-week retention strategy. This means that you retain
backup media for a period of four weeks. If you keep every second copy offsite, then even
a complete disaster can never cost you more than one week of data. In addition, your
archiving schedule will outline which copies you should keep offsite on a permanent basis.
Select a Third-Party Backup Tool
One of the most important aspects of the selection of a third-party backup tool is its
awareness of the components that make a server operate. Many backup tools, especially
backup tools that are designed to back up Windows data and store it on central, mainframe
servers, are “dump” backup tools; all they do is copy a file from the Windows server to a
central location. When you choose a backup tool, make sure it is Windows Server–aware.
A number of third-party backup solutions on the market are specifically designed for
Windows Server 2008 R2, including Acronis Backup & Recovery 10. They all meet specific
criteria, which must include:
• Being aware of system state data
• Being integrated with the Volume Shadow Copy Service, triggering a shadow copy
before launching a backup operation
• Enabling complete system recovery from a simple process
• Being Active Directory–aware
Meeting these four basic criteria is essential. There are other criteria, of course, such as
integrating with massive storage products that are supported by Windows, including
special drivers for SQL Server and Exchange, and so on, but the four listed here are the
core requirements for an intelligent, third-party backup solution.
Authoritative Active Directory Domain Services Restores
In terms of backup, and especially restoration, one of the most significant issues with WSB
(and WS08R2 in general) is Active Directory Domain Services. ADDS is a complex database.
Often, the best way to restore a downed domain controller is to rebuild the DC to a certain
level and then let multimaster replication take over to bring the server up to date. The
impact of this recovery strategy is that it taxes the network, especially if the DC is a regional
server. It all depends on the level to which you rebuild the server and the obsolescence of
the data it contains.
Fortunately, WS08R2 lets you stage DCs with offline media. This means that you can
create an ADDS database backup on removable media and use it to stage or restore DCs.
The more recent the media, the less replication is required. Recoveries of this type are not
too complex. These recoveries assume that the data within the other replicas of the directory
database is authoritative—it is valid data. It also means that there was no critical and
unreplicated data within the downed DC.
Issues arise when there is critical data within a downed DC, data that is not within the
other replicas, or when an error occurs and data within the directory is damaged and must
be restored. In this case, you must perform an authoritative restore. This is where you begin
to find the limitations of WSB.
Active Directory manages directory replication through the update sequence number
(USN). USNs can be thought of as change counters and represent the number of modifications
on a domain controller since the last replication. Values for objects and properties that have
the highest USN are replicated to other domain controllers and replace the values that are in
the copies of the directory database located on the target DCs. USNs are also used to manage
replication conflicts. If two domain controllers have the same USN, a timestamp is used to
determine the latest change. When you perform a normal ADDS restore, data that is restored
from backup is updated according to the information in other domain controllers; in fact, it is
overwritten if the USN for the data in other DCs is higher than the USN for the data in the
restored DC.
When you need to restore data from a crashed DC that included critical data—data that
is not found in the current version of the directory (for example, someone deleted an entire
OU and it has been replicated to all DCs), you need to perform an authoritative restore. In
this restore, the information you will recover from backup will take precedence over the
information in the directory, even if the USNs are of a lower value.
To perform an authoritative restore, you must begin with a normal restore. Then, once
the data is restored and the domain controller is still offline, you use the NTDSUTIL tool to
make the restore authoritative. The authoritative restore can include all or just a portion of
the restored Active Directory data.
As you can see, restoring information that can be deleted from a simple operator error
can be quite complex. This is one of the key reasons why you would consider using a
comprehensive backup technology, a technology that is specifically designed to integrate
and support all of Windows Server 2008 R2’s features.
Finalize Your Resiliency Strategy
Choosing the right data protection technology is a core element of your resiliency strategy,
but as you have seen here, it is not the only element. You need to design and implement the
proper processes and ensure they are followed. This is an excellent opportunity for the
design of standard operating procedures.
In addition, you must ensure that your data protection strategies complement your
system redundancy strategies. One of the key elements of the former is integrated and
regular testing: Your backup tapes or other media must be tested on a regular basis. Too
many organizations have performed backups for extended periods and made the fatal
mistake of never testing them or testing them only at the implementation phase of the
solution and foregoing tests after that. Don’t make this mistake!
Resiliency is at the core of any network. It is also the final preparation stage of the
parallel network. Now your network is ready to provide complete services to your
organization. Two key elements have yet to be covered before the parallel network is fully
operational:
• The migration of both users and data into the parallel network, as well as the
decommission of the legacy network
• The modification of operational roles within your IT organization to cover new and
sometimes integrated administrative activities for the new network
Both elements are covered in the next chapter of the Complete Reference book.
APPENDIX
Acronis and Virtualization
Hurricanes. Earthquakes. Floods. Power outages. Corrupted software installations.
Computer viruses. There is no way that you can avoid the inevitable natural and
man-made disasters that unexpectedly occur daily across the country—you can’t
cheat fate—but that’s no reason not to be prepared to survive, or even prosper, despite these
events.
In a corporate environment, worker productivity is at an all-time high. Ensuring that
end users’ work—digitized data, custom-built software applications, system configurations,
and all other computing resources—survives the next unexpected system failure is the only
way to guarantee that your company doesn’t fall victim to its own lack of preparation. The
key to surviving any corporate disaster is being prepared with a disaster recovery plan that
will allow your IT organization to rapidly restore your remote and local computers to a
known, working state in the shortest possible time—minutes rather than hours or days.
Disasters have different meanings to different companies. If a Web server fails for a small
company that does primarily retail, walk-in business, it might not constitute a disaster.
However, if that same company does all of its sales over the Internet, a failed Web server
could indeed constitute a disaster directly impacting its revenue-generating activities.
Surviving a disaster requires a lot more planning than simply making weekly, or even
daily, backups. A backup of your hard disk will do you little good if you cannot restore the
system quickly. File-based backups, while helpful for some uses, are inadequate if you need
to restore a server or workstation completely.
Some vendors offer software that will image open systems files, but cannot restore those
images to dissimilar hardware. This alone can defeat your disaster recovery process if your
backup cannot be restored. Remember, you don’t necessarily know to what hardware you’ll
be restoring your image. It might be a server you already own, or it might be a box at a
remote disaster recovery center, where you won’t even know the hardware configuration.
Still other vendors claim you can restore to different hardware, but first you need a
network manager to completely reconfigure the network, all user settings and configurations,
plus additional system network settings. And, in some cases, you’ll need to already have
specific software on your system before the failure. While you might actually restore to
different hardware, providing you have the appropriate device drivers, this downtime again
defeats the purpose of a disaster recovery strategy. Remember, the system to which you
restore your image might not be local. If the disaster recovery server is in Colorado and your
office is in New York, you might not be able to dispatch a technician to do the reconfiguration.
At that point, your restoration simply would not work.
Here are some recommendations on how you can recover from a disaster with the least
possible disruption to your network:
1. Develop a disaster recovery plan and test it frequently. Even if you have a plan, if
your staff cannot put it into operation or doesn’t understand how it works, then
what you have is essentially nothing effective. Make sure your staff can execute
your disaster recovery plan.
2. Back up your systems with a disk-imaging program that allows you to restore the
image to dissimilar hardware. Make sure the software is hardware-neutral. You
should be able to add new hardware drivers during the restore process. You also
should be able to take an image you already have and restore it to any hardware—
this makes your existing backups fully transportable.
3. Make sure your imaging software does not delete critical configuration information,
such as the Security ID Number (SID), network configurations, user configurations,
or other critical data that would require a network engineer to reconfigure.
4. Restoring an image on a remote server should not require dispatching a technician.
Make sure you can boot a remote server, even if its operating system has failed. The
ideal scenario should allow you to boot directly from the server’s image. Remember,
you’ll need your top engineers to repair the problem; restoring a server should fall to
IT technicians or even nontechnical staff.
5. Today’s corporate enterprises recognize that the future is in 64-bit hardware and
software. Make sure your disk-imaging software supports both the emerging and
legacy technologies.
6. Corporate networks often are made up of multiple network domains. Make sure
your imaging software can create and store images across domains.
7. Support for multiple databases is very important. At a minimum, your imaging
software should support Microsoft Volume Shadow Copy Service (VSS). For
compatible databases, this will allow you to suspend the database during the
creation of an image. It is also useful to have agents for other databases. You also
might consider purpose-built backup software for databases that works with your
existing disaster recovery software.
8. Console-based management is critical. Your management console must be able to
manage both servers and workstations across physical and virtual environments.
It should allow you to manage computer groups, backup policies, scheduling,
notification, and other policy issues. And, you should be able to run your console
from any system—laptop, desktop, or server.
9. Your imaging software should be able to store an image not only across the network,
but also on the same physical disk being imaged. This can be useful for the quick
restoration of a file or folder; it is not recommended as the only location for storing
an image, as it would be ineffective if the disk drive itself failed. However, for a
software failure or accidental deletion, this hidden partition could provide the fastest
recovery of a file or folder.
10. It is important that all IT employees, from entry-level technicians to managers, be
familiar and comfortable with creating and restoring images. If only your most
highly trained engineers can conduct backups, you might well be misallocating
your personnel resources. During an emergency, you want your most skilled
engineers solving the problem, not restoring the server.
These tips will help your IT department function more cohesively. But there’s more to
disaster recovery than simply backup. The key to effective disaster recovery is designing
your infrastructure to be poised for backup and disaster recovery. Rather than building
islands of automation that have disparate disaster recovery strategies, today you can design
a network that builds in security without breaking the corporate budget. One approach
that’s becoming more popular today is virtualization.
The Virtues of Virtualization
Virtualization requires a new way of looking at technology and how it can work for you.
There are four kinds of virtualization: server or desktop, storage, software, and network.
For the purposes of this chapter, we will look at server and storage virtualization, along
with migration issues.
There is nothing particularly new about virtualization; you’ve been doing it for years
without really thinking about it. Nearly every IT manager—and consumer—is familiar with
the most basic form of virtualization: partitions on a hard disk. When you create a partition
and assign it a drive letter, Windows sees that partition as a virtual hard disk. As far as
Windows is concerned, it is a separate, physical device. Only the software knows that this
drive is really just part of a single physical drive that goes by multiple names.
Now consider: What if that virtual hard disk had its own operating system and
applications? In fact, what if you had multiple partitions and each one of them had its own
operating system and applications? You could effectively run several computers with one
CPU, motherboard, video card, network card, and disk drive. That is the underlying logic
and business case for hardware virtualization.
In addition, many IT professionals are open to considering new approaches to backup
and recovery of virtual machines to get a better ROI and lower total cost of ownership. This
is why many IT managers are re-evaluating best practices to safeguard and secure digital
assets to ensure they meet business continuity goals along with their move to virtualization.
Why Virtualize?
There are many reasons why an IT department might move from a traditional, physical,
server-based infrastructure to one based on virtual technology. First and foremost is, of
course, money. Whether it’s the cost of cooling thousands of physical servers, powering the
boxes, or simply finding the real estate for them, a purely physical server infrastructure can
be expensive. It all comes back to doing more with less.
Virtual servers are an excellent choice for consolidating servers and reducing indirect
computer-specific IT costs. Prototyping and software deployment also are improved
dramatically using virtual technology. The challenge, however, is understanding how and
why the dichotomy of less is more—or physical to virtual (P2V)—works so well in computer
processing.
Another reason to move to virtual servers is systems management. Keeping track of
thousands of servers can be a management nightmare; managing just a few to a few dozen
can turn that nightmare into a viable operation for a scaled-down IT department. The same
applies to system protection. While small shops will continue to manage backup and recovery
through the installation of backup clients within each machine—virtual or physical—this
traditional protection method will quickly become both expensive and difficult to manage in
shops that run more than a dozen virtual machines.
Once you’ve made the move to the virtual world, your infrastructure never looks the
same again. Now you not only can move processing services from physical to virtual
machines, but you also can move them from virtual to other virtual (V2V) machines.
Finally, the operation comes full circle when you move processes from virtual to physical
machines (V2P) or physical to physical machines (P2P). What, you ask? Why step backwards
and move to a physical machine when virtual systems are fast and easy to use? We’ll see in
a moment.
Physical to Virtual
Virtualization is used for prototyping and development, as well as for production systems.
Prototyping is often required when companies are developing new applications and testing
them on various hardware configurations. Unlike physical servers, which might require a
full operating system reinstallation and reconfiguration if software crashes, virtual servers
can be re-created in minutes. As a result, this approach is very time- and cost-effective for
software development and systems configuration.
An important benefit to virtualizing hardware on production servers is the ability to do
more with less—create more computers without buying new hardware, run more servers
without increasing your capital or real estate investment to house the additional boxes, and
increase processing power without spending money on more electricity or air conditioning.
In fact, you might well spend less.
The beauty of this approach is that with multiple servers running various applications
simultaneously, you get all of the benefits of a server farm without the management headache.
The challenge, however, is making sure you have enough horsepower to run all of these
disparate servers, as well as having the appropriate hardware for each.
Virtual to Virtual
As noted earlier, software prototyping is a popular use for virtual systems. Often, these
prototyped systems will then be deployed to yet other virtual servers. When this happens,
you need a simple way to ensure that these servers will run on the new hardware.
Ideally, your deployment tool will be able to recognize the various virtual environments,
be they from Microsoft, VMware, Citrix, or similar products. Then, when the deployment
is done, the tool will make any necessary modifications automatically and without user
interaction.
Another way to accomplish this is to use a deployment tool that permits you to restore
the image of the server to dissimilar hardware. Such a universal restore tool needs to offer
the ability for IT managers to restore their own device drivers. Make sure that when you are
buying such software, you get an application that is integrated with your disk-imaging
software. That’s the best way to ensure that you can deploy images to any virtual server
from any virtual server.
Virtual to Physical
In cases where software is unable to run on a virtual machine, you need a mechanism to
move back to a physical environment. While virtualization vendors are happy to provide
tools for moving from the physical to the virtual environment, they tend not to offer tools to
move from the virtual back to the physical world. Indeed, few of the virtual operating
system providers yet offer tools to move from one virtual operating system to another,
though some can convert a virtual machine from another format to their own.
However, moving from a virtual to a physical server is an all-too-real possibility, and
it is critical that you have tools in place before you begin your deployment so that if you find
you need to move back to a physical server, you can do so swiftly and painlessly. Similarly,
you might well need a tool to move from one virtual environment to another.
Using Storage Virtualization
IT’s responsiveness to server virtualization is driving organizations to re-architect their
storage systems to gain greater portability and overall optimization of resources. Industry
analysts cite data growing 50 to 100 percent each year, adding to organizations’ growing
storage requirements. An effective way to manage this challenge is to integrate
data deduplication into the backup process to significantly reduce storage volumes by as
much as 90 percent. Through the effective use of storage virtualization, an IT manager of
any business operation, be it a regional credit union or a multinational bank, can make
backup files portable across physical and virtual environments, reducing the time and cost
involved in protecting data. A company can realize cost savings, both through lower
equipment costs and fewer staff hours for storage tasks, extending the benefits across all
aspects of datacenter operations.
As noted, data and applications traditionally were tied closely to the hardware platform,
requiring extra equipment that served no purpose aside from waiting for infrequent use.
Storage virtualization shrinks the necessary server footprint for each system in the datacenter,
requiring less staff backup and less planning coordination as well.
Traditionally, managing backup and recovery involved physical-to-physical transactions.
You had to back up from and restore to similar physical environments—such as one server of
a particular model number to an identical server. Equipment was central to all backup and
recovery operations, with compatibility being the fundamental barrier to rapid task completion.
The next step in the backup evolution was virtual servers, which improved success
rates, primarily because virtual hardware presents a predictable set of device drivers.
Even so, ensuring that a recovery target had the correct drivers for the video card,
network card, Small Computer System Interface (SCSI) host controller, or other connected
device required considerable manual effort and resource coordination to deliver real
process advantages. As a result, backing up to virtual servers was possible, but often
not practical.
The latest innovation in backup and recovery management is full storage and systems
virtualization, offering complete portability of backup files. Disk snapshots can be moved
with relative freedom among physical and virtual environments with the right virtualization
migration tools. This portability affords more flexibility and provides a vital key in a crowded
datacenter, and makes almost any available destination—physical or virtual—a viable backup
or recovery target.
Remember, these applications will be sharing the processing power of the server’s CPU,
so if you have one application on a virtual server that is using all of the CPU’s power, all of
the virtual servers on that machine will suffer reduced performance. One way to deal with
an application that is becoming too much of a CPU consumer is to move it back to its own
physical computer, where it will have dedicated hardware. Another is to move it to its own
host server. Once that program is moved off the virtual server, all of the virtual servers that
reside on that physical server will see improved performance.
A key to successful virtualization migration is system state. You can think of system state
as what a computer is like at a specific point in time, complete with all applications running,
Windows running, and the user doing whatever he or she is doing. If you can preserve the
system state when you take a snapshot of the hard disk, you have an exact duplicate of that
computer.
Let’s say you restore that image; the computer will be in exactly the same condition as
when you created the image—the same programs will be running, the same screen will be
displayed, and everything on that computer goes back to the very point in time when it was
in a known, good, working condition. Without restoring the system state, you effectively
have just a file-based backup that can restore files but does very little to restore the system
to a working condition.
Effective backup and recovery virtualization tools create full disk images in real time,
even while Windows files are open and in use, which removes the constraint of fixed
backup windows and makes full backups possible at any time. As backup and recovery
virtualization tools have matured, disk-imaging software that preserves a complete system
produces faster and more thorough backup files that facilitate more accurate data and
system recovery.
Cost savings thus result from:
• Faster backups
• Storage flexibility
• Rapid recovery
• Transportable images
• Smaller equipment footprints
• Ease of use
Faster Backups Virtualized backup and restore capabilities simplify both process-driven and
ad hoc data backups through easier equipment provisioning, less need for administrator
coordination, and faster execution. Backup file compression makes backups even faster, with
a smaller file footprint and less network strain.
Rapid Recovery Data recovery is where speed matters most, as it implies that a system is
unavailable. Storage media flexibility streamlines the recovery process, and full disk image
snapshots allow you to recover to nearly any available server. The compressed backup file
accelerates recovery through network transmission speed and the time needed to write the
file to the recovery target.
Storage Flexibility More efficient backup and recovery operations result because data can be
stored to or recovered from a broader variety of physical and virtual servers. Administrators
spend less time preparing a specific server or environment, as more targets are available.
Make sure to use tools with a library of device drivers built into the software, as these tools
will mitigate the need for manual intervention, and full disk imaging during the backup
process makes “bare-metal recovery” possible.
Transportable Images We talked about storage flexibility; transportable images give you
target flexibility. A transportable image, such as those from disaster recovery software
companies like Acronis Inc., can be restored to any hardware, regardless of the hardware’s
motherboard, CPU manufacturer, or system design; the only limitation is the system must
be x86-based and running a version of Microsoft Windows. By eliminating the need for
storing and maintaining expensive identical hardware that is seldom, if ever, used, a
company can instead spend its resources where they’re needed most, be it for additional
hardware, staffing, or non-IT expenditures.
Smaller Equipment Footprints Faster and smaller backup files do not tie up your networks or
require extra servers, and flexibility with storage media makes it unnecessary to carry extra
equipment to serve as a backup or recovery target. Easier provisioning leads to a smaller
platform footprint since you need less hardware to run the same number of machines.
Lower equipment expenditures yield ongoing savings through lower maintenance costs on
servers and software licenses.
Ease of Use Fewer errors and less administrator intervention help you do it right the first
time. Backups become more accurate and recoveries faster. Further, costly initial training and
ongoing refresher sessions are unnecessary. Emerging backup and recovery virtualization
tools use intuitive interfaces to help administrators learn the software quickly, making the
software useful more quickly and accelerating the realization of cost savings.
These six dimensions result in a lower total cost of ownership for all backup and recovery
operations, driving cost savings for every application and platform. Backup and recovery
virtualization is both inexpensive up front and more effective on an ongoing basis. Regardless
of how a company goes about implementing its virtualization strategy, remember that recovery
and being able to deploy an image to a new system is everything; having a backup—be it filebased or image-based—that cannot be restored to a new system is worthless.
Virtualization in an SMB Environment
Until recently, virtualization was utilized primarily in server consolidation projects at the
enterprise level. But times are changing and, increasingly, virtualization is being recognized
as a technology that can offer significant benefits to organizations of all sizes.
Virtualization is being widely embraced by the IT industry, and smaller organizations
are now looking to make use of the technology. This is resulting in an increased number of
affordable product offerings as vendors compete to capture a share of the emerging small to
midsize business (SMB) market. This significantly reduces a major obstacle to deploying
virtualization at the SMB level: cost.
SMBs are now perfectly positioned to reap the maximum benefits from virtualization.
In enterprise-level organizations, a complex infrastructure can challenge even the most
resourceful IT manager looking to employ virtualization. A successful implementation that
migrates some servers from physical to virtual can make management of a datacenter a less
onerous task.
SMBs, with smaller and less complicated infrastructures, are much less likely to
encounter impediments to migrating a much greater proportion—or even all—of their
physical infrastructure to a virtual environment.
But what are the benefits of migrating to a virtual infrastructure in an SMB environment?
Let’s take a look at how virtualization might work in a traditional small or midsized business.
Separating the hardware from the software running in a virtual environment provides
numerous benefits. For example, as shown in Figure A-1, many physical servers can be
consolidated into fewer physical servers. In this example, each physical box has a hypervisor
running as the host operating system, while each virtual server can be a different operating
system, such as Windows Server, Windows 7, or Linux. These are called guest operating
systems. Fewer physical servers result in a reduction in hardware costs, a reduction in server
sprawl, and, accordingly, a reduction in the amount of bought or leased floor space required
to accommodate servers. Consolidation of servers also results in reduced electricity
consumption and, accordingly, reduced utility bills, leading to a reduction in the total cost of
ownership (TCO) of each server, further reducing growing IT costs.
Since you will have fewer physical servers, management is simplified as backup and
disaster recovery plans become easier to create, manage, and maintain. Management is
easier, since each virtual machine environment—the guest operating system and everything
that runs on that virtual machine—is stored as a set of files on the host’s storage. This
virtual machine can be moved easily and quickly to a different physical server, enabling
zero-downtime upgrades and maintenance.
Virtual machines run in complete isolation from both the host machine and other virtual
machines, resulting in complete fault and error containment. Even if one virtual machine
crashes, it will have no impact on other virtual machines running on the same physical server.
FIGURE A-1 Multiple virtual servers can reside on the same physical server, reducing the number of physical
servers in your datacenter.
These examples are far from an exhaustive list of benefits, and many organizations will
find that virtualization offers numerous other benefits that will help to increase agility,
reduce costs, and reduce management complexity of the data contained within their
computer systems. It does, however, demonstrate just how cost-effective and efficient a
virtual environment can be in an SMB.
Planning for Virtualization
In order for any virtualization project to be successful and deliver the best possible return on
investment (ROI), extensive premigration planning is essential. An understanding of both
virtualization concepts and the potential pitfalls is also critical to success and, accordingly,
thorough research should be undertaken prior to the commencement of the project.
Establishing Goals and Objectives
Creating a clearly defined set of goals and objectives helps ensure that solutions fully meet
with an organization’s business continuity requirements and deliver the maximum possible
ROI. Establishing goals and objectives must, therefore, be the first step in any virtualization
project. Goals and objectives need to document as clearly as possible the expected results of
virtualization and should provide an answer to questions such as:
• What are the reasons for virtualization and/or server consolidation?
• How many physical servers could potentially be consolidated?
• Which solutions could be leveraged and which is the preferred solution?
• How will virtualization affect users?
• What are the security implications of migrating to a virtual infrastructure versus
maintaining the status quo?
• How much downtime would a migration require?
• What would the risks be and how could those risks be mitigated?
• What is the estimated cost of the project?
• What is the estimated ROI?
• What is your backup and disaster recovery strategy to cost-effectively protect
backup and disaster recovery for both physical and virtual systems?
Goals and objectives are not set in stone and, invariably, there will be modification and
refinement during the later stages of the planning process as the plan is put into place.
The objectives associated with each goal should be underpinned with measurable and
specific metrics, which will make clear both the reason for the goal and its expected results.
For example:
Goal                             Objective

Maximize hardware utilization    Achieve an X:Y consolidation ratio for servers.
                                 Achieve Z% hardware utilization for servers.

Reduce server sprawl             X% utilization for application servers will result in a Y%
                                 reduction in the purchasing rate for new servers.
                                 Review procedures for purchasing and deploying servers.
A common mistake many organizations make when planning virtualization is to focus
almost exclusively on hardware consolidation and, in doing so, miss out on the opportunity
to rationalize and consolidate other aspects of the business. For example, combining a
number of distributed systems into a single virtual infrastructure provides not only an
opportunity to consolidate the hardware, it also provides an opportunity to consolidate and
rationalize the job functions of support staff.
Similarly, business processes, such as purchasing and deployment decisions, need to be
reviewed. In order to derive the maximum benefit from virtualization and to obtain the best
possible ROI, an organization should consider not only what hardware can be combined,
but also ways to enhance efficiencies across a variety of resources including IT personnel
and business processes.
Creating a set of clearly defined goals and objectives will steer the virtualization project
in the right direction and ensure the deployment of a solution that is flexible, secure, cost-efficient, and fully meets both the present and future needs of the organization.
Analyzing the Current Infrastructure and Identifying Consolidation Candidates
In an enterprise-level organization, the process of identifying all the servers in the
infrastructure can prove challenging. In SMBs, inventorying is a much less daunting job.
However, once the infrastructure has been mapped, it’s still necessary to determine which
servers are candidates for virtualization, and this is no easy task.
Not everything that can be virtualized should be virtualized. Running high-volume or
high-performance applications in a virtual environment can result in I/O contention among
virtual machines, causing bottlenecks and nonoptimal performance. Similarly, a server
running an application with highly variable performance might not make a good candidate
for consolidation.
In some instances, nonoptimal performance might be considered a worthwhile tradeoff
in order to be able to consolidate, but, in other instances, any performance loss would be
unacceptable. The effects of consolidating each server must be considered individually. And
remember, consolidated servers should be reevaluated on a regular basis to ensure that
you’re getting the most performance from your infrastructure.
Remember that consolidation is not a one-way task. Servers that you consolidate today
can be migrated back to physical servers later should the demands on the virtual servers
change. Preparing for the future and recognizing that virtual-to-physical migrations are
possible is an important aspect of capacity planning.
Fortunately, a number of products are available that can help make the process of
identifying candidates for virtualization significantly more straightforward. One popular
offering from a major supplier of virtual operating systems is a combination of a local
application and a hosted service. In fact, it delivers far more information than many
organizations would actually need, but its pricing could put it out of reach of many SMBs.
Products from other vendors, while less fully featured, are nonetheless able to provide
an organization with all the information it needs to plan a virtual infrastructure and, given
their more modest pricing structures, are likely to be the preferred option for many SMBs.
These products can be used to gather detailed information about all the servers in the
network, including utilization statistics, whether they reside in a single location or in
multiple geographically dispersed locations. That information can then be used to make informed
choices about which servers potentially could be consolidated. Some products take things a
stage further and provide the ability to automatically generate consolidation plans based on
the optimal combination of workloads. Others feature scenario-based modeling capabilities
that enable different consolidation scenarios to be compared and contrasted based on
consolidation ratio, power and space requirements, and TCO.
Leveraging one of these applications can make inventorying and identifying consolidation
candidates a much speedier process. But this class of product only helps when you want to
identify applications and then migrate to a virtual environment.
Products such as those from Acronis, Inc. allow you to migrate your operating system,
applications, configuration files, network settings, and all other data on your system to and
from virtual machines. While some tools, particularly those from virtualization operating
systems vendors, aid you in going from the physical world to a virtual machine, there is
only a very small number of solutions, such as Acronis Backup & Recovery 10, that assist
you in going from one virtual hardware platform to another or from a virtual server to a
physical server.
Analyzing Applications
Because virtualization has only recently moved into the mainstream, many vendors have not yet
extended support to applications running in a virtual environment. Prior to deciding to migrate
an application into a virtual environment, an organization needs to ensure that the application
will still be supported by its manufacturer once it is moved to a virtual environment. If it isn’t,
that, too, must be included in your analysis of whether to move to virtualization.
That said, an organization is well served by considering products that enable both P2V
and V2P migration, such as Acronis Backup & Recovery 10, as an efficient remedy to the
problem. Such applications can move a problematic server quickly and easily to and from a
physical server. This results in the problem being reproduced in a physical environment and
enables the IT manager to obtain support for the troubled application from its vendor.
Consideration should also be given to whether an application:
• Duplicates functionality already present in other applications
• Has become obsolete
• Has only a small number of users
• Is unstable
• Is approaching the end of its supported lifecycle
Organizations also need to consider whether there is merit in devoting resources to
migrating and consolidating an application that fits any of these criteria. In many cases, it might
well be decided that the most economical option is to decommission and replace the application.
In cases where new, high-end hardware will be obtained to be used as the host server
for the virtual machines, you might want to consider repurposing one or more of the older
systems to run these orphaned or soon-to-be-obsolete applications.
Planning the Virtual Infrastructure
Once the physical infrastructure and applications have been inventoried and analyzed, an
organization can begin to plan the virtual infrastructure in greater detail. Consideration
should be given to:
• Virtualization hosts Is scaling up or scaling out the preferred option, or can
existing servers be utilized as hosts?
• Network What changes need to be made in order to ensure that the connectivity
needs of all virtual machines are met?
• Performance What enhancements need to be made to the storage and network
subsystems?
• Backup and recovery How will virtualization affect backup and disaster recovery
processes for remote and local systems? What changes should be made?
• Storage area network and network-attached storage What reconfiguration will be
necessary?
Evaluating Potential Solutions
There are an increasing number of server virtualization solutions on the market from such
companies as VMware, Microsoft, Citrix, and Parallels. When evaluating potential solutions,
an organization must conduct extensive research and base its decision on factors including
cost and management and migration capabilities.
Organizations might consider the history of the vendor. Given that deploying virtualization
is a large-scale change, many organizations might wish to entrust their infrastructures to a
company that has an established track record in the field of server virtualization.
Calculating ROI
While virtualization technologies have certainly become far more affordable, migrating to
a virtualized infrastructure is, nonetheless, still an expensive process. In order to establish
whether virtualization provides a realistic and cost-effective solution, organizations must
calculate the ROI. In enterprise-level organizations, ROI calculation will probably be
outsourced to a specialized consultancy firm, but this is an option that most SMBs would
likely find cost-prohibitive. Rather, it will likely be a do-it-yourself proposition.
ROI = [(Payback − Investment) / Investment] × 100
To calculate payback, an organization must establish the cost of maintaining the existing
infrastructure and deduct from that the estimated cost of maintaining the virtual infrastructure.
Establishing the cost of maintaining the current infrastructure is clear-cut—estimating the cost
of maintaining a virtual infrastructure is not so straightforward. Consideration should be given
to the potential savings that would result from the following:
• A reduced purchasing rate for new servers
• Reduced electricity consumption (including cooling costs)
• Reduced maintenance and management costs
• The termination of hardware leases
• Reduced (planned and unplanned) downtime
• A reduction in the number of operating system licenses required
• The resale of retired equipment
• A reduction in the amount of (bought or leased) space needed to accommodate servers
Once the payback has been calculated, determining the ROI is a straightforward process.
While the process of planning and deploying virtualization can be challenging, the
benefits of migration can be significant. Organizations are now beginning to use virtualization
technologies to better protect their most valuable asset: the data contained within their servers.
Other benefits include driving down hardware and software licensing costs, reduced utility
bills, simplifying and rationalizing management processes, minimizing expensive downtime,
and improved security of their computer systems.
As your company grows, your needs will change. Remember that an application you
virtualize today, such as a SQL database, might need to be moved back to a physical
environment later on as the database grows and requires more CPU cycles or higher
priority, or you may need to move an application back to a physical environment to obtain
support for an issue. When selecting best-of-breed tools for migrating to a virtual
environment, make sure they also support moving back to a physical server or migrating
to dissimilar hardware on a different virtual machine host.
We’ve touched on just some of the issues surrounding server- and storage-based
virtualization. Employing virtual technology requires a shift not just in how you use your
IT infrastructure, but also in how you think about it. Using virtualization rather
than a strictly physical infrastructure can provide you with a vast array of hardware
possibilities, but each possibility brings a corresponding challenge. Make sure that
when you are ready to start testing and deploying virtual servers, you have a viable image
of each server that you can restore.
The Acronis Approach
Acronis is a global provider of storage management software that enables corporations and
individuals to move, manage, and maintain digital assets. Acronis sells innovative solutions
for integrated backup, disaster recovery, and virtualization migration, thereby enabling
organizations to maintain business continuity and reduce downtime in computing
environments.
Acronis’ patented disk imaging and disk management technology has won broad
industry acclaim for data and system protection on Windows and Linux systems operating
in mixed physical and virtual environments.
What Is Acronis Backup & Recovery 10?
The award-winning Acronis Backup & Recovery 10 addresses today’s IT needs for integrated
backup and system recovery in a single solution, including must-have capabilities such as
data deduplication and virtual machine protection. These features are critical to protecting
your organization’s valuable digital assets at a lower total cost of ownership.
Acronis Backup & Recovery 10 provides organizations with a simple, cost-effective
solution to centrally manage all Windows and Linux systems backup and recovery activities
across mixed physical and virtual environments.
Based on patented disk imaging and bare-metal restore technologies, Acronis Backup
& Recovery 10 captures the entire hard drive contents, including the operating system,
applications, and data, to enable a fully recovered system in minutes, instead of hours or
days. In addition, with the Acronis Backup & Recovery 10 Universal Restore option, it is
easy to restore a backup image to different hardware, including virtual machines.
With Acronis Backup & Recovery 10, policy-based centralized management of all backup
and restore operations for remote and local Windows and Linux systems can be performed
from a single console located anywhere on the network. In addition, centralized dashboards
provide an overview of configured and running operations to enhance operational control
and decision making.
Customers benefit from:
• Acronis’ award-winning integrated backup and disaster recovery solutions, assuring
customers of our commitment to ongoing development and a product roadmap
• Integrated data deduplication∗ to significantly reduce storage and network costs,
lowering total cost of ownership
• Enhanced support for virtual environments, including comprehensive backup and
recovery for VMware ESX/ESXi, vSphere, Microsoft Hyper-V, Citrix XenServer,
and Parallels virtual environments
• One-stop protection for all of your Windows and Linux machines, in both their
virtual and physical forms, from a single, central management console
• Superior support for remote and mobile users, enabling easier recovery on the road
• License cost savings through backup and recovery of an unlimited number of VMs
on a single host, lowering cost of ownership
• Acronis Active Restore, which allows computers to be used while a restore is in
progress, enhancing system availability for end users
• Scalability from a single machine to thousands, supporting your business
growth while lowering total cost of ownership
The Acronis Backup & Recovery 10 product family includes the following solutions.
Acronis Backup & Recovery 10 Advanced Server
Acronis Backup & Recovery 10 Advanced Server provides integrated backup, disaster
recovery, and data deduplication to help IT organizations simplify and automate backup
and disaster recovery, including the ability to manage geographically distributed PCs,
laptops, workstations, and servers.
Eliminating duplicate copies of data from multiple computers during the backup process
results in substantial storage and network cost savings. Acronis’ data deduplication
capabilities provide IT professionals with file- and block-level software deduplication for
both workstations and servers, optimizing IT storage infrastructure and reducing costs
(a sketch of the underlying technique follows the list below).
The benefits of deduplication include:
• Reduce data volumes by up to 90 percent, saving valuable storage space.
• Eliminate the need for dedicated deduplication hardware, saving capital expense.
• Reduce network traffic to enhance communication efficiency.
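The Python sketch below illustrates the general idea behind block-level deduplication: split the data into fixed-size blocks, hash each block, and store each unique block only once. It is a simplified illustration of the technique, not Acronis’ actual implementation, and every name in it is hypothetical.

import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size

def deduplicate(data: bytes):
    """Split data into fixed-size blocks; store each unique block once."""
    store = {}   # hash -> block contents (the deduplicated store)
    recipe = []  # ordered hashes needed to reconstruct the original data
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # identical blocks are stored only once
        recipe.append(digest)
    return store, recipe

def reconstruct(store, recipe):
    """Rebuild the original data from the store and the recipe."""
    return b"".join(store[digest] for digest in recipe)

# Two backups sharing most of their content deduplicate well:
data = b"A" * 8192 + b"B" * 4096
store, recipe = deduplicate(data + data)  # back up the same content twice
assert reconstruct(store, recipe) == data + data
print(f"{len(recipe)} block references, {len(store)} unique blocks stored")
# Output: 6 block references, 2 unique blocks stored

Because only the unique blocks travel over the wire and land on disk, repeated backups of largely unchanged machines consume a fraction of the storage and bandwidth of full copies.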
Acronis Backup & Recovery 10 Advanced Server Virtual Edition
As IT managers take advantage of virtual machines’ greater computing efficiencies, they
need efficient ways to protect all physical and virtual assets and a simpler approach for
migrating physical workloads to virtual. Acronis Backup & Recovery 10 Advanced Server
Virtual Edition meets these needs with:
• Outstanding backup and recovery performance: no agents required on guest machines
• One low price per physical host, protecting an unlimited number of VMs per host
∗Acronis Backup & Recovery 10 Deduplication is a separately licensed module.
• A fast and safe physical-to-virtual (P2V) and virtual-to-virtual (V2V) migration path
• Policy-based centralized management of backup and recovery across physical and
virtual environments
Acronis Backup & Recovery 10 Server for Windows
Acronis Backup & Recovery 10 Server for Windows simplifies and automates backup and
disaster recovery processes for Windows servers across physical and virtual environments,
minimizing downtime and increasing productivity. It is designed to provide smaller
organizations with a cost-effective solution to locally manage backup and restore activities
on stand-alone Windows servers.
Acronis Backup & Recovery 10 for Microsoft Small Business Server
Acronis Backup & Recovery 10 Advanced Server SBS Edition helps small businesses recover
from disasters and system failures in minutes. It can restore a backup image to dissimilar
hardware, including VMs, and includes data deduplication to significantly reduce storage costs.
Acronis Backup & Recovery 10 Server for Linux
Backup and disaster recovery requires a new way of thinking about securing and storing
data on Windows and Linux systems in physical and virtual environments. Acronis Backup &
Recovery 10 for Linux supports the enterprise and community versions of all common
Linux distributions, with local and remote management consoles in a graphical interface.
Our solution provides the flexibility to perform backup and disaster recovery in mixed
environments from a centralized console, optimizing processes and administration.
Acronis Backup & Recovery 10 Workstation
Acronis Backup & Recovery 10 Workstation simplifies and automates backup and disaster
recovery processes of Windows desktops and laptops across physical and virtual
environments, minimizing downtime and increasing user productivity. It is designed to
provide small organizations with a cost-effective solution to locally manage backup and
restore activities on stand-alone Windows workstations.
Acronis Recovery for Microsoft Exchange
How long can your business operate without e-mail? For about as long as you can hold
your breath! With Acronis Recovery for Microsoft Exchange, IT managers can implement a
backup and recovery solution that performs fast database-level backups and recovers
individual messages with unparalleled speed.
Acronis Snap Deploy
Acronis Snap Deploy provides comprehensive system deployment using disk-imaging
technology, enabling IT professionals to deploy or restore servers and PCs quickly
and inexpensively.
To find out more about Acronis Backup & Recovery 10 products:
Call +1 877 669-9749
E-mail: sales@acronis.com
Get a free 15-day trial at www.Acronis.com/TryNow
Copyright © 2000-2010 Acronis, Inc. All rights reserved.
Are you out on a limb?
Thanks to Acronis, swinging between physical and virtual has never been easier.
Acronis® Backup & Recovery™ 10
A simple, yet powerful, integrated backup and disaster
recovery solution to ensure business continuity.
Key benefits:
• Complete Windows and Linux system recovery within minutes
• Integrated data deduplication to reduce storage and network costs
• Enhanced support for all major virtual platforms
• Agent-less backup of virtual machines, optimizing performance
• Centralized policy-based management console
• Scalability from a few machines to thousands
• Acronis Active Restore for better backup success for mobile users
• Disk, tape, and network share support
Download a fully functional 15-day trial now at
www.Acronis.com/TryNow
About Acronis®
Acronis is a leading provider of onsite and offsite backup, disaster recovery and
security solutions. Its patented disk imaging and management technology enables
corporations and individuals to protect digital assets in physical and virtual
environments. Acronis' backup, recovery, server consolidation and virtualization
migration software helps users protect their digital information, maintain business
continuity and reduce downtime. Acronis software is sold in more than 180
countries and available in 13 languages. For additional information, please
visit www.acronis.com. Follow Acronis on Twitter: http://twitter.com/acronis.
Acronis® and the Acronis logo are registered trademarks or trademarks of Acronis Inc.
in the United States and/or other countries.